sentences: sequence
labels: sequence
[ "Pre-trained language models (LMs) are currently integral to many natural language processing systems.", "Although multilingual LMs were also introduced to serve many languages, these have limitations such as being costly at inference time and the size and diversity of non-English data involved in their pre-training.", "We remedy these issues for a collection of diverse Arabic varieties by introducing two powerful deep bidirectional transformer-based models, ARBERT and MARBERT.", "To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation.", "ARLUE is built using 42 datasets targeting six different task clusters, allowing us to offer a series of standardized experiments under rich conditions.", "When fine-tuned on ARLUE, our models collectively achieve new state-of-the-art results across the majority of tasks ( 37 out of 48 classification tasks, on the 42 datasets).", "Our best model acquires the highest ARLUE score ( 77 . 40 ) across all six task clusters, outperforming all other models including XLM-R Large ( 3 . 4 larger size).", "Our models are publicly available at https://github.com/UBC-NLP/marbert and ARLUE will be released through the same repository.", "Language models (LMs) exploiting self-supervised learning such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019a) have recently emerged as powerful transfer learning tools that help improve a very wide range of natural language processing (NLP) tasks.", "Multilingual LMs such as mBERT (Devlin et al., 2019) and XLM-RoBERTa (XLM-R) (Conneau et al., 2020) have also been introduced, but are usually outperformed by monolingual models pre-trained with larger vocabulary and bigger language-specific datasets (Virtanen et al., 2019; Antoun et al., 2020; Dadas et al., 2020; All authors contributed equally. 
de Vries et al., 2019; Le et al., 2020; Martin et al., 2020; Nguyen and Tuan Nguyen, 2020).", "Since LMs are costly to pre-train, it is important to keep in mind the end goals they will serve once developed.", "For example,", "(i) in addition to their utility on standard' data, it is useful to endow them with ability to excel on wider real world settings such as in social media.", "Some existing LMs do not meet this need since they were trained on datasets that do not sufficiently capture the nuances of social media language (e.g., frequent use of abbreviations, emoticons, and hashtags; playful character repetitions; neologisms and informal language).", "It is also desirable to build models able to", "(ii) serve diverse communities (e.g., speakers of dialects of a given language), rather than focusing only on mainstream varieties.", "In addition, once created, models should be", "(iii) usable in energy efficient scenarios.", "This means that, for example, medium-to-large models with competitive performance should be preferred to large-to-mega models.", "A related issue is", "(iv) how LMs are evaluated.", "Progress in NLP hinges on our ability to carry out meaningful comparisons across tasks, on carefully designed benchmarks.", "Although several benchmarks have been introduced to evaluate LMs, the majority of these are either exclusively in English (e.g., DecaNLP (McCann et al., 2018), GLUE (Wang et al., 2018), SuperGLUE (Wang et al., 2019)) or use machine translation in their training splits (e.g., XTREME (Hu et al., 2020)).", "Again, useful as these benchmarks are, this circumvents our ability to measure progress in real-world settings (e.g., training and evaluation on native vs. translated data) for both cross-lingual NLP and in monolingual, non-English environments.", "Context.", "Our objective is to showcase a scenario where we build LMs that meet all four needs listed above.", "That is, we describe novel LMs that", "(i) excel across domains, including social media,", "(ii) can serve diverse communities, and", "(iii) perform well compared to larger (more energy hungry) models", "(iv) on a novel, standardized benchmark.", "We choose Arabic as the context for our work since it is a widely spoken language ( 400 M native speakers), with a large number of diverse dialects differing among themselves and from the standard variety, Modern Standard Arabic (MSA).", "Arabic is also covered by the popular mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020), which provides us a setup for meaningful comparisons.", "That is, not only are we able to empirically measure monolingual vs. multilingual performance under robust conditions using our new benchmark, ARLUE, but we can also demonstrate how our base-sized models outperform (or at least are on par with) larger models (i.e., XLM-R Large , which is 3 . 
4 larger than our models).", "In the context of our work, we also show how the currently best-performing model dedicated to Arabic, AraBERT (Antoun et al., 2020), suffers from a number of issues.", "These include", "(a) not making use of easily accessible data across domains and, more seriously,", "(b) limited ability to handle Arabic dialects and", "(c) narrow evaluation.", "We rectify all these limitations.", "Our contributions.", "With our stated goals in mind, we introduce ARBERT and MARBERT , two Arabic-focused LMs exploiting large-to-massive diverse datasets.", "For evaluation, we also introduce a novel AR abic natural L anguage U nderstanding E valuation benchmark ( ARLUE ).", "ARLUE is composed of 42 different datasets, making it by far the largest and most diverse Arabic NLP benchmark we know of.", "We arrange ARLUE into six coherent cluster tasks and methodically evaluate on each independent dataset as well as each cluster task, ultimately reporting a single ARLUE score.", "Our models establish new state-of-the-art (SOTA) on the majority of tasks, across all cluster tasks.", "Our goal is for ARLUE to serve the critical need for measuring progress on Arabic, and facilitate evaluation of multilingual and Arabic LMs.", "To summarize, we offer the following contributions:", "1. We develop ARBERT and MARBERT , two novel Arabic-specific Transformer LMs pre-trained on very large and diverse datasets to facilitate transfer learning on MSA as well as Arabic dialects.", "on 42 datasets across six different Arabic language understanding cluster tasks, thereby facilitating measurement of progress on Arabic and multilingual NLP.", "3. We fine-tune our new powerful models on ARLUE and provide an extensive set of comparisons to available models.", "Our models achieve new SOTA on all task clusters in 37 out of 48 individual datasets and a SOTA ARLUE score .", "The rest of the paper is organized as follows: In Section 2, we provide an overview of Arabic LMs.", "Section 3 describes our Arabic pre-tained models.", "We evaluate our models on downstream tasks in Section 4, and present our benchmark ARLUE and evaluation on it in Section 5.", "Section 6 is an overview of related work.", "We conclude in Section 7.", "We now introduce existing Arabic LMs.", "The term Arabic refers to a collection of languages, language varieties, and dialects.", "The standard variety of Arabic is MSA, and there exists a large number of dialects that are usually defined at the level of the region or country (Abdul-Mageed et al., 2020a, 2021a,b).", "A number of Arabic LMs has been developed.", "The most notable among these is AraBERT (Antoun et al., 2020), which is trained with the same architecture as BERT (De-vlin et al., 2019) and uses the BERT Base configuration.", "AraBERT is trained on 23 GB of Arabic text, making 70 M sentences and 3 B words, from Arabic Wikipedia, the Open Source International dataset (OSIAN) (Zeroual et al., 2019) ( 3 . 5 M news articles from 24 Arab countries), and 1 .", "5 B words Corpus from El-Khair (2016) ( 5 M articles extracted from 10 news sources).", "Antoun et al. 
(2020) evaluate AraBERT on three Arabic downstream tasks.", "These are (1) sentiment analysis from six different datasets: HARD (Elnagar et al., 2018), ASTD (Nabil et al., 2015), ArsenTD-Lev (Baly et al., 2019), LABR (Aly and Atiya, 2013), and ArSaS (Elmadany et al., 2018).", "(2) NER, with the ANERcorp (Benajiba and Rosso, 2007), and (3) Arabic QA, on Arabic-SQuAD and ARCD (Mozannar et al., 2019) datasets.", "Another Arabic LM that was also introduced is ArabicBERT (Safaya et al., 2020), which is similarly based on BERT architecture.", "ArabicBERT was pre-trained on two datasets only, Arabic Wikipedia and Arabic OSACAR (Suarez et al., 2019).", "Since both of these datasets are already included in AraBERT, and Arabic OSACAR 1 has significant duplicates, we compare to AraBERT only.", "GigaBERT (Lan et al., 2020), an Arabic and English LM designed with code-switching data in mind, was also introduced.", "2 3 Our Models 3.1 ARBERT 3.1.1 Training Data We train ARBERT on 61 GB of MSA text ( 6 . 5 B tokens) from the following sources: Books (Hindawi) .", "We collect and preprocess 1 , 800 Arabic books from the public Arabic bookstore Hindawi.", "3 El-Khair .", "This is a 5 M news articles dataset from 10 major news sources covering eight Arab countries from El-Khair (2016).", "Gigaword .", "We use Arabic Gigaword 5 th Edition from the Linguistic Data Consortium (LDC).", "4 The dataset is a comprehensive archive of newswire text from multiple Arabic news sources.", "OSCAR .", "This is the MSA and Egyptian Arabic portion of the Open Super-large Crawled Almanach coRpus (Suarez et al., 2019), 5 a huge multilingual subset from Common Crawl 6 obtained using language identification and filtering.", "OSIAN .", "The Open Source International Arabic News Corpus (OSIAN) (Zeroual et al., 2019) consists of 3 .", "5 million articles from 31 news sources in 24 Arab countries.", "Wikipedia Arabic .", "We download and use the December 2019 dump of Arabic Wikipedia.", "We use WikiExtractor 7 to extract articles and remove markup from the dump.", "1 https://oscar-corpus.com.", "2 Since GigaBERT is very recent, we could not compare to it.", "However, we note that our pre-training datasets are much larger (i.e., 15 . 6 B tokens for MARBERT vs. 4 . 3 B Arabic tokens for GigaBERT) and more diverse.", "3 https://www.hindawi.org/books/.", "4 https://catalog.ldc.upenn.edu/LDC2011T11.", "5 https://oscar-corpus.com/.", "6 https://commoncrawl.org.", "7 https://github.com/attardi/wikiextractor.", "We provide relevant size and token count statistics about the datasets in Table", "1. 3.1.2 Training Procedure Pre-processing.", "To prepare the raw data for pretraining, we perform light pre-processing.", "This helps retain a faithful representation of the naturally occurring text.", "We only remove diacritics and replace URLs, user mentions, and hashtags that may exist in any of the collections with the generic string tokens URL , USER , and HASHTAG , respectively.", "We do not perform any further preprocessing of the data before splitting the text off to wordPieces (Schuster and Nakajima, 2012).", "Multilingual models such as mBERT and XLM-R have 5 K (out of 110 K) and 14 K (out of 250 K) Arabic WordPieces, respectively, in their vocabularies.", "AraBERT employs a vocabulary of 60 K (out of 64 K).", "8 For ARBERT, we use a larger vocabulary of 100 K WordPieces.", "For tokenization, we use the WordPiece tokenizer (Wu et al., 2016) provided by Devlin et al. (2019).", "Pre-training.", "For ARBERT, we follow Devlin et al. 
(2019)'s pre-training setup.", "To generate each training input sequence, we use the whole word masking, where 15% of the N input tokens are selected for replacement.", "These tokens are replaced 80% of the time with the [MASK] token, 10% with a random token, and 10% with the original token.", "We use the original implementation of BERT in the TensorFlow framework.", "9 As mentioned, we use the same network architecture as BERT Base : 12 layers, 768 hidden units, 12 heads, for a total of 163 M parameters.", "We use a batch size of 256 sequences and a maximum sequence length of 128 tokens ( 256 sequences 128 tokens = 32 , 768 tokens/batch) for 8 M steps, which is approximately 42 epochs over the 6 .", "5 B tokens.", "For all our models, we use a learning rate of 1 e 4 .", "We pre-train the model on one Google Cloud TPU with eight cores (v 2 . 8 ) from TensorFlow Research Cloud (TFRC).", "10 Training took 16 days, for 42 epochs over all the tokens.", "Table 2 shows a comparison of ARBERT with mBERT, XLM-R, AraBERT, and MARBERT (see Section 3.2) in terms of data sources and size, vocabulary size, and model parameters.", "As we pointed out in Sections 1 and 2, Arabic has a large number of diverse dialects.", "Most of these dialects are under-studied due to rarity of resources.", "Multilingual models such as mBERT and XLM-R are trained on mostly MSA data, which is also the case for AraBERT and ARBERT.", "As such, these models are not best suited for downstream tasks involving dialectal Arabic.", "To treat this issue, we use a large Twitter dataset to pre-train a new model, MARBERT, from scratch as we describe next.", "To pre-train MARBERT, we randomly sample 1 B Arabic tweets from a large in-house dataset of about 6 B tweets.", "We only include tweets with at least three Arabic words, based on character string matching, regardless whether the tweet has non-Arabic string or not.", "That is, we do not remove non-Arabic so long as the tweet meets the three Arabic word criterion.", "The dataset makes up 128 GB of text ( 15 . 6 B tokens).", "Pre-training.", "We use the same network architecture as BERT Base , but without the next sentence prediction (NSP) objective since tweets are short.", "11 We use the same vocabulary size ( 100 K wordPieces) as ARBERT, and MARBERT also has 160 M parameters.", "We train MARBERT for 17 M steps ( 36 epochs) with a batch size of 256 and a maximum sequence length of 128 .", "Training took 40 days on one Google Cloud TPU (eight cores).", "We now present a comparison between our models and popular multilingual models as well as AraBERT.", "Our models compare to mBERT (Devlin et al., 2019), XLM-R (Conneau et al., 2020) (base and large), and AraBERT (Antoun et al., 2020) in terms of training data size, vocabulary size, and overall model capacity as we summarize in Table", "2. In terms of the actual Arabic variety involved, Devlin et al. 
(2019) train mBERT with Wikipedia Arabic data, which is MSA.", "XLM-R (Conneau et al., 2020) is trained on Common Crawl data, which likely involves a small amount of Arabic dialects.", "AraBERT is trained on MSA data only.", "ARBERT is trained on a large collection of MSA datasets.", "Unlike all other models, our MARBERT model is trained on Twitter data, which involves both MSA and diverse dialects.", "We now describe our fine-tuning setup.", "We evaluate our models by fine-tuning them on a wide range of tasks, which we thematically organize into six clusters: (1) sentiment analysis (SA), (2) social meaning (SM) (i.e., age and gender, dangerous and hateful speech, emotion, irony, and sar-casm), (3) topic classification (TC), (4) dialect identification (DI), (5) named entity recognition (NER), and (6) question answering (QA).", "For all classification tasks reported in this paper, we compare our models to four other models: mBERT, XLM-R Base , XLM-R Large , and AraBERT.", "We note that XLM-R Large is 3 .", "4 larger than any of our own models ( 550 M parameters vs. 160 M).", "We offer two main types of evaluation: on", "(i) individual tasks , which allows us to compare to other works on each individual dataset ( 48 classification tasks on 42 datasets), and", "(ii) ARLUE clusters (six task clusters).", "For all reported experiments, we follow the same light pre-processing we use for pre-training.", "For all individual tasks and ARLUE task clusters, we fine-tune on the respective training splits for 25 epochs, identifying the best epoch on development data, and reporting on both development and test data.", "12 We typically use the exact data splits provided by original authors of each dataset.", "Whenever no clear 12 A minority of datasets came with no development split from source, and so we identify and report the best epoch only on test data for these.", "This allows us to compare all the models under the same conditions ( 25 epochs) and report a fair comparison to the respective original works.", "For all ARLUE cluster tasks, we identify the best epoch exclusively on our development sets (shown in Table 10).", "splits are available, or in cases where expensive cross-validation was used in source, we divide the data following a standard 80% training, 10% development, and 10% test split.", "For all experiments, whether on individual tasks or ARLUE task clusters, we use the Adam optimizer ( ? 
) with input sequence length of 256 , a batch size of 32 , and a learning rate of 2 e 6 .", "These values were identified in initial experiments based on development data of a few tasks.", "13 We now introduce individual tasks.", "Datasets.", "We fine-tune the language models on all publicly available SA datasets we could find in addition to those we acquired directly from authors.", "In total, we have the following 17 MSA and DA datasets: AJGT (Alo-mari et al., 2017), AraNET Sent (Abdul-Mageed et al., 2020b), AraSenTi-Tweet (Al-Twairesh et al., 2017), ArSarcasm Sent (Farha and Magdy, 2020), ArSAS (Elmadany et al., 2018), ArSenD-Lev (Baly et al., 2019), ASTD (Nabil et al., 2015), AWATIF (Abdul-Mageed and Diab, 2012), BBNS & SYTS (Salameh et al., 2015), CAMel Sent (Obeid et al., 2020), HARD (Elnagar et al., 2018), LABR (Aly and Atiya, 2013), Twitter Abdullah (Ab-dulla et al., 2013), Twitter Saad , 14 and SemEval-2017 (Rosenthal et al., 2017).", "Details about the datasets and their splits are in Section A.1.", "Baselines.", "We compare to the STOA listed in Table 3 and Table 4 captions.", "For all datasets with no baseline in Table 3, we consider AraBERT our baseline.", "Details about SA baselines are in Section A.2.", "Results.", "To facilitate comparison to previous works with the appropriate evaluation metrics, we 13 NER and QA are expetions, where we use sequence lengths of 128 and 384 , respectively; a batch sizes of 16 for both; and a learning rate of 2 e 6 and 3 e 5 , respectively.", "Table 4 : SA results (II) in Acc.", "SOTA by Antoun et al. (2020).", "split our results into two tables: We show results in F 1PN in Table 3 and F 1 in Table 4.", "We typically bold the best result on each dataset.", "Our models achieve best results in 13 out of the 17 classification tasks reported in the two tables combined , while XLM-R (which is a much larger model) outperforms our models in the 4 remaining tasks.", "We also note that XLM-R acquires better results than AraBERT in the majority of tasks, a trend that continues for the rest of tasks.", "Results also clearly show that MARBERT is more powerful than than ARBERT.", "This is due to MARBERT's larger and more diverse pre-training data, especially that many of the SA datasets involve dialects and come from social media.", "We collectively refer to a host of tasks as social meaning .", "These are age and gender detection; dangerous, hateful, and offensive speech detection; emotion detection; irony detection; and sarcasm detection.", "We now describe datasets we use for each of these tasks.", "Datasets.", "For both age and gender, we use Task(classes) SOTA mBERT XLM-RBXLM-RL AraBERT ARBERT MARBERT Age(3) 51 .", "Table 5 : Results on social meaning tasks.", "F 1 score is the evaluation metric.", "(cid:63)", "Hassan et al. (2020), (cid:63)(cid:63) Djandji et al. (2020), Zhang and Abdul-Mageed (2019a), ?", ", Farha and Magdy (2020), Abdul-Mageed et al. (2020b).", "Arap-Tweet (Zaghouani and Charfi, 2018).", "We use AraDan (Alshehri et al., 2020) for dangerous speech.", "For offensive language and hate speech, we use the dataset released in the shared task (sub-tasks A and B) of offensive speech by Mubarak et al. 
(2020).", "We also use AraNET Emo (Abdul-Mageed et al., 2020b), IDAT@FIRE2019 (Ghanem et al., 2019), and ArSarcasm (Farha and Magdy, 2020) for emotion, irony and sarcasm, respectively.", "More information about these datasets and their splits is in Appendix B.1.", "Baselines.", "Baselines for social meaning tasks are the SOTA listed in Table 5 caption.", "Details about each baseline is in Appendix B.2.", "Results.", "As Table 5 shows, our models acquire best results on all eight tasks.", "Of these, MARBERT achieves best performance on seven tasks, while ARBERT is marginally better than MARBERT on one task (irony@FIRE2019).", "The sizeable gains MARBERT achieves reflects its superiority on social media tasks.", "On average, our models are 9 .", "83 F 1 better than all previous SOTA.", "Classifying documents by topic is a classical task that still has practical utility.", "We use four TC datasets, as follows: Datasets.", "We fine-tune on Arabic News Text (ANT) (Chouigui et al., 2017) under three pre-taining settings ( title only , text only , and title+text .), Khaleej (Abbas et al., 2011), and OSAC (Saad and Ashour, 2010).", "Details about these datasets and the classes therein are in Appendix C.1.", "Baselines.", "Since, to the best of our knowledge, there are no published results exploiting deep learning on TC, we consider AraBERT a strong baseline.", "Results.", "As Table 6 shows, ARBERT acquires best results on both OSAC and Khaleej, and the title-only setting of ANT.", "AraBERT slightly outperforms our models on the text-only and title+text Dataset(classes) mBERT XLM-RBXLM-RL AraBERT ARBERT MARBERT ANTText(5) 84 .", "Arabic dialect identification can be performed at different levels of granularity, including binary (i.e., MSA-DA), regional (e.g., Gulf , Levantine ), country level (e.g., Algeria , Morocco ), and recently province level (e.g., the Egyptian province of Cairo , the Saudi province of Al-Madinah ) (Abdul-Mageed et al., 2020a, 2021b).", "Datasets.", "We fine-tune our models on the following datasets: Arabic Online Commentary (AOC) (Zaidan and Callison-Burch, 2014), ArSarcasm Dia (Farha and Magdy, 2020), 15 MADAR (sub-task 2) (Bouamor et al., 2019), NADI-2020 (Abdul-Mageed et al., 2020a), and QADI (Abdelali et al., 2020).", "Details about these datasets are in Table D.1.", "Baselines.", "Our baselines are marked in Table 7 caption.", "Details about the baselines are in Table D.2.", "Results.", "As Table 7 shows, our models outperform all SOTA as well as the baseline AraBERT across all classification levels with sizeable margins.", "These results reflect the powerful and diverse dialectal representation of MARBERT, enabling it to serve wider communities .", "Although ARBERT is developed mainly for MSA, it also outperforms all other models.", "We fine-tune the models on five NER datasets.", "Datasets.", "We use ACE03NW and ACE03BN (Mitchell et al., 2004), ACE04NW (Mitchell et al., 2004), ANERcorp (Benajiba and Rosso, 2007), and TW-NER (Darwish, 2013).", "Table E.1 shows the 15 ArSarcasm Dia carries regional dialect labels.", "Baseline.", "We compare our results with SOTA presented by Khalifa and Shaalan (2019) and follow them in focusing on person (PER), location (LOC) and organization (ORG) named entity labels while setting other labels to the unnamed entity (O).", "Details about Khalifa and Shaalan (2019) SOTA models are in Appendix E.2.", "Results.", "As Table 8 shows, our models outperform SOTA on two out of the five NER datasets.", "We note that even though SOTA (Khalifa 
and Shaalan, 2019) employ a complex combination of CNNs and character-level LSTMs, which may explain their better results on two datasets, MARBERT still achieves highest performance on the social media dataset (TW-NER) .", "Datasets.", "We use ARCD (Mozannar et al., 2019) and the three human translated Arabic test sections of the XTREME benchmark (Hu et al., 2020): MLQA (Lewis et al., 2020), XQuAD (Artetxe et al., 2020), and TyDi QA (Artetxe et al., 2020).", "Details about these datasets are in Table F.1.", "Baselines.", "We compare to Antoun et al. (2020) and consider their system a baseline on ARCD.", "We follow the same splits they used where we fine-tune on Arabic SQuAD (Mozannar et al., 2019) and 50% of ARCD and test on the remaining 50% of ARCD (ARCD-test).", "For all other experiments, we fine-tune on the Arabic machine translated SQuAD (AR-XTREME) from the XTREME multilingual benchmark (Hu et al., 2020) and test on the human translated test sets listed above.", "Our baselines in these is Hu et al. (2020)'s mBERT Base model on gold (human) data.", "Results.", "As is standard, we report QA results in terms of both Exact Match (EM) and F 1 .", "We find that results with ARBERT and MARBERT on QA are not competitive, a clear discrepancy from what we have observed thus far on other tasks.", "We hypothesize this is because the two models are pre-trained with a sequence length of only 128 , which does not allow them to sufficiently capture both a question and its likely answer within the same sequence window during the pre-training.", "16 To rectify this, we further pre-train the stronger model, MARBERT, on the same MSA data as ARBERT in addition to AraNews dataset (Nagoudi et al., 2020) ( 8 . 6 GB), but with a bigger sequence length of 512 tokens for 40 epochs.", "We call this further pre-trained model MARBERT-v2 , noting it has 29 B tokens.", "As Table 9 shows, MARBERT-v2 acquires best performance on all but one test set , where XLM-R Large marginally outperforms us (only in F 1 ).", "We concatenate the corresponding splits of the individual datasets to form ARLUE , which is a conglomerate of task clusters.", "That is, we concatenate all training data from each group of tasks into a single TRAIN, all development into a single DEV, and all test into a single TEST.", "One exception is the social meaning tasks whose data we keep independent (see ARLUESM below).", "Table 10 shows a summary of the ARLUE datasets.", "17 We now briefly describe how we merge individual datasets into ARLUE.", "ARLUE Senti .", "To construct ARLUE Senti , we collapse the labels very negative into negative , very positive into positive , and objective into neutral , and remove the mixed class.", "This gives us the 3 classes negative , positive , and neutral for ARLUE Senti .", "Details are in Table A.1.", "ARLUESM .", "We refer to the different social meaning datasets collectively as ARLUESM .", "We do not merge these datasets to preserve the conceptual coherence specific to each of the tasks.", "Details about individual datasets in ARLUESM are in B.1.", "ARLUE Topic.", "We straightforwardly merge the TC datasets to form ARLUE Topic , without modifying any class labels.", "Details of ARLUE Topic data are in Table C.1.", "ARLUE Dia.", "We construct three ARLUE Dia categories.", "Namely, we concatenate the AOC and AraSarcasm Dia MSA-DA classes to form ARLUE Dia-B (binary) and the region level classes from the same two datasets to acquire ARLUE Dia-R (4-classes, region ).", "We then merge the country 16 In addition, 
MARBERT is not trained on Wikipedia data from where some questions come.", "17 Again, ARLUESM datasets are kept independent, but to provide a summary of all ARLUE datasets we collate the numbers in Table 10.", "Table 10 : ARLUE categories across the different data splits.", "(cid:63)", "Refer to Table B.1 for details about individual social meaning datasets in ARLUESM .", "Statistics are at the token level.", "Number of question-answer pairs.", "(21-classes, country ).", "Details are in Table D.1.", "ARLUENER & ARLUEQA .", "We straightforwardly concatenate all corresponding splits from the different NER and QA datasets to form ARLUENER and ARLUEQA , respectively.", "Details of each of these task clusters data are in Tables E.1 and F.1, respectively.", "We present results on each task cluster independently using the relevant metric for both the development split (Table 11) and test split (Table 12).", "Inspired by McCann et al. (2018) and Wang et al. (2018) who score NLP systems based on their performance on multiple datasets, we introduce an ARLUE score .", "The ARLUE score is simply the macro-average of the different scores across all task clusters, weighting each task equally.", "Following Wang et al. (2018), for tasks with multiple metrics (e.g., accuracy and F 1 ), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average.", "As Table 12 shows, our MARBERT-v2 model achieves the highest ARLUE score ( 77 . 40 ) , followed by XLM-RL ( 76 . 55 ) and ARBERT ( 76 . 07 ).", "We also note that in spite of its superiority on social data, MARBERT ranks top 4 .", "This is due to MARBERT suffering on the QA tasks (due to its short input sequence length), and to a lesser extent on NER and TC.", "English and Multilingual LMs.", "Pre-trained LMs exploiting a self-supervised objective with masking such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019b) have revolutionized NLP.", "Multilingual versions of these models such as mBERT and XLM-RoBERTa (Conneau et al., 2020) were also pre-trained.", "Other models with different objectives and/or architectures such as ALBERT (Lan et al., 2019), T5 (Raffel et al., 2020) and its multilingual version, mT5 (Xue et al., 2021), and GPT3 (Brown et al., 2020) were also introduced.", "More information about BERT-inspired LMs can be found in Rogers et al. (2020).", "Non-English LMs.", "Several models dedicated to individual languages other than English have been developed.", "These include AraBERT (An-toun et al., 2020) and ArabicBERT (Safaya et al., 2020) for Arabic, Bertje for Dutch (de Vries et al., 2019), CamemBERT (Martin et al., 2020) and FlauBERT (Le et al., 2020) for French, PhoBERT for Vietnamese (Nguyen and Tuan Nguyen, 2020), and the models presented by Virtanen et al. (2019) for Finnish, Dadas et al. (2020) for Polish, and Malmsten et al. (2020) for Swedish.", "Pyysalo et al. (2020) also create monolingual LMs for 42 languages exploiting Wikipedia data.", "Our models contributed to this growing work of dedicated LMs, and has the advantage of covering a wide range of dialects.", "Our MARBERT and MARBERT-v2 models are also trained with a massive scale social media dataset, endowing them with a remarkable ability for real-world downstream tasks.", "NLP Benchmarks.", "In recent years, several NLP benchmarks were designed for comparative evaluation of pre-trained LMs.", "For English, McCann et al. (2018) introduced NLP Decathlon (DecaNLP) which combines 10 common NLP datasets/tasks.", "Wang et al. 
(2018) proposed GLUE, a popular benchmark for evaluating nine NLP tasks.", "Wang et al. (2019) also presented SuperGLUE, a more challenging benchmark than GLUE covering seven tasks.", "In the cross-lingual setting, Hu et al. (2020) Dataset mBERT XLM-RBXLM-RL AraBERT ARBERT MARBERT MARBERT (v2) ARLUE Senti (cid:63) 79 .", "Table 11 : Performance of our models on the DEV splits of ARLUE.", "(cid:63)", "Metric for ARLUE Senti is F 1PN .", "ARLUESM results is the average score across the social meaning tasks described in Table B.2.", "Metric for ARLUEQA is Exact Match (EM) / F 1 .", "Table 12 : Performance of our models on the TEST splits of ARLUE (Acc / F 1 ).", "(cid:63)", "Metric for ARLUE Senti is Acc/ F 1PN .", "ARLUESM results is the average score across the social meaning tasks described in Table 5.", "Metric for ARLUEQA is Exact Match (EM) / F 1 .", "provide a Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark for the evaluation of cross-lingual transfer learning covering nine tasks for 40 languages ( 12 language families).", "ARLUE complements these benchmarking efforts, and is focused on Arabic and its dialects.", "ARLUE is also diverse (involves 42 datasets) and challenging (our best ARLUE score is at 77 . 40 ).", "We presented our efforts to develop two powerful Transformer-based language models for Arabic.", "Our models are trained on large-to-massive datasets that cover different domains and text genres, including social media.", "By pre-training MARBERT and MARBERT-v2 on dialectal Arabic, we aim at enabling downstream NLP technologies that serve wider and more diverse communities.", "Our best models perform better than (or on par with) XLM-R Large ( 3 . 4 larger than our models), and hence are more energy efficient at inference time.", "Our models are also significantly better than AraBERT, the currently best-performing Arabic pre-trained LM.", "We also introduced AraLU, a large and diverse benchmark for Arabic NLU composed of 42 datasets thematically organized into six main task clusters.", "ARLUE fills a critical gap in Arabic and multilingual NLP, and promises to help propel innovation and facilitate meaningful comparisons in the field.", "Our models are publicly available.", "We also plan to publicly release our ARLUE benchmark.", "In the future, we plan to explore self-training our language models as a way to improve performance following Khalifa et al. 
(2021).", "We also plan to investigate developing more energy efficient models.", "We gratefully acknowledges support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, Compute Canada and UBC ARC-Sockeye ( https://doi.org/10.14288/SOCKEYE ).", "We also thank the Google TFRC program for providing us with free TPU access.", "Although our language models are pre-trained using datasets that were public at the time of collection, parts of these datasets might become private or get removed (e.g., tweets that are deleted by users).", "For this reason, we will not release or redistribute any of the pre-training datasets.", "Data coverage is another important consideration: Our datasets have wide coverage, and one of our contributions is offering models that can serve more diverse communities in better ways than existing models.", "However, our models may still carry biases that we have not tested for and hence we recommend they be used with caution.", "Finally, our models deliver better performance than larger-sized models and as such are more energy conserving.", "However, smaller models that can achieve simply good enough' results should also be desirable.", "This is part of our own future research, and the community at large is invited to develop novel methods that are more environment friendly." ]
[ "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "method", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "abstain", "other", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "objective", "method", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain" ]
[ "Recent research in Visual Question Answering (VQA) has revealed state-of-the-art models to be inconsistent in their understanding of the world they answer seemingly difficult questions requiring reasoning correctly but get simpler associated sub-questions wrong.", "These sub-questions pertain to lower level visual concepts in the image that models ideally should understand to be able to answer the reasoning question correctly.", "To address this, we first present a gradient-based interpretability approach to determine the questions most strongly correlated with the reasoning question on an image, and use this to evaluate VQA models on their ability to identify the relevant sub-questions needed to answer a reasoning question.", "Next, we propose a contrastive gradient learning based approach called Sub-question Oriented Tuning (SOrT) which encourages models to rank relevant sub-questions higher than irrelevant questions for an < image, reasoning-question > pair.", "We show that SOrT improves model consistency by up to 6.5% points over existing approaches, while also improving visual grounding and robustness to rephrasings of questions.", "Current visual question answering (VQA) models struggle with consistency.", "They often correctly answer complex reasoning questions, i.e, those requiring common sense knowledge and logic on top of perceptual capabilities, but fail on associated low-level perception questions, i.e., those directly related to the visual content in the image.", "For e.g., in Fig 1, models answer the reasoning question Was this taken in the daytime? correctly, but fail on the associated perception question Is the sky bright? indicating that the models likely answered the reasoning question correctly for the wrong reason(s).", "In this work, we explore the usefulness of leveraging information about sub-questions , i.e., low-level perception questions relevant to a reasoning question, and irrelevant questions , i.e., any other questions about the image unrelated to the reasoning question, to improve consistency in VQA.", "Selvaraju et al. (2020) have studied this problem and introduced the VQA-Introspect dataset that draws a distinction between higher-level reasoning questions and lower-level perception sub-questions.", "We augment this dataset with additional perception questions from the VQAv2 dataset such that each < image, reasoning question > pair contains a set of relevant perception questions, which we refer to as sub-questions (e.g.,Is the sky bright? in Fig", "1) and irrelevant perception questions, which we refer to as irrelevant questions (e.g., Is the train moving? 
in Fig", "1) throughout this paper.", "We use Gradient-based Class Activation Mapping (Grad-CAM) vectors (Selvaraju et al., 2019a) a faithful function of the model's parameters, question, answer and image to propose an interpretability technique that determines the questions most strongly correlated with a reasoning question for a model.", "This is measured by ranking questions based on the cosine similarity of their Grad-CAM vectors with that of the reasoning question.", "We find that top-performing VQA models often rank irrelevant questions higher than relevant questions.", "Motivated by this, we introduce a new approach based on contrastive gradient learning to fine-tune a VQA model by enforcing relevant sub-questions to be ranked higher than irrelevant questions while answering a reasoning question.", "This is achieved by forcing the cosine similarity of the reasoning question's Grad-CAM vector with that of a sub-question to be higher than with that of an irrelevant question .", "We find that our approach improves the model's consistency, defined as the frequency with which the model correctly answers a sub-question given that it correctly answers the reasoning question.", "proach on visual grounding by comparing Grad-CAM heatmaps with human attention maps collected in the VQA-HAT dataset (Das et al., 2016).", "We find that our approach of enforcing this language-based alignment through better ranking of sub-questions also improves visual grounding.", "We also demonstrate that training VQA models by aligning Grad-CAM vectors helps in improving robustness to rephrasings of questions, as evaluated on the VQA-Rephrasings dataset (Shah et al., 2019).", "Visual Question Answering .", "The VQA task (Agrawal et al., 2015) requires answering a freeform natural language question about visual content in an image.", "Previous work has shown that models often do well on the task by exploiting language and dataset biases (Agrawal et al., 2017; Zhang et al., 2015; Ramakrishnan et al., 2018; Guo et al., 2019; Manjunatha et al., 2018).", "In order to evaluate the consistency of models, Selvaraju et al. (2020) collected a new dataset, VQA-Introspect, with human explanations via sub-questions and answers for reasoning questions in the VQA dataset.", "Model Interpretability .", "While prior work has attempted to explain VQA decisions in the visual modality (Selvaraju et al., 2019a,b; Qiao et al., 2017; Liang et al., 2019), the multi-modal task of VQA has a language component which cannot always be explained visually, i.e., visual regions can be insufficient to express underlying concepts (Goyal et al., 2016; Hu et al., 2017).", "Park et al. 
(2018) and Wu and Mooney (2019) generate textual justifications through datasets curated with human explanations.", "Our approach differs by using Grad-CAM vectors, which are fully self-contained and faithful to the model, requiring no additional parameters or datasets to interpret its decisions.", "In recent work on Human-AI collaboration (Bansal et al., 2019, 2021), a key finding is that optimizing solely for model accuracy does not always lead to better overall utility in real-world, high-stakes datasets where AI systems advise humans on making decisions.", "Instead, improvements on yardsticks related to the trustworthiness of predictions are important steps towards successfully deploying these algorithms.", "We believe that consistency, the core focus of our work, is an intrinsically important post-hoc explanatory metric and a proxy for common-sense reasoning which could lead to stronger collective performance in such collaborative settings.", "Aligning network importances.", "Ross et al. (2017) introduced an approach to train models with input-gradient penalties that led to the generation of faithful explanations and improved generalizability on image classifiers.", "Selvaraju et al. (2019b) introduced an approach to align visual explanations with regions deemed important by humans, thereby improving visual grounding in VQA models.", "In follow-up work, Selvaraju et al. (2020) introduced an approach to align attention maps for the reasoning question and associated perception sub-questions from VQA-Introspect to improve language-based grounding.", "In contrast to attention maps, our work encourages Grad-CAM vectors of a reasoning question to be closer to those of sub-questions and farther away from those of irrelevant questions.", "Intuitively, this means that we make the neurons used while answering a reasoning question similar to those used while answering a sub-question and dissimilar to those used while answering an irrelevant question.", "Our experiments show that this alignment improves the model's consistency and visual grounding.", "Grad-CAM.", "Grad-CAM, introduced by Selvaraju et al. (2019a), is a technique to obtain visual explanations from any CNN-based deep neural network.", "In this work, we adopt Grad-CAM to compute the contribution of a neuron at the layer in a VQA model where the vision and language modalities are combined.", "This is computed by first taking the gradient of the predicted output class score with respect to the neuron activations in the layer.", "We then point-wise multiply this with the corresponding activations to obtain our Grad-CAM vector.", "Specifically, if $y^c$ denotes the score of the ground-truth output class and $A_k$ the activations of layer $k$ of the model, the Grad-CAM vector $G^c_k$ is computed as follows: $G^c_k = \frac{\partial y^c}{\partial A_k} \odot A_k$ (1)", "Unlike Grad-CAM visualizations, these vectors are not visually interpretable, as they are not computed on the final convolutional layer of the CNN.", "Consistency in VQA models.", "As defined in Selvaraju et al.
(2020), the consistency of a VQA model refers to the proportion of sub-questions answered correctly, given that their corresponding reasoning questions were answered correctly.", "If a model is inconsistent, it is likely relying on incorrect perceptual signals or biases in the dataset to answer questions.", "Models that are consistent and based on appropriate perceptual signals are more likely to be reliable, interpretable and trustworthy.", "The key idea behind Sub-question Oriented Tuning (SOrT) is to encourage the neurons most strongly relied on (as assessed by Grad-CAM vectors) while answering a reasoning question (Was this taken in the daytime? in Fig 1) to be similar to those used while answering relevant sub-questions (Is the sky bright?) and dissimilar to those used while answering irrelevant questions (Is the train moving?).", "This forces the model to use the same visual and linguistic concepts while making predictions on the reasoning question and the sub-questions.", "Our loss has the following two components.", "Contrastive Gradient Loss.", "With the Grad-CAM vectors of the reasoning question ($G_R$), sub-question ($G_S$) and irrelevant question ($G_I$), we formalize our contrastive gradient loss $\mathcal{L}_{CG}$ as: $\mathcal{L}_{CG} = \max\Big(0,\ \overbrace{\tfrac{G_R \cdot G_I}{|G_R|\,|G_I|}}^{\text{cosine-sim}(G_R,\,G_I)} - \underbrace{\tfrac{G_R \cdot G_S}{|G_R|\,|G_S|}}_{\text{cosine-sim}(G_R,\,G_S)}\Big)$ (2)", "Binary Cross Entropy Loss.", "To retain performance of the model on the base task of answering questions correctly, we add a Binary Cross Entropy loss term ($\mathcal{L}_{BCE}$) that penalizes incorrect answers.", "Total Loss.", "Let $o_R$, $gt_R$, $o_S$, $gt_S$, $o_I$ and $gt_I$ represent the predicted and ground-truth answers for the reasoning, sub-, and irrelevant questions respectively, and $\lambda_1$, $\lambda_2$, $\lambda_3$ be tunable hyperparameters.", "Our total loss $\mathcal{L}_{SOrT}$ is: $\mathcal{L}_{SOrT} = \mathcal{L}_{CG} + \lambda_1 \mathcal{L}_{BCE}(o_R, gt_R) + \lambda_2 \mathcal{L}_{BCE}(o_S, gt_S) + \lambda_3 \mathcal{L}_{BCE}(o_I, gt_I)$ (3)", "Experiments: Dataset.", "Our dataset pools VQA-Introspect and VQAv2 such that for every reasoning question in VQA-Introspect, we have a set of <sub-question, answer> pairs and a set of <irrelevant question, answer> pairs.", "The training/val splits contain 54,345/20,256 <image, reasoning question> pairs with an average of 2.58/2.81 sub-questions and 7.63/5.80 irrelevant questions for each pair.", "Baselines.", "We compare SOrT against the following baselines:", "1) Pythia (Jiang et al., 2018), and", "2) SQuINT, in which Selvaraju et al.
(2020) fine-tuned Pythia with an attention alignment loss to ensure that the model looks at the same regions when answering the reasoning and sub-questions.", "1) Mean Precision@1 (MP@1).", "Proportion of <image, reasoning question> pairs for which the highest ranked question is a sub-question.", "2) Ranking Accuracy.", "Proportion of <image, reasoning question> pairs whose sub-questions are all ranked above their irrelevant questions.", "3) Mean Reciprocal Rank (MRR).", "Average value of the highest reciprocal rank of a sub-question among all <image, reasoning question> pairs.", "Higher is better.", "4) Weighted Pairwise Rank (WPR) Loss.", "For pairs of incorrectly ranked <sub, irrelevant> questions, this computes the differences of their similarity scores with the reasoning question.", "Averaged across all pairs, this computes the extent by which rankings are incorrect.", "Lower is better.", "Model Performance.", "1) Quadrant Analysis.", "a) R✓S✓: the pairs where reasoning and sub-questions are both correctly answered.", "b) R✓S✗: the pairs where the reasoning question is correctly answered, while the sub-question is incorrectly answered.", "c) R✗S✓: the pairs where the reasoning question is incorrectly answered, while the sub-question is correctly answered.", "d) R✗S✗: the pairs where reasoning and sub-questions are both incorrectly answered.", "2) Consistency.", "The frequency with which a model correctly answers a sub-question given that it correctly answers the reasoning question.", "3) Reasoning Accuracy.", "The accuracy on the reasoning split of the VQAv2 dataset, and", "4) Overall Accuracy.", "Accuracy on the VQAv2 validation set.", "We attempt to answer the following questions: Does SOrT help models better identify the perception questions relevant for answering a reasoning question?", "As described in Sec 3.2, the model ranks perception questions (sub-questions and irrelevant questions) associated with an <image, reasoning question> pair according to the cosine similarities of their Grad-CAM vectors with that of the reasoning question.", "As seen in Table 1, we find that our approach outperforms its baselines on nearly all the ranking metrics.", "We observe gains of 4-6% points on MP@1 and MRR, and 1.5-2.5% points on Ranking Accuracy.", "Likewise, the improvement in WPR, the soft metric that computes the extent by which rankings are incorrect, is a substantial 12% points over Pythia.", "This confirms that our approach helps better distinguish relevant sub-questions from irrelevant ones.", "Figure 2: An example of improvement in consistency between Pythia (top) and SOrT (below) brought about by better sub-question ranking.", "Does recognizing relevant sub-questions make models more consistent?", "We find that the improved ranking of sub-questions through SOrT improves consistency by 6.5% points over Pythia, 1.47% points over SQuINT, and 0.4% points over an approach that just uses sub-questions while discarding irrelevant questions¹.", "As seen in Table 1, the consistency gains are due to significant improvements in the R✓S✓ and R✓S✗ quadrants.", "This comes at the expense of a drop in overall
accuracy and reasoning accuracy by 1% point, likely due to the active disincentivization of memorizing language priors and dataset biases through our contrastive gradient learning approach.", "Gradient-based explanations have been shown to be more faithful to model decisions compared to attention maps (Selvaraju et al., 2019b).", "Our results confirm this by showing that aligning Grad-CAM vectors for reasoning and sub-questions makes models more consistent compared to SQuINT, which aligns their attention maps.", "Fig 2 shows an example of improved consistency using SOrT.", "The Pythia model answers its sub-question incorrectly.", "Our approach ranks the relevant sub-question higher than the irrelevant ones and answers it correctly, thus improving consistency.", "¹ These numbers are averaged values from 10-fold cross-validation runs on the val split.", "Does our approach also help with syntactic consistency, as tested on rephrased questions?", "To test whether our approach of aligning Grad-CAM vectors also helps make models consistent under rephrasings of questions, we use the VQA-Rephrasings dataset introduced in Shah et al. (2019), split into appropriate train/val/test splits containing 85,042/24,297/12,148 pairs of rephrased questions.", "We follow the same training protocols outlined earlier for each of our baselines, and retrain Pythia with the additional data.", "On the held-out test split of this dataset, we observe improvements in consistency: 80.73 (SOrT) v/s 79.98 (SQuINT) v/s 79.51 (Pythia).", "Interestingly, we observe a minor improvement in accuracy as well: 66.52 (SOrT) v/s 65.45 (SQuINT) v/s 66.38 (Pythia).", "This confirms the effectiveness of our approach for both semantic and syntactic consistency.", "Does enforcing language-based alignment lead to better visual grounding?", "To evaluate this, we compute visual grounding through Grad-CAM applied on the final convolutional layer.", "We then compute the correlation of Grad-CAM heatmaps with the validation split of the VQA-Human ATtention (VQA-HAT) dataset (Das et al., 2016), comprising 4,122 attention maps.", "This dataset contains human-annotated 'ground truth' attention maps which indicate the regions humans chose to look at while answering questions about images in the VQAv1 dataset.", "The proposed method to compare human and model-based attention maps in this work was to rank their pixels according to their spatial attention, and then compute the correlation between these two ranked lists.", "for Pythia and 0.060 ± 0.008 for SQuINT.", "These statistically significant improvements indicate that enforcing language-based alignment during training improves visual grounding on an unseen dataset.", "A qualitative example that demonstrates the superior visual grounding of SOrT compared to its baselines is shown in Fig 3.", "For the question 'Is the baby using the computer?' and its corresponding answer 'Yes', we see that the Grad-CAM heatmap generated by SOrT is closest to that of the human attention map.", "It is also the only heatmap in this example that actually points to the fingers of the child, which is the essential visual component for answering the question.", "In this work, we seek to improve consistency in VQA.", "We first develop language-based interpretability metrics to measure the relevance of a lower-level perception question while answering a higher-level reasoning question.", "Evaluating state-of-the-art VQA models on these metrics reveals that models often rank irrelevant questions higher than
relevant ones.", "We present SOrT (Sub-question Oriented Tuning), a contrastive gradient learning based approach for teaching VQA models to distinguish between relevant and irrelevant perceptual concepts while answering a reasoning question.", "SOrT aligns Grad-CAM vectors of reasoning questions with those of sub-questions , while distancing them from those of irrelevant questions .", "We demonstrate SOrT's effectiveness on datasets that test for semantic as well as syntactic consistency without major changes to accuracy, while also improving visual grounding.", "The Georgia Tech effort was supported in part by NSF, AFRL, DARPA, ONR YIPs, ARO PECASE, Amazon.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.", "The key ethical considerations for this work relate to fairness.", "Although not ubiquitous in application today, the progress of research in VQA necessitates work in the direction of transparency so as to build trust among users before these systems are widely deployed in the real world.", "Prior work in this domain has revealed VQA models to exploit visual and language based priors in the datasets they are trained on (Das et al., 2016; Agrawal et al., 2017; Zhang et al., 2015; Ramakrishnan et al., 2018; Guo et al., 2019; Manjunatha et al., 2018).", "Such models tend to compound the biases prevalent in these datasets, and could have detrimental effects on fairness.", "Our work could better explain these biases by identifying the most relevant perceptual concepts used by the model while answering reasoning questions.", "In addition, by improving consistency and visual grounding in VQA systems, our work contributes to mitigating some of these biases." ]
[ "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "result", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "method", "other", "other", "other", "other", "objective", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain" ]
[ "Representation learning is widely used in NLP for a vast range of tasks.", "However, representations derived from text corpora often reflect social biases.", "This phenomenon is pervasive and consistent across different neural models, causing serious concern.", "Previous methods mostly rely on a pre-specified, user-provided direction or suffer from unstable training.", "In this paper, we propose an adversarial disentangled debiasing model to dynamically decouple social bias attributes from the intermediate representations trained on the main task.", "We aim to denoise bias information while training on the downstream task, rather than completely remove social bias and pursue static unbiased representations.", "Experiments show the effectiveness of our method, both on the effect of debiasing and the main task performance.", "Supervised neural networks have achieved remarkable success in a wide range of natural language processing (NLP) tasks.", "The fundamental capability of these neural models is to learn effective feature representations (Bengio et al., 2013) for the downstream prediction task.", "Unfortunately, the learned representations frequently contain undesirable biases with respect to things that we would rather not use for decision making.", "We refer to such inappropriate factors as protected attributes (Elazar and Goldberg, 2018a).", "Biased information has serious real-world consequences.", "For example, concerns have been raised about automatic resume filtering systems giving preference to male applicants when the only distinguishing factor is the applicants' gender (Sun et al., 2019).", "In this paper, we focus on social bias, such as gender bias which is the preference or prejudice towards one gender over the other (Moss-Racusin et al., 2012), race bias and age bias.", "From the perspective of the debiasing target, previous debiasing works can be approximately clas-sified into two types, word embedding (Bolukbasi et al., 2016; Caliskan et al., 2017; Zhao et al., 2018; Manzini et al., 2019; Wang et al., 2020; Kumar et al., 2020) and sentence embedding (Xu et al., 2017; Elazar and Goldberg, 2018a; Zhang et al., 2018; Ravfogel et al., 2020).", "The former aims to reduce the gender bias in word embedding, either as a post-processing step (Bolukbasi et al., 2016) or as part of the training procedure (Zhao et al., 2018).", "The latter focuses on removing these protected attributes from the downstream intermediate representations (Elazar and Goldberg, 2018a; Ravfogel et al., 2020).", "In this paper, we consider the latter setting and focus on how to mitigate undesirable social bias from the encoded representations without hurting the performance of the main task.", "In terms of debiasing methods, previous models are either based on projection on a pre-specified, user-provided direction (Bolukbasi et al., 2016) or null-space (Xu et al., 2017; Ravfogel et al., 2020), or on adding an additional gender discriminator (Xie et al., 2017; Elazar and Goldberg, 2018a).", "The former first trains an intermediate feature extractor on the main task, then using a separate projection method to remove social bias from the representations, finally fine-tuning on the main task.", "The debiasing procedure can be regarded as static because of no direct interaction between the main task and the debiasing task.", "Therefore, these methods have no guarantee that the representations for predicting the main task do not contain any bias information.", "Existing work, (Gonen and Goldberg, 2019), has 
shown that these methods only cover up the bias and that, in fact, the information is deeply ingrained in the representations.", "Compared to these static debiasing methods, gender-discriminator-based methods (Elazar and Goldberg, 2018a; Zhang et al., 2018) use the traditional generative adversarial network (GAN) framework (Goodfellow et al., 2014) to distinguish protected gender attributes from encoded representations.", "However, they are notoriously hard to train (Ganin and Lempitsky, 2015).", "Elazar and Goldberg (2018a) have shown that the complete removal of the protected information is nontrivial: even when the attribute seems protected, different classifiers of the same architecture can often still succeed in extracting it.", "Hence, we aim to dynamically disentangle the social bias from the encoded representations while jointly training on the main task in a more stable way, rather than directly removing protected attributes.", "In fact, we show that bias information always remains even after adversarial debiasing and can be reconstructed from the encoded representations.", "The main goal of debiasing is to prevent downstream models from utilizing these social biases in the representations, that is, dynamic disentanglement instead of complete removal, as Fig 1 displays.", "In this paper, we propose an adversarial disentangled debiasing model to dynamically decouple social bias attributes from the intermediate representations trained on the main task.", "Our motivation is to denoise bias information while training on the downstream task, rather than to completely remove social bias and pursue static unbiased representations.", "Previous works (Elazar and Goldberg, 2018a; Gonen and Goldberg, 2019) show that even when debiasing models achieve high fairness (Hardt et al., 2016), a fair amount of protected information still remains and can be extracted from the encoded representations.", "Figure 2: The overall architecture of our proposed approach (input text, embeddings, context encoder, attentive pooling, main task and protected classifiers; steps: 1. protected forward, 2. debiasing backward, 3. main task forward, 4. update parameters).", "We argue that one can hardly remove all gender or race directions in the latent space, but can only preserve bias-free prediction on the downstream task.", "Specifically, we use a protected attribute classifier to generate model-agnostic adversarial worst-case perturbations to the representations in the direction that significantly increases the classifier's loss.", "Then we apply the perturbations to train the model of the downstream task end-to-end.", "The main difference between our method and GAN-based counterparts is that GANs suffer from unstable training due to the two-stage min-max procedure, while our method directly computes gradient-based perturbations to disentangle bias information from the representations.", "We hope to provide new insights and directions towards solving social bias issues.", "2 Approach 2.1 Problem Formulation Our main goal is to disentangle protected attributes from the representations of downstream tasks so that biased information cannot affect the model's decision on the main task.", "In other words, we aim to achieve fairness by equalizing the opportunity (Hardt et al., 2016) between individuals with different protected attributes (e.g.
gender/race).", "Given a set of input samples $x_i$ and corresponding discrete protected attributes $z_i \in \{1, \ldots, k\}$ (e.g., gender or race), we aim to learn unbiased representations.", "Footnote 1: Our source code is available at https://github.", "Fig 2 shows the overall architecture of our proposed method, including four core steps: protected forward, debiasing backward, main task forward, and update parameters.", "(1) Protected forward: We first pre-train a protected attribute classifier and then compute the classification cross-entropy loss $L_{protected}$ for each input sample $x$.", "(2) Debiasing backward: We maximize the loss $L_{protected}$ of the protected attribute classifier to obtain the adversarial decoupling perturbation $\delta$.", "(3) Main task forward: We then sum the original input $x$ and the perturbation $\delta$ to get a new adversarial sample $x_{adv}$.", "We forward the sample $x_{adv}$ to the main task classifier to compute the loss $L_{main}$ of the downstream task.", "(4) Update parameters: Finally, the overall model is updated by the sum of the two losses $L_{protected}$ and $L_{main}$.", "We will dive into the details of each procedure in the following section.", "Protected Forward: In Fig 2, we adopt a BiLSTM as the context encoder shared by the main task classifier and the protected attribute classifier.", "We first feed each token to an embedding layer to get the token embedding $e$; then a BiLSTM encoder is adopted to get the context-aware representation $h_i$ for each token $x_i$.", "Then, we use an attentive pooling layer to calculate the sentence embedding $h$.", "After that, a fully-connected layer followed by a softmax output layer is used to predict the protected attribute $y_i$.", "Finally, we can get the classification cross-entropy loss $L_{protected}$.", "In the experiments, we observe that pre-training the protected attribute classifier can effectively accelerate the whole training progress of debiasing.", "We also demonstrate that jointly training the protected attribute classifier and the main task classifier achieves superior performance in Section 4.2.", "Debiasing Backward: This is the primary step of our adversarial semantic disentanglement.", "Our main idea is to perform adversarial attacks (Goodfellow et al., 2015; Kurakin et al., 2016; Miyato et al., 2016; Jia and Liang, 2017; Zhang et al., 2019; Ren et al., 2019) to dynamically decouple social bias attributes from the intermediate representations trained on the main task.", "Footnote 3: If the protected attribute is continuous, we can apply regression objectives.", "Specifically, we need to compute a worst-case perturbation $\delta$ that maximizes the original classification cross-entropy loss $L_{protected}$ of the protected attribute classifier: $\delta = \arg\max_{\|\delta'\| \le \epsilon} L_{protected}(\theta, x + \delta')$ (1), where $\theta$ represents the parameters of the protected attribute classifier and $x$ denotes a given sample.", "$\epsilon$ is the norm bound of the perturbation $\delta$.", "However, due to model complexity, accurate computation of $\delta$ is costly and inefficient.", "Similar to Vedula et al. (2020) and Ru et al.
(2020), we apply Fast Gradient Value (FGV) (Rozsa et al., 2016) to approximate the worst-case perturbation: $\delta = \epsilon \cdot \frac{g}{\|g\|}$, where $g = \nabla_e L(f(e; \theta), Y)$ (2), and where $f$ represents the protected attribute classifier.", "We normalize $g$ and then use a small $\epsilon$ to ensure the approximation $\delta$ is reasonable.", "Section 4.3 validates that a proper value of $\epsilon$ can balance the debiasing effect and the main task performance.", "Finally, we can obtain the pseudo adversarial sample $x_{adv} = x + \delta$.", "Intuitively, we aim to obtain a debiased representation $x_{adv}$ by confusing the protected attribute classifier.", "Thus, the main task classifier can make a fair decision conditioned on the disentangled representation.", "Main Task Forward: After obtaining the pseudo adversarial sample $x_{adv}$, we forward it to the main task classifier to compute the loss $L_{main}$ of the downstream task, analogously to the protected forward step.", "We find in Section 4.4 that the location at which the adversarial perturbation is added plays a role in debiasing performance.", "In a nutshell, adding noise to the word embedding layer achieves the best debiasing performance.", "Update Parameters: Finally, we apply the two classification objectives to update the parameters of the model, as the dashed lines in Fig 2 show.", "Note that the loss $L_{protected}$ of the protected attribute classifier only updates the MLP and softmax layers, while the loss $L_{main}$ of the main task classifier updates all the model parameters, including the low-level encoding layers.", "This setting aims to avoid any negative effect of the protected attribute classifier on main task performance.",
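The four steps above can be summarized in a short training-step sketch. This is a minimal illustration under assumed PyTorch APIs; encoder.embed, encoder.contextualize, and the classifier modules are hypothetical names, and the real implementation additionally restricts which parameters each loss updates.

```python
import torch
import torch.nn.functional as F

def train_step(encoder, main_clf, prot_clf, optimizer,
               tokens, y_main, y_prot, epsilon=1.5):
    # 1. Protected forward: loss of the protected-attribute classifier.
    emb = encoder.embed(tokens)          # assumed word-embedding lookup
    h = encoder.contextualize(emb)       # assumed BiLSTM + attentive pooling
    loss_prot = F.cross_entropy(prot_clf(h), y_prot)

    # 2. Debiasing backward (FGV, Eq. 2): delta = eps * g / ||g||, with
    #    g the gradient of loss_prot w.r.t. the word embeddings.
    g = torch.autograd.grad(loss_prot, emb, retain_graph=True)[0]
    delta = epsilon * g / (g.norm(dim=-1, keepdim=True) + 1e-12)

    # 3. Main task forward on the pseudo adversarial sample x + delta.
    h_adv = encoder.contextualize(emb + delta.detach())
    loss_main = F.cross_entropy(main_clf(h_adv), y_main)

    # 4. Update parameters with the sum of the two losses.
    optimizer.zero_grad()
    (loss_prot + loss_main).backward()
    optimizer.step()
    return loss_main.item(), loss_prot.item()
```

Detaching delta treats the perturbation as a constant during the main-task update, which matches the idea of perturbing the input rather than optimizing through the attack itself.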
"We evaluate our debiasing method on the dialectal tweets (DIAL) corpus collected by Blodgett et al. (2016) in a controlled setup, and on the biography corpus (De-Arteaga et al., 2019) in a wild setup.", "The dialectal tweets corpus consists of 59.2 million tweets, where each tweet contains "race" information and emojis that correspond to specific emotion groups.", "According to the labels of race and sentiment, we split the data into four classes: African American English (AAE) speaker with "happy" sentiment, Standard American English (SAE) speaker with "happy" sentiment, AAE speaker with "sad" sentiment, and SAE speaker with "sad" sentiment.", "Following Elazar and Goldberg (2018b), we filter the corpus, leaving 176K tweets (44k for each class).", "We then divide them into 40k samples for training, 2k for development, and 2k for testing, following Ravfogel et al. (2020).", "In the controlled setup, we introduce a bias ratio over sentiment and race to control the imbalance proportion of samples in the four groups, following Ravfogel et al. (2020).", "For example, in the 0.8 condition, the AAE class contains 80% happy / 20% sad samples, while the SAE class contains 80% sad / 20% happy samples.", "In the 0.5 condition, all four categories contain the same number of samples.", "In all experiments, the imbalance factor of the development set and test set is set to 0.5.", "The biography corpus contains 393,423 biographies with the corresponding profession labels (28 classes) and gender (protected attribute) labels.", "We split the dataset into 255,710, 39,369, and 98,344 samples for training, validation, and testing, consistent with De-Arteaga et al. (2019) and Ravfogel et al. (2020).", "Baselines: We compare our model with the following baselines.", "Original is the main task classifier without any debiasing procedure, used as a baseline.", "INLP (Ravfogel et al., 2020) is a linear debiasing method, which removes the protected information from neural representations by iteratively training linear classifiers that predict the protected attributes.", "Footnote 4: Note that the original INLP results reported in the published version contain some mistakes. We reran the updated evaluation scripts according to the official code and report all the results for a fair comparison.", "Random Noise replaces the debiasing perturbation generated by the protected classifier with random noise.", "Implementation Details: To demonstrate the effectiveness of our method, we use the same model structure for the main task (sentiment classification) as Ravfogel et al. (2020), where the DeepMoji encoder (Felbo et al., 2017) and a one-hidden-layer MLP constitute the classifier.", "For simplicity, we use the same classifier structure for predicting the protected attributes.", "Both the unbalanced training data and the pre-trained DeepMoji model, which has been shown to encode demographic information, lead the downstream MLP classifier to make biased predictions.", "We then perform debiasing training for the main-task model, following the process described in Section 2.3, on the imbalanced training set with the given imbalance factor, and test the debiased model on the balanced test set.", "Besides, we follow Ravfogel et al. (2020) and evaluate our debiasing method on the biography corpus as a wild setup, to verify the validity of our method in a less artificial setting.", "In this wild setup, we construct a model structure similar to the DeepMoji encoder, with a two-layer bidirectional RNN as the encoder but without the attention operation.", "The encoder takes two types of input representations: FastText and BERT (Devlin et al., 2019).", "In the FastText experiments, we directly use the trained word embeddings provided by Ravfogel et al. (2020) to represent each biography as a sequence of vectors.", "In the BERT experiments, we use BERT as a sequence-to-sequence encoder to obtain the representation of each word in the sentence.", "We then feed the sentence representations into the model and perform the debiasing training process.", "For all the experiments, we train and test our model on a single 2080Ti GPU, and we use the AllenNLP framework (Gardner et al., 2017) to implement our model.", "The hidden size of the one-hidden-layer MLP classifier used in all of the above experiments is set to 300.", "In a controlled experiment, our debiasing method takes an average of ten minutes to run, and the total parameter count of our models is 23M, including a DeepMoji encoder, a main task classifier, and a protected classifier.", "In the wild experiment, the model size of the FastText experiment is 127M, which takes an average of 15 minutes to run.", "The model size of the BERT experiment is 114M, and it takes an average of 55 minutes to run, due to the use of BERT to encode the sentences.", "It's worth mentioning that our method converges in only one or two epochs, which is faster than other debiasing methods.", "In practice, we empirically find that the debiasing performance is best when the L2 norm of the perturbation is between 1/3 and 2/3 of the perturbed vectors' L2 norm.", "For example, in the first experiment, the L2 norm of the embedding vectors is around 4, so we can set the normalized scale to (1.2, 1.8).", "Metrics: To evaluate the bias in the model, following Ravfogel et al. (2020) and De-Arteaga et al. (2019), we calculate TPR-GAP to measure the difference (GAP) in the
True Positive Rate (TPR) between groups with different protected attributes, which can reflect the unfairness existing in NLP models: $TPR_{p,y} = P[\hat{Y} = y \mid P = p, Y = y]$ (3), $GAP^{TPR}_{p,y} = TPR_{p,y} - TPR_{p',y}$ (4), where $y$ is the main task label of the input representation $X$, and $p$, $p'$ denote the protected attribute $P$'s two values.", "Then we use TPR-GAP to measure the degree of bias, which calculates the root mean square of $GAP^{TPR}_{p,y}$ over all main task labels $y$: $GAP^{TPR}_{RMS} = \sqrt{\frac{1}{|N|} \sum_{y \in N} (GAP^{TPR}_{p,y})^2}$ (5), where $N$ is the label set of the main task (sentiment or profession).", "De-Arteaga et al. (2019) experimented on the biography corpus and showed that the indicator $GAP^{TPR}_{p,y}$ has a strong correlation with the percentage of a certain gender group in each profession $y$; therefore $GAP^{TPR}_{RMS}$ can reflect an overview of bias across all main task labels.", "We use $GAP^{TPR}_{RMS}$ to measure the bias existing in the models.",
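As a concrete reference for the metric just defined, here is a small NumPy sketch of Eqs. 3-5; the array names and the binary protected attribute are illustrative assumptions.

```python
import numpy as np

def tpr_gap_rms(y_true, y_pred, protected):
    # Root-mean-square TPR gap across main-task labels (Eqs. 3-5).
    # protected is assumed binary (e.g., two race or gender groups).
    labels = np.unique(y_true)
    gaps = []
    for y in labels:
        # TPR_{p,y} = P[pred = y | protected = p, true = y] per group p
        tprs = []
        for p in (0, 1):
            mask = (y_true == y) & (protected == p)
            tprs.append((y_pred[mask] == y).mean() if mask.any() else 0.0)
        gaps.append(tprs[0] - tprs[1])           # GAP^TPR_{p,y} (Eq. 4)
    return np.sqrt(np.mean(np.square(gaps)))     # GAP^TPR_RMS   (Eq. 5)
```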
"Table 1 displays the experimental results on the DIAL dataset under different data imbalance ratios, which reflect the degree of dataset bias.", "We analyze the results from two perspectives: TPR-GAP (debiasing) and Sentiment (main task).", "For TPR-GAP (debiasing), our method consistently outperforms the other baselines under all ratios, especially on the more biased datasets.", "This demonstrates the effectiveness of our proposed adversarial semantic disentanglement.", "We also observe that Random Noise can hardly mitigate social bias, which confirms the necessity of the protected attribute classifier.", "For the performance of the main sentiment classification task, our method stays close to the original baseline, while INLP suffers a severe drop under large ratios.", "The results show that our method better avoids the negative effect of the debiasing procedure on main task performance.", "To further evaluate the debiasing effect, we also show the results on the wild biography classification dataset in Table 2.", "Table 2 (fair classification on the Biographies corpus; columns: FastText / BERT): Accuracy (profession): Original 78.1 / 80.9, INLP 73.0 / 75.2, Ours 80.1 / 77.8; TPR-GAP: Original 0.184 / 0.184, INLP 0.089 / 0.095, Ours 0.082 / 0.092.", "Results show that our method achieves superior performance to the other baselines on both main task accuracy and debiasing TPR-GAP.", "Compared to the larger improvements on the DIAL dataset, we hypothesize that the degree of bias in the dataset affects the size of the improvement.", "In previous works, it is common to pre-train the sentence encoder in advance and keep the encoder fixed while applying the debiasing algorithm.", "However, it is unclear whether this conventional experimental setup is applicable to our approach.", "Figure 3: Performance comparison between fixed and non-fixed encoders (classification accuracy and bias decrement as the perturbation intensity varies from 0.4 to 1.5).", "Since our approach dynamically generates perturbations to decouple social bias from context via adversarial attacks, we expect the non-fixed encoder to generate perturbations of higher quality.", "To check this, we conduct two groups of experiments on the DIAL dataset, where one group uses a fixed encoder while the other keeps the contextual encoder trainable.", "Note that we set the bias ratio to 0.6 in both groups of experiments.", "Fig 3 shows the experimental results.", "In the upper plot of Fig 3, we observe that our approach with the non-fixed encoder consistently achieves better debiasing effectiveness than the fixed-encoder counterpart, by a large margin.", "As the perturbation intensity increases, both experimental settings achieve an increasingly better debiasing effect.", "On the other hand, as shown in the lower plot of Fig 3, the fixed-encoder approach suffers a severe drop in classification accuracy with increasing perturbation intensity.", "Meanwhile, the classification accuracy under the non-fixed encoder setting is still increasing, and even outperforms the fixed-encoder one when a relatively large perturbation intensity is applied.", "We argue that, with a non-fixed encoder, our approach can learn a high-quality perturbation for representation debiasing while continuously optimizing for the main task.", "As discussed in the previous section, our proposed adversarial disentangled debiasing method requires the protected classifier to learn an accurate decision boundary for the protected attributes, such that the debiasing perturbation approximates the direction that most effectively removes the model's discrimination of the protected attributes.", "Naturally, we have two options: either fix the parameters of the protected classifier to generate a relatively static debiasing perturbation, or train the protected classifier on-the-fly during the main classifier's training to offer a relatively dynamic perturbation.", "To verify which one performs better, we conduct two groups of experiments.", "In the static setting, we keep the parameters of the protected classifier fixed.", "Whether the parameters of the encoder are fixed or not, the debiasing perturbation generated by the protected classifier would be relatively static.", "It's worth noting that if the parameters of the encoder are fixed, the debiasing perturbation would be totally static.", "In the training-on-the-fly setting, we retain the gradient of the protected classifier and update its parameters together with the main task model (context encoder and main task
classifier).", "According to the conclusions in Section 4.1, we make the context encoder trainable in both settings and use the same objective to train the main classifier.", "The results are displayed in Fig 4. We find that both settings are able to debias on the DIAL dataset, showing the effectiveness of our approach in both settings.", "However, the training-on-the-fly strategy consistently outperforms the static strategy under various perturbation intensities.", "We hypothesize that the difference arises mainly because, under the training-on-the-fly strategy, the protected classifier has a chance to adjust its decision boundary when the context encoder updates, and thus continuously generates better dynamic debiasing perturbations via adversarial attacks.", "To explore how the perturbation intensity influences the debiasing effectiveness and the main task performance, we run multiple experiments, changing only the perturbation intensity.", "We experiment with a wide range of perturbation intensities, from 0.1 to 7.0.", "The experimental results are illustrated in Fig 5.", "Figure 5: The debiasing effectiveness (above) and the classification accuracy on the main task (below) of our proposed approach on the DIAL dataset, as the perturbation intensity increases from 0.1 to 7.0.", "From the upper plot, we find that the bias decrement rapidly increases in the beginning, as the intensity increases from 0.1 to 0.7.", "Then, over a wide range from 0.7 to 6.6, the bias decrement stays relatively stable, oscillating in a small range of 0.275-0.325, reflecting the stability of our approach.", "However, when the perturbation intensity exceeds some threshold (6.6 in this case), the bias decrement drops again.", "Meanwhile, as the perturbation intensity increases, the classification accuracy of the main task keeps falling (lower plot), indicating that a high-intensity perturbation also disturbs the main task, leading to low classification accuracy.", "The result provides a principle for choosing a suitable perturbation intensity: the minimal intensity that is still effective enough for debiasing.", "Another pivotal consideration for our dynamic disentangling approach is which representation space we should add the perturbation to.", "Typically, we have two choices:", "a) adding the perturbation to the sentence embedding space, or", "b) adding the perturbation to the word embedding space.", "The sentence embedding is closer to the output space, with the key information condensed into a single vector, while the word embeddings are closer to the input side, remaining separate for each token.", "To determine which performs better for social debiasing, we conduct experiments on the DIAL dataset with different bias ratios.", "Table 3 illustrates the experimental results.", "Table 3 (which representation space is best for debiasing; DIAL bias ratio 0.5 / 0.6 / 0.7 / 0.8): Accuracy: Original 0.75 / 0.75 / 0.74 / 0.71, To sent emb 0.75 / 0.72 / 0.72 / 0.72, To word emb 0.73 / 0.72 / 0.72 / 0.73; TPR-GAP: Original 0.14 / 0.23 / 0.31 / 0.40, To sent emb 0.09 / 0.14 / 0.19 / 0.21, To word emb 0.09 / 0.11 / 0.10 / 0.09.", "Comparing the results of 'To sent emb' and 'To word emb', we found that adding the
perturbation to the word embedding space often yields better debiasing results, especially when the bias ratio of the dataset is large.", "For example, when the bias ratio is 0.8, adding to the word embedding space achieves a $GAP^{TPR}_{RMS}$ of 0.09, while adding to the sentence embedding space achieves 0.21.", "We believe that, when applying our debiasing approach to a deeper representation space, the perturbation is also context-aware (since the context encoder is also involved when calculating the gradient) and thus more dynamic for the complex data distribution.", "As mentioned in Section 2.3, we need to calculate a cross-entropy loss $L_{protected}$ to generate the debiasing perturbation via FGV.", "Thus, during the training of the main task, we must obtain the protected attribute for each training example to calculate the cross-entropy loss.", "This severely limits the usefulness of our approach, as it may be difficult to obtain the ground-truth protected attribute when training the main task.", "To this end, we also propose to use the entropy loss (Zheng et al., 2020) as a substitute for the cross-entropy loss: $L_{protected} = H(P(y_{protected} \mid x))$ (6), where $H$ indicates the Shannon entropy and $P(y_{protected} \mid x)$ is the distribution output by the protected classifier.", "Maximizing this objective forces the protected classifier toward high entropy, which means the classifier is not confident and its output is distributed almost uniformly across all values of the protected attribute.", "In Table 4, we compare the debiasing effectiveness of using entropy with that of cross-entropy.", "Table 4 (accuracy and debiasing effect with different objectives of the protected classifier; DIAL bias ratio 0.5 / 0.6 / 0.7 / 0.8): Accuracy: Original 0.75 / 0.75 / 0.74 / 0.71, Entropy 0.74 / 0.71 / 0.70 / 0.72, Cross entropy 0.75 / 0.72 / 0.72 / 0.73; TPR-GAP: Original 0.14 / 0.23 / 0.31 / 0.40, Entropy 0.13 / 0.15 / 0.17 / 0.17, Cross entropy 0.09 / 0.11 / 0.10 / 0.09.", "From the table, we observe that using the entropy objective also works for debiasing, as the TPR-GAP drops compared with the baseline.", "However, the debiasing effect still cannot match that of our approach with cross-entropy.", "This seems reasonable, since the cross-entropy objective introduces extra information about the protected attribute.", "With the extra supervision signal, our approach generates more precise debiasing perturbations.",
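A small sketch of the label-free entropy substitute (Eq. 6) follows; the function name is illustrative. The FGV attack would maximize this quantity in place of the cross-entropy when protected labels are unavailable.

```python
import torch

def entropy_protected_loss(protected_logits):
    # Eq. 6: L_protected = H(P(y_protected | x)). Maximizing this loss
    # when computing the FGV perturbation pushes the protected classifier
    # toward a uniform (unconfident) output, with no labels required.
    log_p = torch.log_softmax(protected_logits, dim=-1)
    return -(log_p.exp() * log_p).sum(dim=-1).mean()
```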
"To more clearly show the performance differences of our model over datasets with varying degrees of bias, we introduce a new metric named the Relative Improve Metric (RIM), defined in terms of $Acc$, $Acc'$, $GAP$, and $GAP'$, where $Acc$ and $Acc'$ represent the main task accuracy of the model before and after debiasing, respectively, and $GAP$ and $GAP'$ represent the TPR-GAP indicator of the model before and after debiasing, respectively.", "RIM jointly reflects the stability of the main task and the debiasing performance of a debiasing method.", "We calculate the RIM indicator of our model and INLP based on the results in Table 1, and the new results are shown in Table 5.", "Table 5 (debiasing effect under our proposed Relative Improve Metric (RIM); DIAL bias ratio 0.5 / 0.6 / 0.7 / 0.8): INLP 0.143 / 0.164 / 0.362 / 0.473; Ours 0.357 / 0.482 / 0.650 / 0.814.", "We can observe that the stronger the bias in the dataset, the better the two methods perform.", "Besides, we can find that our debiasing method is more robust.", "To better understand the effectiveness of our method, we display a feature visualization of sentence representations in Fig 6.", "We can observe that the different race classes are no longer linearly separable after debiasing.", "Therefore, downstream tasks cannot make decisions conditioned on the race information in the representations.", "5 Related Work The objective of controlled removal of specific types of information from neural representations is tightly related to the task of disentanglement of representations (Bengio et al., 2013), that is, controlling and separating the different kinds of information encoded in them.", "Previous models are either based on projection onto a pre-specified, user-provided direction (Bolukbasi et al., 2016) or null-space (Xu et al., 2017; Ravfogel et al., 2020), on adding an additional gender discriminator (Xie et al., 2017; Elazar and Goldberg, 2018a), or on the impact of data decisions (Beutel et al., 2017).", "The former first trains an intermediate feature extractor on the main task, then uses a separate projection method to remove social bias from the representations, and finally fine-tunes on the main task.", "Compared to these static debiasing methods, gender-discriminator-based methods (Elazar and Goldberg, 2018a; Zhang et al., 2018) use the traditional generative adversarial network (GAN) (Goodfellow et al., 2014) to remove protected gender attributes from encoded representations.", "However, they are notoriously hard to train (Ganin and Lempitsky, 2015).", "Elazar and Goldberg (2018a) have shown that the complete removal of the protected information is nontrivial: even when the attribute seems protected, different classifiers of the same architecture can often still succeed in extracting it.", "Therefore, in this paper, we aim to dynamically disentangle the social bias from the encoded representations while jointly training on the main task in a more stable way, rather than directly removing protected attributes.", "The main goal of debiasing is to prevent downstream models from utilizing these social biases in the representations, that is, dynamic disentanglement instead of complete removal.", "In this paper, we focus on removing social bias in representation learning.", "We argue that the main goal of debiasing is to prevent downstream models from utilizing these social biases in the representation, that is, dynamic disentanglement instead of complete removal.", "Therefore, we propose an adversarial disentangled debiasing model to dynamically decouple social bias attributes from the intermediate representation trained on the main task.", "We perform extensive experiments and analysis to demonstrate the effectiveness of our method.", "We hope to provide new insights and directions towards solving social bias.", "In recent years, neural-network-based models have demonstrated remarkable performance in many natural language processing tasks and thus have been applied to a wide range of real-world applications.", "However, many works reveal that such models are easily affected by social bias and thus make incorrect and biased decisions.", "In domains with the greatest potential for societal impact, using such biased models in real-world applications is dangerous and raises serious problems of ethics and fairness.", "The social bias implicit in natural language processing models may be exposed and become a focus of public concern, an unstable factor that can cause social unrest.", "Meanwhile, some existing debiasing methods, although able to slightly reduce bias in such models, often cause great damage to the model's main task performance and are thus difficult to apply in
practice.", "This work proposes a new adversarial training method for end-to-end debiasing.", "Due to the robustness of the adversarial attack, the model can eliminate bias without losing much performance.", "This work was partially supported by the National Key R&D Program of China No. 2019YFF0303300 and Subject II No. 2019YFF0303302, DOCOMO Beijing Communications Laboratories Co., Ltd., and the MoE-CMCC \"Artificial Intelligence\" Project No. MCM20190701." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "objective", "objective", "abstain", "objective", "abstain", "method", "method", "method", "objective", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "result", "result", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "We propose a new architecture for adapting a sentence-level sequence-to-sequence transformer by incorporating multiple pretrained document context signals and assess the impact on translation performance of (1) different pretraining approaches for generating these signals, (2) the quantity of parallel data for which document context is available, and (3) conditioning on source, target, or source and target contexts.", "Experiments on the NIST ChineseEnglish, and IWSLT and WMT EnglishGerman tasks support four general conclusions: that using pretrained context representations markedly improves sample efficiency, that adequate parallel data resources are crucial for learning to use document context, that jointly conditioning on multiple context representations outperforms any single representation, and that source context is more valuable for translation performance than target side context.", "Our best multi-context model consistently outperforms the best existing context-aware transformers.", "Generating an adequate translation for a sentence often requires understanding the context in which the sentence occurs (and in which its translation will occur).", "Although single-sentence translation models demonstrate remarkable performance (Chen et al., 2018; Vaswani et al., 2017; Bahdanau et al., 2015), extra-sentential information can be necessary to make correct decisions about lexical choice, tense, pronominal usage, and stylistic features, and therefore designing models capable of using this information is a necessary step towards fully automatic high-quality translation.", "A series of papers have developed architectures that permit the broader translation model to condition on extra-sentential context (Zhang et al., 2018; Miculicich et al., 2018), operating jointly on multiple sentences at once (Junczys-Dowmunt, 2019), or indirectly conditioning on target side document context using Bayes' rule (Yu et al., 2020b).", "While noteworthy progress has been made at modeling monolingual documents (Brown et al., 2020), progress on document translation has been less remarkable, and continues to be hampered by the limited quantities of parallel document data relative to the massive quantities of monolingual document data.", "One recurring strategy for dealing with this data scarcityand which is the basis for this workis to adapt a sentence-level sequence-to-sequence model by making additional document context available in a second stage of training (Maruf et al., 2019; Zhang et al., 2018; Miculicich et al., 2018; Haffari and Maruf, 2018).", "This two-stage training approach provides an inductive bias that encourages the learner to explain translation decisions preferentially in terms of the current sentence being translated, but these can be modulated at the margins by using document context.", "However, a weakness of this approach is that the conditional dependence of a translation on its surrounding context given the source sentence is weak, and learning good context representations purely on the basis of scarce parallel document data is challenging.", "A recent strategy for making better use of document context in translation is to use pretrained BERT representations of the context, rather than learning them from scratch (Zhu et al., 2020).", "Our key architectural innovation in this paper is an architecture for two-staged training that enables jointly conditioning on multiple context types, including both the source and target language context.", "Practically, we can construct a weak context 
representation from a variety of different contextual signals, and these are merged with the source sentence encoder's representation at each layer in the transformer.", "To examine the potential of this architecture, we explore two high-level research questions.", "First, using source language context, we explore the relative impact on performance of different kinds of pretraining objectives (BERT and PEGASUS), the amount of parallel document training data required, and the size of the surrounding context.", "Second, recognizing that maintaining consistency in translation would seem to benefit from larger contexts in the target language, we compare the impact of source language context, target language context, and context containing both.", "Our main findings are (1) that multiple kinds of source language context improve the performance of document translation over existing contextual representations, especially those that do not use pretrained context representations; (2) that although fine-tuning using pretrained contextual representations improves performance, performance is strongly determined by the availability of contextual parallel data; and (3) that while both source and target language context provide benefit, source language context is more valuable, unless the quality of the target language context translations is extremely high.", "Our architecture is designed to incorporate multiple sources of external embeddings into a pretrained sequence-to-sequence transformer model.", "We execute this by creating a new attention block for each embedding we wish to incorporate and stacking these blocks.", "We then insert this attention stack as a branching path in each layer of the encoder and decoder.", "The outputs of the new and original paths are averaged before being passed to the feed-forward block at the end of the layer.", "Details are discussed below (Section 2.4), and the architecture is shown in Figure 1.", "The model design follows the adapter pattern (Gamma et al., 1995).", "The interface between the external model and the translation model takes the form of an attention block which learns to perform the adaptation.", "The independence between the models means that different input data can be provided to each, which makes extra information available during the translation process.", "In this work, we leverage this technique to: (1) enhance a sentence-level model with additional source embeddings; (2) convert a sentence-level model to a document-level model by providing contextual embeddings.", "Like BERT-fused (Zhu et al., 2020), we use pretrained masked language models to generate the external embeddings.", "We use two kinds of pretrained models: BERT (Devlin et al., 2019) and PEGASUS (Zhang et al., 2020).", "Although similar in architecture, we conjecture that these models will capture different signals on account of their different training objectives.", "BERT is trained with a masked word objective and a two-sentence similarity classification task.", "During training, it is provided with two sentences that may or may not be adjacent, with some of their words masked or corrupted.", "BERT predicts the correct words and determines whether the two sentences form a contiguous sequence.", "Intuitively, BERT provides rich word-in-context embeddings.", "In terms of machine translation, it's reasonable to postulate that BERT would provide superior representations of the source sentence and reasonable near-sentence context modulation.", "On the other hand, we expect it to fail to provide contextual
conditioning when the pair of sentences is not adjacent.", "This shortcoming is where PEGASUS comes in.", "PEGASUS is trained with a masked sentence objective.", "During training, it is given a document that has had random sentences replaced by a mask token.", "Its task is to decode the masked sentences in the same order they appear in the document.", "As a result, PEGASUS excels at summarization tasks, which require taking many sentences and compressing them into a representation from which another sentence can be generated.", "In terms of providing context for document translation, we conjecture that PEGASUS will be able to discover signals across longer ranges that modulate the output.", "To keep track of the type of embeddings being incorporated in a particular configuration, we use the notational convention Model_Side(Inputs).", "Model: B for BERT, P for PEGASUS, and D for Document Transformer (Zhang et al., 2018).", "Side: s for the source and t for the target language.", "Inputs: c for the current source (or target), i.e., $x_i$; p for the previous source (target); and n for the next one.", "Note that 3p means the three previous sources (targets), $(x_{i-3}, x_{i-2}, x_{i-1})$.", "When multiple embeddings are used, we include a '→' to indicate the order of attention operations.", "We can thus represent the BERT-fused document model proposed by Zhu et al. (2020) as B_s(p,c), since it passes the previous and current source sentences as input to BERT.", "The core of this work is to understand the benefits that adding a diverse set of external embeddings has on the quality of document translation.", "To this effect, we introduce two new models that leverage the output from both BERT and PEGASUS: Multi-source := B_s(c) → P_s(c); Multi-context := B_s(p,c) → B_s(c,n) → P_s(3p,c,3n).", "There are a few ways to integrate the output of external models into a transformer layer.", "We could stack them vertically after the self-attention block (Zhang et al., 2018), or we could place them horizontally and average all of their outputs together, like MAT (Fan et al., 2020).", "Our preliminary experiments show that the parallel attention stack, depicted in Figure 1, works best.", "Therefore, we adopt this architecture in our experiments.", "If we let A = B_s(p,c), B = B_s(c,n), and C = P_s(3p,c,3n) refer to the outputs of the external pretrained models, computed once per translation example, then the Multi-context encoder layer is defined as $R_\ell = \mathrm{AttnBlock}(E_{\ell-1}, E_{\ell-1}, E_{\ell-1})$, $S^a_\ell = \mathrm{AttnBlock}(A, A, E_{\ell-1})$, $S^b_\ell = \mathrm{AttnBlock}(B, B, S^a_\ell)$, $S_\ell = \mathrm{AttnBlock}(C, C, S^b_\ell)$, with $T_\ell = \mathrm{DropBranch}(R_\ell, S_\ell)$ during training and $T_\ell = \frac{1}{2}(R_\ell + S_\ell)$ otherwise, and $E_\ell = \mathrm{LayerNorm}(\mathrm{FeedForward}(T_\ell)) + T_\ell$.", "The intermediate outputs of the attention stack are $S^a_\ell \to S^b_\ell \to S_\ell$.", "To reproduce BERT-fused, we remove $S^a_\ell$ and $S^b_\ell$ from the stack and set $S_\ell$ directly to $\mathrm{AttnBlock}(A, A, E_{\ell-1})$.", "We use 'attention block' to refer to the attention, layer normalization, and residual operations: $\mathrm{AttnBlock}(K, V, Q) = \mathrm{LayerNorm}(\mathrm{Attn}(K, V, Q)) + Q$.", "Drop-branch (Fan et al., 2020) is defined as $\mathrm{DropBranch}(M, N) = \mathbb{1}(u \ge 0.5)\,M + \mathbb{1}(u < 0.5)\,N$, where $u \sim \mathrm{Uniform}(0, 1)$ and $\mathbb{1}$ is the indicator function.",
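A minimal sketch of the Multi-context encoder layer defined above, under assumed PyTorch modules; dimensions, head counts, and module names are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AttnBlock(nn.Module):
    # AttnBlock(K, V, Q) = LayerNorm(Attn(K, V, Q)) + Q
    def __init__(self, d_model, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, k, v, q):
        out, _ = self.attn(query=q, key=k, value=v)
        return self.norm(out) + q

class MultiContextEncoderLayer(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.self_block = AttnBlock(d_model)    # original path, R_l
        self.ext_blocks = nn.ModuleList(AttnBlock(d_model) for _ in range(3))
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model),
                                 nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm = nn.LayerNorm(d_model)

    def forward(self, e_prev, contexts):        # contexts = (A, B, C)
        r = self.self_block(e_prev, e_prev, e_prev)
        s = e_prev
        for block, ctx in zip(self.ext_blocks, contexts):
            s = block(ctx, ctx, s)              # S^a -> S^b -> S
        if self.training:                       # DropBranch(R, S)
            t = r if torch.rand(()) >= 0.5 else s
        else:
            t = 0.5 * (r + s)
        return self.norm(self.ffn(t)) + t       # E_l
```

Drop-branch randomly selects one of the two paths per training step, which keeps the original (restored) path competitive while the new context path is learned.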
"We evaluate our model on three translation tasks: the NIST Open MT Chinese-English task, the IWSLT'14 English-German translation task, and the WMT'14 English-German news translation task.", "Footnotes: 1. https://www.nist.gov/itl/iad/mig/open-machine-translation-evaluation 2. https://sites.google.com/site/iwsltevaluation2014/mt-track 3. http://statmt.org/wmt14/translation-task.html", "Table 1 provides a breakdown of the type, quantity, and relevance of the data used in the various dataset treatments.", "Table 1 (type, quantity, and relevance of parallel sentences used when training models for each dataset; columns: in-domain sent / in-domain doc / out-of-domain sent / out-of-domain doc): NIST 1.45M / 1.45M / - / -; IWSLT 173K / 173K / - / -; IWSLT+ 173K / 173K / 345K / 345K; WMT 4.7M / 200K / - / -; WMT+ 4.85M / 345K / - / -; WMT++ 4.85M / 345K / 1.63M / 1.63M.", "NIST provides the largest amount of in-domain contextualized sentence pairs.", "IWSLT'14 and WMT'14 are almost an order of magnitude smaller.", "See Appendix A for preprocessing details.", "NIST Chinese-English comprises LDC-distributed news articles and broadcast transcripts.", "We use the MT06 dataset as the validation set, and MT03, MT04, MT05, and MT08 as test sets.", "The validation set contains 1,649 sentences and the test set 5,146 sentences.", "Chinese sentences are frequently underspecified with respect to grammatical features that are obligatory in English (e.g., number on nouns, tense on verbs, and dropped arguments), making this a common language pair to study for document translation.", "IWSLT'14 English-German is a corpus of translated TED and TEDx talks.", "Following prior work (Zhu et al., 2020), we use the combination of dev2010, dev2012, tst2010, tst2011, and tst2012 as the test set, which contains 6,750 sentences.", "We randomly selected 10 documents from the training data for validation.", "We perform a data augmentation experiment with this dataset by additionally including news commentary v15.", "We denote this treatment as IWSLT+ and consider this to be out-of-domain data augmentation.", "Figure 1: Architecture of our Multi-context model (BERT and PEGASUS encoders feeding the encoder and decoder layers).", "WMT'14 English-German is a collection of web data, news commentary, and news articles.", "We use newstest2013 for validation and newstest2014 as the test set.", "For the document data, we use the original WMT'14 news commentary v9 dataset.", "We run two document augmentation experiments on this dataset.", "The first, denoted as WMT+, replaces news commentary v9 with the newer news commentary v15 dataset.", "The second augmentation experiment, denoted as WMT++, builds on the first by additionally incorporating the Tilde Rapid 2019 corpus.", "The Rapid corpus comprises European Commission press releases, and its language style is quite different from the style used in the News Commentary data.", "For this reason, we consider Rapid to be out-of-domain data for this task.", "We construct enhanced models with additional attention blocks and restore all previously trained parameters.", "We randomly initialize the newly added parameters and update only these during training.", "For a given dataset, we train a model on all the training data it is compatible with.", "This means that for document-level models only document data is used, while for sentence-level models both document and sentence data are used.", "In our work, this distinction only matters for the WMT'14 dataset, where there is a large disparity between the two types of data.", "Transformer models are trained on sentence pair data to convergence.", "For NIST and IWSLT'14 we use transformer base, while for WMT'14 we use transformer big.", "We use the following variants of BERT from the Google Research GitHub:
BERT-Base Chinese on NIST, BERT-Base Uncased on IWSLT'14, and BERT-Large Uncased (Whole Word Masking) on WMT'14.", "We pretrain three PEGASUS base models for the languages en, de, and zh using the Multilingual C4 dataset, as detailed in TensorFlow's dataset catalog.", "When training our models, we only mask a single sentence per training example and do not include a masked word auxiliary objective.", "We use the public PEGASUS large on the English side of WMT'14; for everything else, we use our own models.", "See Appendix B for batch size and compute details.", "To reduce the variance of our results and help with reproducibility, we use checkpoint averaging.", "We select the ten contiguous checkpoints with the highest average validation BLEU.", "We do this at two critical points: (1) with the transformer models used to bootstrap enhanced models; (2) before calculating the validation and test BLEU scores we report.", "We use the sacreBLEU script (Post, 2018) on our denormalized output to calculate BLEU.",
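The checkpoint-averaging procedure described above can be sketched as follows; the state-dict file format and the helper names are assumptions, not the authors' tooling.

```python
import torch

def average_checkpoints(paths):
    # Average parameters elementwise across checkpoints; assumes each
    # path is a saved PyTorch state dict.
    avg = None
    for p in paths:
        state = torch.load(p, map_location="cpu")
        if avg is None:
            avg = {k: v.float().clone() for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].float()
    return {k: v / len(paths) for k, v in avg.items()}

def best_contiguous_window(bleu_per_checkpoint, k=10):
    # Start index of the k contiguous checkpoints with the highest mean
    # validation BLEU, matching the selection rule described above.
    means = [sum(bleu_per_checkpoint[i:i + k]) / k
             for i in range(len(bleu_per_checkpoint) - k + 1)]
    return max(range(len(means)), key=means.__getitem__)
```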
"In this section, we present our main results and explore the importance of each component in the multi-context model.", "Additionally, we investigate the performance impact of document-level parallel data scarcity, the value of source-side versus target-side context, and the importance of target context quality.", "Table 2 compares our Multi-source and Multi-context models to baselines from related prior work: the transformer (Vaswani et al., 2017), the document transformer (Zhang et al., 2018), and the BERT-fused model for machine translation (Zhu et al., 2020).", "Table 2 (our two main findings; sacreBLEU on Test; columns: Zh-En NIST / En-De IWSLT / En-De WMT): Baselines: Transformer (sent) 46.69 / 28.68 / 28.46; Doc Transformer (doc, D_s(p,c)) 47.28 / 28.74 / -; BERT-fused (doc, B_s(p,c)) 50.08 / 29.44 / 28.35. This work: Multi-source (sent, B_s(c) → P_s(c)) 49.72 / 30.17 / 29.65; Multi-context (doc, B_s(p,c) → B_s(c,n) → P_s(3p,c,3n)) 51.07 / 29.97 / 28.11; Multi-context + target (doc, → P_t(3p,3n)) 50.93 / 30.10 / 28.26.", "We see that a multi-embedding model outperforms all the single-embedding models on each of the datasets we try.", "However, the best multi-embedding configuration varies by dataset.", "We find that incorporating target-side context does not improve performance beyond using source-side context alone.", "We will present our ablation studies in the subsequent sections to further shed light on the causes of this pattern of results.", "To preserve the value of the test set, we report results on the validation set for these experiments.", "In some language pairs, the source language is underspecified with respect to obligatory information that must be given in the target language.", "For example, in English every inflected verb must have tense, and this is generally not overtly marked in Chinese.", "In these situations, being able to condition on prior translation decisions would be valuable.", "However, in practice, the target context is only available post-translation, meaning there is a risk of cascading errors.", "In this section, we seek to answer two questions: (1) how does the quality of target context affect document-level translation, and (2) does incorporating high-quality target context into source-only models add additional value?", "To answer the first question, we evaluate the target context model P_t(3p,3n) using various translations as context.", "Table 3 shows the BLEU scores achieved by the target context models on the validation set.", "The lowest-quality context comes from using the output of the baseline transformer model to furnish the context (valid BLEU of 48.76); the middle level comes from a model that conditions on three views of source context (valid BLEU of 52.8); and the third is an oracle experiment that uses a human reference translation.", "We see that the BLEU score improves as the quality of the target context improves; however, the impact is still less than that of the Multi-context source model, even in the oracle case!", "Next, we explore whether leveraging both source and target context works better than only using source context.", "To control for the confounding factor of target context quality, we remove one of the references from the validation dataset and use it only as context.", "We believe this provides an upper bound on the effect of target context for two reasons: (1) it's reasonable to assume that, at some point, machine translation will be capable of generating human-quality translations; (2) even when this occurs, we will not have access to the style of a specific translator ahead of time.", "For these reasons, we calculate BLEU scores using only the three remaining references.", "We can see in Table 4 that adding human-quality target context to Multi-context only produces a 0.14 BLEU improvement.", "This challenges the notion that target context can add more value than source context alone.", "To assess the importance of the various embeddings incorporated in the Multi-context model, we perform an ablation study by adding one component at a time until we reach its full complexity.", "Table 5 shows the study results.", "Table 5 (ablation experiments on the NIST validation dataset to better understand the performance increase of the Multi-context model; Zh-En embedding ablation, valid BLEU): Transformer 48.76; B_s(c) 51.01; B_s(c) → P_s(c) 51.21; B_s(p,c) 51.45; B_s(p,c) → B_s(p,c) 51.94; B_s(p,c) → B_s(c,n) 52.31; B_s(p,c) → P_s(3p,c,3n) 52.30; B_s(p,c) → B_s(p,c) → B_s(c,n) 52.10; B_s(p,c) → B_s(c,n) → P_s(3p,c,3n) 52.80.", "We can see that much of the improvement comes from the stronger sentence-level model produced by adding BERT's encoding of the source sentence, a full 2.25 BLEU improvement.", "The benefit of providing contextual embeddings is more incremental, yet consistent.", "Adding the previous sentence gives us 0.44 BLEU, adding additional depth provides another 0.49, and including the next sentence adds 0.37.", "Finally, adding PEGASUS's contextual embedding on top of all this results in a boost of 0.49.", "Holistically, we can assign 2.45 BLEU to source embedding enrichment and 1.59 to contextual representations.", "NIST is a high-resource document dataset containing over 1.4M contextualized sentence pairs.", "In this section, we investigate to what extent the quantity of parallel documents affects the performance of our models.", "To do so, we retrain enhanced models with subsets of the NIST training dataset.", "It is important to note that the underlying sentence transformer model was not retrained in these experiments, meaning that these experiments simulate adding document context to a strong baseline, as done in Lopes et al.
"Figure 2 shows the BLEU scores of different models on the NIST validation set with respect to the number of contextualized sentences used for training.",
"We can see that it requires an example pool size of over 300K before these models outperform the baseline.",
"We conjecture that sufficient contextualized sentence pairs are crucial for document-level models to achieve good performance, which would explain why these models don't perform well on the IWSLT'14 and WMT'14 datasets.",
"Further, this pattern of results helps shed light on the inconsistent findings in the literature regarding the effectiveness of document context models.",
"A few works (Kim et al., 2019; Li et al., 2020; Lopes et al., 2020) have found that the benefit provided by many document context models can be explained away by factors other than contextual conditioning.",
"We can now see from Figure 2 that these experiments were done in the low-data regime.",
"The randomly initialized context model needs around 600K training examples before it significantly outperforms the baseline, while the pretrained contextual models reduce this to about 300K.",
"It is important to note that none of the contextual models we tried outperformed the baseline below this point.",
"This indicates that data quantity is not the only factor that matters, but it is a prerequisite for the current class of document context architectures.",
"We further validate our hypothesis about the importance of sufficient contextualized data by experimenting with document data augmentation, this time drawing data from different domains.",
"We augment the IWSLT dataset with news commentary v15, an additional 345K document context sentence pairs, and repeat the IWSLT experiments.",
"During training, we sample from the datasets such that each batch contains roughly 50% of the original IWSLT data.",
"To ensure a fair comparison, we first fine-tune the baseline transformer model on the new data, which improves its performance by 1.61 BLEU.",
"We use this stronger baseline as the foundation for the other models and show the results in Table 6.",
"Although Multi-context edges ahead of Multi-source, the significance lies in the relative impact additional document data has on the two classes of models.",
"The average improvement of the sentence-level models is 1.58 BLEU, versus the 1.98 experienced by the document models.",
"Huo et al. (2020) observed a similar phenomenon when using synthetic document augmentation.",
"This further emphasizes the importance of using sufficient contextualized data when comparing the impact of various document-level architectures, even when the contextualized data is drawn from a new domain.",
"WMT'14 offers an opportunity to combine the insights gained from the aforementioned experiments.",
"This dataset provides large quantities of sentence pair data and a small amount of document pair data.",
"Not surprisingly, both BERT-fused and Multi-context struggle in this environment.",
"(Footnote: While we were able to reproduce the baseline-relative uplift of BERT-fused on the other datasets, we were unable to do so on the WMT'14 dataset. We do not know what document data they used, and this probably accounts for the differences observed.)",
"On the other hand, Multi-source benefits from the abundance of sentence pair data.",
"Table 6: Model performance before and after document data augmentation (IWSLT'14 En-De).
| Type | Model | IWSLT | IWSLT+ |
|---|---|---|---|
| Sent | Transformer | 28.68 | 30.29 |
| Sent | Multi-source | 30.17 | 31.71 |
| Doc | BERT-fused | 29.44 | 31.50 |
| Doc | Multi-context | 29.97 | 31.86 |",
"We therefore add a third stage to our training regime.",
"As before, in stage one, we train the transformer model with the sentence pair data.",
"In stage two, we train the Multi-source model, also using the sentence pair data.",
"In stage three, we add an additional P_s(3p,3n) attention block to the Multi-source model and train it with document data.",
"We perform two document augmentation experiments.",
"In the first, we replace news commentary v9 with v15.",
"In the second, we train on a mix of news commentary v15 and Tilde Rapid 2019.",
"The optimal mix was 70% and 30% respectively, which we found by tuning on the validation dataset.",
"(Footnote: While tuning on the validation dataset, we observed that the optimal proportion of Rapid data to include for the new baseline was 0%, that is, not to include any of the off-domain data. However, we needed a fair comparison baseline, so we left it at 30% when making Table 7.)",
"For each of the augmentation experiments, we created new Multi-source baselines by fine-tuning the original baseline on the new data.",
"When training these new baselines, we only updated the parameters in the B_s(c) and P_s(c) attention blocks.",
"In contrast, when training the treatment models, we froze these blocks and only updated the parameters in the P_s(3p,3n) block.",
"In this way, both the new baselines and treatments started from the same pretrained Multi-source model, were exposed to the same data, and had only the parameters under investigation updated.",
"We see in Table 7 that this method can be used to provide the document-level model with a much stronger sentence-level model to start from.",
"As we saw in the previous data augmentation experiments (Section 4.4), document augmentation helps the document-level model more than the sentence-level model.",
"It is interesting to note that out-of-domain document data helps the document-level model yet hurts the sentence-level model.",
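A minimal sketch of the selective freezing used for the new baselines and the treatment models above; the attribute names standing in for the B_s(c), P_s(c), and P_s(3p,3n) blocks are hypothetical.

```python
def set_trainable(module, trainable):
    """Freeze (trainable=False) or unfreeze all parameters of a module."""
    for p in module.parameters():
        p.requires_grad = trainable

# Hypothetical block names; the real attribute names will differ.
# New baselines: update only the B_s(c) and P_s(c) attention blocks.
# set_trainable(model.b_current_attn, True)
# set_trainable(model.p_current_attn, True)
# set_trainable(model.p_context_attn, False)

# Treatment models: freeze those blocks, update only P_s(3p,3n).
# set_trainable(model.b_current_attn, False)
# set_trainable(model.p_current_attn, False)
# set_trainable(model.p_context_attn, True)
```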
"Table 7: Results from using a three-stage training approach (WMT'14 En-De).
| Stage | Model | Data | Test |
|---|---|---|---|
| 1 | Transformer | sent | 28.46 |
| 2 | Multi-source | sent-WMT | 29.64 |
| 3 | Multi-source | sent-WMT+ | 29.74 |
| 3 | Multi-source | sent-WMT++ | 29.62 |
| 3 | Multi-source P_s(3p,3n) | doc-WMT | 29.60 |
| 3 | Multi-source P_s(3p,3n) | doc-WMT+ | 29.78 |
| 3 | Multi-source P_s(3p,3n) | doc-WMT++ | 29.89 |",
"5 Related Work",
"This work is closely related to two lines of research: document-level neural machine translation and representation learning via language modeling.",
"Earlier work in document machine translation exploits the context by taking a concatenated string of adjacent source sentences as the input of neural sequence-to-sequence models (Tiedemann and Scherrer, 2017).",
"Follow-up work adds additional context layers to the neural sequence-to-sequence models in order to better encode the context information (Zhang et al., 2018; Miculicich et al., 2018, inter alia).",
"They vary in terms of whether to incorporate the source-side context (Bawden et al., 2018; Zhang et al., 2018; Miculicich et al., 2018) or target-side context (Tu et al., 2018), and whether to condition on a few adjacent sentences (Jean et al., 2017; Wang et al., 2017; Tu et al., 2018; Voita et al., 2018; Zhang et al., 2018; Miculicich et al., 2018) or the full document (Haffari and Maruf, 2018; Maruf et al., 2019).",
"Our work is similar to this line of research since we have also introduced additional attention components to the transformer.",
"However, our model is different from theirs in that the context encoders were pretrained with a masked language model objective.",
"There has also been work on leveraging monolingual documents to improve document-level machine translation.",
"Junczys-Dowmunt (2019) creates synthetic parallel documents generated by backtranslation (Sennrich et al., 2016; Edunov et al., 2018) and uses the combination of the original and the synthetic parallel documents to train the document translation models.",
"Voita et al. (2019) train a post-editing model from monolingual documents to post-edit sentence-level translations into document-level translations.",
"Yu et al. (2020b,a) use Bayes' rule to combine a monolingual document language model probability with sentence translation probabilities.",
"Pretrained language models have proven effective at improving systems in language understanding, leading to state-of-the-art results on a wide range of tasks (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2018; McCann et al., 2017; Yang et al., 2019; Chronopoulou et al., 2019; Lample and Conneau, 2019; Brown et al., 2020).",
"They have also been used to improve text generation tasks, such as sentence-level machine translation (Song et al., 2019; Edunov et al., 2019; Zhu et al., 2020) and summarization (Zhang et al., 2019, 2020; Dong et al., 2019), and to repurpose unconditional language generation (Ziegler et al., 2019; de Oliveira and Rodrigo, 2019).",
"Our work is closely related to that of Zhu et al. (2020), where pretrained large-scale language models are applied to document-level machine translation tasks.",
"We advance this line of reasoning by designing an architecture that uses composition to incorporate multiple pretrained models at once.",
"It also enables conditioning on different inputs to the same pretrained model, allowing us to circumvent BERT's two-sentence embedding limit.",
"We have introduced an architecture and training regimen that enables incorporating representations from multiple pretrained masked language models into a transformer model.",
"We show that this technique can be used to create a substantially stronger sentence-level model and, with sufficient document data, to further upgrade it to a document-level model that conditions on contextual information.",
"Through ablations and other experiments, we establish document augmentation and multi-stage training as effective strategies for training a document-level model when faced with data scarcity.",
"We also find that source-side context is sufficient for these models, with target context adding little additional value.",
"We would like to thank our teammates, Laurent Sartran, Phil Blunsom, Susie Young, Wang Ling, and Wojciech Stokowiec, for their feedback and shared engineering efforts.",
"We thank Yao Zhao for helping us to better understand the PEGASUS codebase.",
"We thank Dani Yogatama and our three anonymous reviewers for their feedback on the earlier draft of the paper.",
"Their feedback was taken seriously, and we believe this work has benefited from the items they requested." ]
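To make the compositional architecture summarized above concrete, here is a condensed, hypothetical sketch of a decoder layer that attends over the outputs of several frozen pretrained encoders (e.g., BERT on the current sentence and PEGASUS on a context window) through separate cross-attention blocks. Dimensions, normalization placement, and names are assumptions, not the authors' implementation.

```python
import torch.nn as nn

class MultiSourceDecoderLayer(nn.Module):
    """Decoder layer with one cross-attention block per (frozen)
    pretrained encoder, composing their representations."""
    def __init__(self, d_model=512, n_heads=8, n_sources=2):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads,
                                               batch_first=True)
        self.src_attns = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            for _ in range(n_sources))
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.ReLU(),
            nn.Linear(4 * d_model, d_model))
        self.norms = nn.ModuleList(nn.LayerNorm(d_model)
                                   for _ in range(n_sources + 2))

    def forward(self, x, source_memories):
        # source_memories: list of encoder outputs, one per pretrained
        # model, assumed already projected to d_model.
        y, _ = self.self_attn(x, x, x)
        x = self.norms[0](x + y)
        for attn, norm, mem in zip(self.src_attns, self.norms[1:-1],
                                   source_memories):
            y, _ = attn(x, mem, mem)   # attend over one frozen encoder
            x = norm(x + y)
        return self.norms[-1](x + self.ffn(x))
```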
[ "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "method", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "method", "method", "other", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "other", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "other", "other", "other", "other" ]
[ "Pre-trained language models like BERT have proven to be highly performant.",
"However, they are often computationally expensive in many practical scenarios, for such heavy models can hardly be readily deployed with limited resources.",
"To improve their efficiency with an assured model performance, we propose a novel speed-tunable FastBERT with adaptive inference time.",
"The speed at inference can be flexibly adjusted under varying demands, while redundant calculation of samples is avoided.",
"Moreover, this model adopts a unique self-distillation mechanism at fine-tuning, further enabling a greater computational efficacy with minimal loss in performance.",
"Our model achieves promising results on twelve English and Chinese datasets.",
"It is able to speed up by a wide range, from 1 to 12 times relative to BERT, given different speedup thresholds to make a speed-performance tradeoff.",
"The last two years have witnessed significant improvements brought by language pre-training, such as BERT (Devlin et al., 2019), GPT (Radford et al., 2018), and XLNet (Yang et al., 2019).",
"By pre-training on unlabeled corpora and fine-tuning on labeled ones, BERT-like models achieved huge gains on many Natural Language Processing tasks.",
"Despite this gain in accuracy, these models have greater costs in computation and slower speed at inference, which severely impairs their practicality.",
"Actual settings, especially those with limited time and resources in industry, can hardly put such models into operation.",
"For example, in tasks like sentence matching and text classification, one often needs to process billions of requests per second.",
"What's more, the number of requests varies with time.",
"In the case of an online shopping site, the number of requests during the holidays is five to ten times more than that of workdays.",
"(Corresponding author: Qi Ju.)",
"A large number of servers need to be deployed to enable BERT in industrial settings, and many spare servers need to be reserved to cope with the peak period of requests, demanding huge costs.",
"To improve their usability, many attempts at model acceleration have been made, such as quantization (Gong et al., 2014), weights pruning (Han et al., 2015), and knowledge distillation (KD) (Romero et al., 2014).",
"As one of the most popular methods, KD requires additional smaller student models that depend entirely on the bigger teacher model and trade task accuracy for ease of computation (Hinton et al., 2015).",
"Reducing model sizes to achieve acceptable speed-accuracy balances, however, can only solve the problem halfway, for the model size is still fixed, rendering it unable to cope with drastic changes in the request amount.",
"By inspecting many NLP datasets (Wang et al., 2018), we discerned that samples have different levels of difficulty.",
"Heavy models may overcalculate simple inputs, while lighter ones are prone to fail on complex samples.",
"As recent studies (Kovaleva et al., 2019) have shown redundancy in pre-training models, it is useful to design a one-size-fits-all model that caters to samples with varying complexity and gains computational efficacy with the least loss of accuracy.",
"Based on this appeal, we propose FastBERT, a pre-trained model with a sample-wise adaptive mechanism.",
"It can adjust the number of executed layers dynamically to reduce computational steps.",
"This model also has a unique self-distillation process that requires minimal changes to the structure, achieving faster yet as
accurate outcomes within a single framework.",
"Our model not only achieves a considerable speedup (2 to 11 times) over the BERT model, but also attains competitive accuracy in comparison to heavier pre-training models.",
"Experimental results on six Chinese and six English NLP tasks have demonstrated that FastBERT achieves a huge reduction in computation with very little loss in accuracy.",
"The main contributions of this paper can be summarized as follows: this paper proposes a practical speed-tunable BERT model, namely FastBERT, that balances speed and accuracy in response to varying request amounts; the sample-wise adaptive mechanism and the self-distillation mechanism are combined, for the first time, to improve the inference time of an NLP model.",
"Their efficacy is verified on twelve NLP datasets; the code is publicly available at https://github.com/autoliuweijie/FastBERT.",
"BERT (Devlin et al., 2019) can learn universal knowledge from massive unlabeled data and produce more performant outcomes.",
"Many works have followed: RoBERTa (Liu et al., 2019), which uses a larger corpus and longer training steps.",
"T5 (Raffel et al., 2019), which scales up the model size even more.",
"UER (Zhao et al., 2019) pre-trains BERT on different Chinese corpora.",
"K-BERT (Liu et al., 2020) injects a knowledge graph into the BERT model.",
"These models achieve increased accuracy with heavier settings and even more data.",
"However, models of such unwieldy sizes are often hampered under stringent conditions.",
"To be more specific, BERT-base contains 110 million parameters, stacking twelve Transformer blocks (Vaswani et al., 2017), while BERT-large expands its size to 24 layers.",
"ALBERT (Lan et al., 2019) shares the parameters across layers to reduce the model size.",
"Obviously, the inference speed of these models is much slower than that of classic architectures (e.g., CNN (Kim, 2014), RNN (Wang, 2018), etc.).",
"We think a large proportion of the computation is caused by redundant calculation.",
"Knowledge distillation: Many attempts have been made to distill heavy models (teachers) into their lighter counterparts (students).",
"PKD-BERT (Sun et al., 2019a) adopts an incremental extraction process that learns generalizations from intermediate layers of the teacher model.",
"TinyBERT (Jiao et al., 2019) performs a two-stage learning involving both general-domain pre-training and task-specific fine-tuning.",
"Figure 1: Classic knowledge distillation approach: distill a small model (student) using a separate big model (teacher).",
"DistilBERT (Sanh et al., 2019) further leveraged the inductive bias within large models by introducing a triple loss.",
"As shown in Figure 1, student models often require a separate structure, whose effect, however, depends mainly on the gains of the teacher.",
"They are as indiscriminate to individual cases as their teachers, and only get faster at the cost of degraded performance.",
"Adaptive inference: Conventional approaches to adaptive computation are performed token-wise or patch-wise, and either add recurrent steps to individual tokens (Graves, 2016) or dynamically adjust the number of executed layers inside discrete regions of images (Teerapittayanon et al., 2016; Figurnov et al., 2017).",
"To the best of our knowledge, there has been no work applying adaptive mechanisms to NLP pre-trained language models for efficiency improvements so far.",
"Distinct from the above efforts, our approach
fuses adaptation and distillation into a novel speed-up approach, shown in Figure 2, achieving competitive results in both accuracy and efficiency.",
"As shown in Figure 2, FastBERT consists of a backbone and branches.",
"The backbone is built upon a 12-layer Transformer encoder with an additional teacher classifier, while the branches include student classifiers that are appended to each Transformer output to enable early outputs.",
"The backbone consists of three parts: the embedding layer, the encoder containing stacks of Transformer blocks (Vaswani et al., 2017), and the teacher classifier.",
"The structure of the embedding layer and the encoder conforms with that of BERT (Devlin et al., 2019).",
"Given the sentence length n, an input sentence s = [w_0, w_1, ..., w_n] will be transformed by the embedding layer into a sequence of vector representations e as in (1): e = Embedding(s), (1) where e is the summation of the word, position, and segment embeddings.",
"Next, the Transformer blocks in the encoder perform a layer-by-layer feature extraction as in (2): h_i = Transformer_i(h_{i-1}), (2) where h_i (i = -1, 0, 1, ..., L-1) is the output feature at the i-th layer, and h_{-1} = e.",
"L is the number of Transformer layers.",
"Following the final encoding output is a teacher classifier that extracts in-domain features for downstream inference.",
"It has a fully connected layer narrowing the dimension from 768 to 128, a self-attention layer joined with a fully connected layer without changes in vector size, and a fully connected layer with a softmax function projecting vectors to an N-class indicator p_t as in (3), where N is the task-specific number of classes.",
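A sketch of the classifier head just described (and reused, per the text below, for the student classifiers): a 768-to-128 projection, self-attention, a same-width fully connected layer, and a softmax over N classes. Pooling on the first token is an assumption not stated in the text.

```python
import torch.nn as nn

class FastBERTClassifier(nn.Module):
    """Classifier head used for both the teacher and the students:
    FC (768 -> 128), self-attention (128 -> 128), FC (128 -> 128),
    then FC + softmax to N classes."""
    def __init__(self, hidden=768, narrow=128, n_classes=2, n_heads=2):
        super().__init__()
        self.down = nn.Linear(hidden, narrow)
        self.attn = nn.MultiheadAttention(narrow, n_heads, batch_first=True)
        self.fc = nn.Linear(narrow, narrow)
        self.out = nn.Linear(narrow, n_classes)

    def forward(self, h):                # h: (batch, seq_len, hidden)
        z = self.down(h)
        z, _ = self.attn(z, z, z)
        z = self.fc(z)
        logits = self.out(z[:, 0])       # first-token pooling, as in BERT
        return logits.softmax(dim=-1)    # N-class indicator p
```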
"To provide FastBERT with more adaptability, multiple branches, i.e., the student classifiers, with the same architecture as the teacher, are added to the output of each Transformer block to enable early outputs, especially for simple cases.",
"The student classifiers can be described as in (4): p_s^i = StudentClassifier_i(h_i). (4)",
"The student classifier is designed carefully to balance model accuracy and inference speed, for simple networks may impair the performance, while a heavy attention module severely slows down the inference speed.",
"Our classifier has proven to be lighter with ensured competitive accuracy; detailed verifications are showcased in Section 4.1.",
"FastBERT requires respective training steps for the backbone and the student classifiers.",
"The parameters in one module are always frozen while the other module is being trained.",
"The model is trained in preparation for downstream inference with three steps: major backbone pre-training, entire backbone fine-tuning, and self-distillation for the student classifiers.",
"The pre-training of the backbone resembles that of BERT in the same way that our backbone resembles BERT.",
"Any pre-training method used for BERT-like models (e.g., BERT-WWM (Cui et al., 2019), RoBERTa (Liu et al., 2019), and ERNIE (Sun et al., 2019b)) can be directly applied.",
"Note that the teacher classifier, as it is only used for downstream inference, stays unaffected at this time.",
"Also conveniently, FastBERT does not even need to perform pre-training by itself, for it can freely load high-quality pre-trained models.",
"For each downstream task, we plug the task-specific data into the model, fine-tuning both the major backbone and the teacher classifier.",
"The structure of the teacher classifier is as previously described.",
"At this stage, all student classifiers are not yet enabled.",
"With the backbone well trained for knowledge extraction, its output, as a high-quality soft label containing both the original embedding and the generalized knowledge, is distilled for training the student classifiers.",
"As the students are mutually independent, each of their predictions p_s is compared with the teacher soft label p_t, with the differences measured by the KL divergence in (5): D_KL(p_s, p_t) = Σ_{i=1}^{N} p_s(i) log( p_s(i) / p_t(i) ). (5)",
"As there are L-1 student classifiers in FastBERT, the sum of their KL divergences is used as the total loss for self-distillation, which is formulated in (6): L_self-distill = Σ_i D_KL(p_s^i, p_t), (6) where p_s^i refers to the probability distribution of the output from student classifier i.",
"Since this process only requires the teacher's output, we are free to use an unlimited amount of unlabeled data, instead of being restricted to the labeled data.",
"This provides us with sufficient resources for self-distillation, which means we can always improve the student performance as long as the teacher allows.",
"Moreover, our method differs from previous distillation methods, for the teacher and student outputs lie within the same model.",
"This learning process does not require additional pre-training structures, making the distillation entirely a process of learning by itself.",
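A minimal sketch of the self-distillation objective in (5)-(6): the summed KL divergence between each student's prediction and the teacher's soft label, with the teacher treated as a fixed target.

```python
import torch

def self_distillation_loss(student_probs, teacher_probs, eps=1e-8):
    """Sum of KL(p_s || p_t) over the L-1 student classifiers (Eqs. 5-6).
    student_probs: list of (batch, N) distributions, one per student.
    teacher_probs: (batch, N) distribution, detached as a fixed target."""
    p_t = teacher_probs.detach()
    loss = 0.0
    for p_s in student_probs:
        kl = (p_s * ((p_s + eps).log() - (p_t + eps).log())).sum(-1)
        loss = loss + kl.mean()
    return loss
```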
"With the above steps, FastBERT is well prepared to perform inference in an adaptive manner, which means we can adjust the number of executed encoding layers within the model according to the sample complexity.",
"At each Transformer layer, we measure, for each sample, whether the current inference is credible enough to be terminated.",
"Given an input sequence, the uncertainty of a student classifier's output p_s is computed with the normalized entropy in (7): Uncertainty = ( Σ_{i=1}^{N} p_s(i) log p_s(i) ) / log(1/N), (7) where p_s is the distribution of output probability, and N is the number of labeled classes.",
"With the definition of the uncertainty, we make an important hypothesis.",
"Hypothesis 1 (LUHA): the Lower the Uncertainty, the Higher the Accuracy.",
"Definition 1 (Speed): the threshold to distinguish high and low uncertainty.",
"LUHA is verified in Section 4.4.",
"Both Uncertainty and Speed range between 0 and 1.",
"The adaptive inference mechanism can be described as follows: at each layer of FastBERT, the corresponding student classifier predicts the label of each sample with a measured Uncertainty.",
"Samples with Uncertainty below the Speed will be sifted to early outputs, while samples with Uncertainty above the Speed will move on to the next layer.",
"Intuitively, with a higher Speed, fewer samples will be sent to higher layers, the overall inference will be faster, and vice versa.",
"Therefore, Speed can be used as a halt value for weighing inference accuracy against efficiency.",
"In this section, we verify the effectiveness of FastBERT on twelve NLP datasets (six in English and six in Chinese) with detailed explanations.",
"Floating-point operations (FLOPs) are a measure of the computational complexity of models, indicating the number of floating-point operations that the model performs for a single process.",
"The FLOPs have nothing to do with the model's operating environment (CPU, GPU, or TPU) and only reveal the computational complexity.",
"Generally speaking, the bigger the model's FLOPs, the longer the inference time will be.",
"With the same accuracy, models with low FLOPs are more efficient and more suitable for industrial uses.",
"We list the measured FLOPs of both structures in Table 1, from which we can infer that the calculation load (FLOPs) of the Classifier is much lighter than that of the Transformer.",
"This is the basis of the speed-up of FastBERT: although it adds additional classifiers, it achieves acceleration by reducing more computation in the Transformers.",
"In this section, we compare FastBERT against two baselines:",
"BERT: the 12-layer BERT-base model, pre-trained on the Wiki corpus and released by Google (Devlin et al., 2019), available at https://github.com/google-research/bert.",
"DistilBERT: the most famous distillation method for BERT, with 6 layers, released by Huggingface (Sanh et al., 2019), available at https://github.com/huggingface/transformers/tree/master/examples/distillation.",
"In addition, we use the same method to distill DistilBERT with 3 and 1 layer(s), respectively.",
"To verify the effectiveness of FastBERT, especially in industrial scenarios, six Chinese and six English datasets pressing closer to actual applications are used.",
"The six Chinese datasets include five sentence classification tasks (ChnSentiCorp, Book review (Qiu et al., 2018), Shopping review, Weibo, and THUCNews) and a sentence-matching task (LCQMC (Liu et al., 2018)).",
"All the Chinese datasets are available in the FastBERT repository.",
"The six English datasets (Ag.News, Amz.F, DBpedia, Yahoo, Yelp.F, and Yelp.P) are sentence classification tasks and were released in (Zhang et al., 2015).",
"To perform a fair comparison, BERT, DistilBERT, and FastBERT all adopt the same configuration as follows.",
"In this paper, L = 12.",
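The normalized entropy of (7) and the Speed-thresholded early exit described above can be sketched as follows; `encoder_layers` and `classifiers` are hypothetical handles to the backbone blocks and the attached classifiers, and a batch size of 1 is assumed for clarity.

```python
import torch

def uncertainty(p):
    """Normalized entropy of a distribution p over N classes (Eq. 7),
    in [0, 1]; equals -sum(p log p) / log N."""
    n = p.size(-1)
    entropy = -(p * (p + 1e-8).log()).sum(-1)
    return entropy / torch.log(torch.tensor(float(n)))

@torch.no_grad()
def adaptive_inference(h, encoder_layers, classifiers, speed=0.5):
    """Run Transformer layers until a student classifier's uncertainty
    drops below the Speed threshold, then exit early."""
    for layer, clf in zip(encoder_layers, classifiers):
        h = layer(h)
        p = clf(h)
        if uncertainty(p).item() < speed:   # credible enough: early output
            return p
    return p                                # otherwise use the last layer
```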
"The number of self-attention heads, the hidden dimension of the embedding vectors, and the max length of the input sentence are set to 12, 768, and 128, respectively.",
"Both FastBERT and BERT use the pre-trained parameters provided by Google, while DistilBERT is pre-trained as in (Sanh et al., 2019).",
"We fine-tune these models using the AdamW algorithm (Loshchilov and Hutter), a 2e-5 learning rate, and a 0.1 warmup proportion.",
"Then, we select the model with the best accuracy within 3 epochs.",
"For the self-distillation of FastBERT, we increase the learning rate to 2e-4 and distill for 5 epochs.",
"We evaluate the text inference capabilities of these models on the twelve datasets and report their accuracy (Acc.) and sample-averaged FLOPs under different Speed values.",
"The results of the comparisons are shown in Table 2, where the Speedup is obtained by using BERT as the benchmark.",
"It can be observed that, with the setting of Speed = 0.1, FastBERT can speed up 2 to 5 times without losing accuracy on most datasets.",
"If a little loss of accuracy is tolerated, FastBERT can be 7 to 11 times faster than BERT.",
"Compared to DistilBERT, FastBERT trades less accuracy for higher efficiency.",
"Figure 3 illustrates FastBERT's tradeoff between accuracy and efficiency.",
"The speedup ratio of FastBERT is free to be adjusted between 1 and 12 times, while the loss of accuracy remains small, which is a very attractive feature in industry.",
"As described in Section 3.3, the adaptive inference of FastBERT is based on the LUHA hypothesis, i.e., the Lower the Uncertainty, the Higher the Accuracy.",
"Here, we prove this hypothesis using the Book review dataset.",
"We intercept the classification results of Student-Classifier0, Student-Classifier5, and the Teacher-Classifier in FastBERT, then count their accuracy in each uncertainty interval statistically.",
"As shown in Figure 4, the statistical indexes confirm that a classifier follows the LUHA hypothesis, no matter whether it sits at the bottom, in the middle, or on top of the model.",
"From Figure 4, it is easy to mistakenly conclude that the Students have better performance than the Teacher, owing to the fact that the accuracy of the Students in each uncertainty range is higher than that of the Teacher.",
"This conclusion is refuted when Figure 4 is analyzed together with Figure 6(a).",
"For the Teacher, more samples are located in areas with lower uncertainty, while the Students' samples are nearly uniformly distributed.",
"Therefore, the overall accuracy of the Teacher is still higher than that of the Students.",
"In this section, we conduct a set of in-depth analyses of FastBERT from three aspects: the distribution of exit layers, the distribution of sample uncertainty, and the convergence during self-distillation.",
"In FastBERT, each sample walks through a different number of Transformer layers due to its varied complexity.",
"Under a given condition, fewer executed layers require fewer computing resources.",
"As illustrated in Figure 5, we investigate the distribution of exit layers under different constraints of Speed (0.3, 0.5, and 0.8) on the Book review dataset.",
"Taking Speed = 0.8 as an example, at the first layer (Transformer0), 61% of the samples are able to complete their inference.",
"This significantly eliminates unnecessary calculations in the following eleven layers.",
"The distribution of sample uncertainty predicted by different student classifiers varies, as illustrated in Figure 6.",
"Observing these distributions helps us to
further understand FastBERT.",
"From Figure 6(a), it can be concluded that the higher the layer, the lower the uncertainty under a given Speed, indicating that high-layer classifiers are more decisive than lower ones.",
"It is worth noting that at higher layers there are still samples with uncertainty below the Uncertainty threshold (i.e., the Speed), for these high-layer classifiers may reverse the previous judgments made by the low-layer classifiers.",
"Self-distillation is a crucial step in enabling FastBERT.",
"This process grants the student classifiers the ability to infer, thereby offloading work from the teacher classifier.",
"Taking the Book review dataset as an example, we fine-tune FastBERT for three epochs and then self-distill it for five more epochs.",
"Figure 7 illustrates its convergence in accuracy and FLOPs during fine-tuning (0-3 epochs) and self-distillation (3-8 epochs).",
"It can be observed that the accuracy increases with fine-tuning, while the FLOPs decrease during the self-distillation stage.",
"Adaptation and self-distillation are two crucial mechanisms in FastBERT.",
"We have performed ablation studies to investigate the effects brought by these two mechanisms, using the Book review dataset and the Yelp.P dataset.",
"The results are presented in Table 3, in which 'without self-distillation' implies that all classifiers, including both the teacher and the students, are trained in the fine-tuning, while 'without adaptive inference' means that the number of executed layers for each sample is fixed to two or six.",
"From Table 3, we have observed that: (1) at almost the same level of speedup, FastBERT without self-distillation or adaptation performs more poorly; (2) when the model is accelerated more than five times, downstream accuracy degrades dramatically without adaptation.",
"It is safe to conclude that both the adaptation and the self-distillation play a key role in FastBERT, which achieves both significant speedups and favorably low losses of accuracy.",
"In this paper, we propose a fast version of BERT, namely FastBERT.",
"Specifically, FastBERT adopts a self-distillation mechanism during the training phase and an adaptive mechanism in the inference phase, achieving the goal of gaining more efficiency with less accuracy loss.",
"Self-distillation and adaptive inference are introduced to NLP models for the first time in this paper.",
"In addition, FastBERT has a very practical feature in industrial scenarios, i.e., its inference speed is tunable.",
"Our experiments demonstrate promising results on twelve NLP datasets.",
"Empirical results have shown that FastBERT can be 2 to 3 times faster than BERT without performance degradation.",
"If we relax the tolerated loss in accuracy, the model is free to tune its speedup between 1 and 12 times.",
"Besides, FastBERT remains compatible with the parameter settings of other BERT-like models (e.g., BERT-WWM, ERNIE, and RoBERTa), which means these publicly available models can be readily loaded for FastBERT initialization.",
"These promising results point to future work on (1) linearizing the Speed-Speedup curve; (2) extending this approach to other pre-training architectures such as XLNet (Yang et al., 2019) and ELMo (Peters et al., 2018); (3) applying FastBERT to a wider range of NLP tasks, such as named entity recognition and machine translation.",
"This work is funded by the Elite Training Program.",
"Work done while this author was an intern at Tencent.",
"Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of BERT. In Proceedings of EMNLP-IJCNLP, pages 4356-4365.",
"Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.",
"Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2020. K-BERT: Enabling language representation with knowledge graph. In Proceedings of AAAI.",
"Xin Liu, Qingcai Chen, Chong Deng, Huajun Zeng, Jing Chen, Dongfang Li, and Buzhou Tang. 2018. LCQMC: A large-scale Chinese question matching corpus. In Proceedings of the ICCL, pages 1952-1962.",
"Ilya Loshchilov and Frank Hutter. Fixing weight decay regularization in Adam. arXiv preprint arXiv:1711.05101.",
"Yuanyuan Qiu, Hongzheng Li, Shen Li, Yingdi Jiang, Renfen Hu, and Lijiao Yang. 2018. Revisiting correlations between intrinsic and extrinsic evaluations of word embeddings. In Proceedings of CCL, pages 209-221. Springer.",
"Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Technical report, OpenAI.",
"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.",
"Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2014. FitNets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550.",
"Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In NeurIPS EMC2 Workshop." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain" ]
[ "Neural networks tend to gradually forget previously learned knowledge when learning multiple tasks sequentially from dynamic data distributions.",
"This problem is called catastrophic forgetting, which is a fundamental challenge in the continual learning of neural networks.",
"In this work, we observe that catastrophic forgetting not only occurs in continual learning but also affects traditional static training.",
"Neural networks, especially neural machine translation models, suffer from catastrophic forgetting even if they learn from a static training set.",
"To be specific, the final model pays imbalanced attention to training samples, where recently exposed samples attract more attention than earlier samples.",
"The underlying cause is that training samples do not get balanced training in each model update, so we name this problem imbalanced training.",
"To alleviate this problem, we propose Complementary Online Knowledge Distillation (COKD), which uses dynamically updated teacher models trained on specific data orders to iteratively provide complementary knowledge to the student model.",
"Experimental results on multiple machine translation tasks show that our method successfully alleviates the problem of imbalanced training and achieves substantial improvements over strong baseline systems.",
"(Corresponding author: Yang Feng. Code is available at https://github.com/ictnlp/COKD.)",
"1 Introduction",
"Neural Machine Translation (NMT) has achieved impressive translation performance on many benchmark datasets in the past few years (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017).",
"In the domain adaptation task, where we have large-scale out-domain data to improve the in-domain translation performance, continual learning, which is also referred to as fine-tuning, is often employed to transfer the out-domain knowledge to the in-domain (Luong and Manning, 2015).",
"After fine-tuning, the model performs well on in-domain translation, but there is significant performance degradation on out-domain translation because it forgets the previously learned knowledge.",
"This phenomenon is called catastrophic forgetting (McCloskey and Cohen, 1989; French, 1999) and has attracted a lot of attention (Goodfellow et al., 2013; Kirkpatrick et al., 2017; Li and Hoiem, 2017; Lee et al., 2017).",
"In this work, we observe that catastrophic forgetting not only occurs in continual learning but also affects traditional static training.",
"To be specific, the final model pays imbalanced attention to training samples.",
"At the end of training, the recently exposed samples attract more attention and tend to have lower losses, while earlier samples are partially forgotten by the model and have higher losses.",
"In short, training samples receive imbalanced attention from the model, which mainly depends on the time when the model last saw each training sample (i.e., the data order of the last training epoch).",
"The underlying cause of this phenomenon is mini-batch gradient descent (LeCun et al., 2012): we do not simultaneously use all training samples to train the model but divide them into mini-batches.",
"Therefore, training samples do not get balanced training in each update step, so we name this problem imbalanced training.",
"This problem is less severe in some tasks (e.g., image classification and text classification), but it has a significant impact on NMT, as machine translation is a challenging task containing numerous translation rules, which
are easily forgotten during the training process.",
"Besides, we find that the imbalanced training problem is especially severe and non-negligible in low-resource machine translation.",
"To demonstrate that the imbalanced training problem does affect model accuracy, we first review the widely used checkpoint averaging technique, which has proved to be effective in improving model accuracy but whose internal mechanisms are not fully understood.",
"We analyze it from the perspective of catastrophic forgetting and find that its success can be attributed to the alleviation of imbalanced training.",
"We also notice that checkpoint averaging has some limitations, leaving room for further improvements.",
"Inspired by the existing solution of checkpoint averaging, which leverages the complementarity of checkpoints to improve model accuracy, we propose Complementary Online Knowledge Distillation (COKD) to address the problem of imbalanced training.",
"As the model tends to forget knowledge learned from early samples, the main idea of COKD is to construct complementary teachers to re-provide this forgotten knowledge to the student.",
"Specifically, we divide the training set into mutually exclusive subsets and reorganize them in specific orders to train the student and the teachers.",
"We perform COKD in an online manner where the teachers are updated on the fly to fit the needs of the student.",
"When training the student on a subset, the teachers can always provide the student with complementary knowledge on the other subsets, thereby preventing the student from catastrophic forgetting.",
"Experimental results on multiple machine translation tasks show that our method successfully alleviates the problem of imbalanced training and achieves substantial improvements over strong baseline systems.",
"Especially on the low-resource translation tasks that are severely affected by imbalanced training, our method is particularly effective and improves baseline models by about 2 BLEU points on average.",
"In summary, our contribution is threefold: we observe the problem of imbalanced training, i.e., that training samples receive imbalanced attention from the model.",
"We find that NMT, especially on low-resource translation tasks, is seriously affected by imbalanced training.",
"We rethink the widely used checkpoint averaging technique and explain its success from the perspective of imbalanced training, which also demonstrates that the imbalanced training problem does affect model accuracy.",
"We propose Complementary Online Knowledge Distillation for NMT, which can successfully alleviate the imbalanced training problem and improve translation quality.",
"Knowledge distillation (Hinton et al., 2015) is a class of methods that transfers knowledge from a pre-trained teacher network to a student network.",
"Assume that we are training a classifier p(y|x; θ) with |V| classes, and we can access the pre-trained teacher q(y|x).",
"Instead of minimizing the cross-entropy loss between the ground-truth label and the model output probability, knowledge distillation uses the teacher model prediction q(y|x) as a soft target and minimizes the loss: L_KD(θ) = - Σ_{k=1}^{|V|} q(y = k|x) log p(y = k|x; θ). (1)",
"In NMT, where X = {x_1, ..., x_N} denotes the source sentence and Y = {y_1, ..., y_T} the target sentence, Kim and Rush (2016) proposed to train the student model to mimic the teacher's prediction at each decoding step, which is called Word-level Knowledge Distillation
(Word-KD), and its loss is calculated as follows: L_Word-KD(θ) = - Σ_{t=1}^{T} Σ_{k=1}^{|V|} q(y_t = k | y_<t, X) log p(y_t = k | y_<t, X; θ). (3)",
"Conventional offline knowledge distillation only allows the student to learn from static pre-trained teacher models.",
"On the contrary, online knowledge distillation trains teachers from scratch and dynamically updates them, so the student learns from different teachers during the training process.",
"Zhang et al. (2018) first overcame the offline limitation by training peer models simultaneously and conducting online distillation between the peer models in one-phase training.",
"Since mutual learning requires training multiple networks, Lan et al. (2018); Song and Chai (2018) proposed to use a single multi-branch network for online knowledge distillation, which treats each branch as a student and the ensemble of branches as a teacher.",
"The multi-branch architecture subsequently became the mainstream for online knowledge distillation (Guo et al., 2020; Chen et al., 2020; Wu and Gong, 2021).",
"Besides, Furlanello et al. (2018) performed iterative self-distillation where the student network is identical to the teacher in terms of the network graph.",
"In each new iteration, under the supervision of the earlier iteration, a new identical model is trained from scratch.",
"In NMT, Wei et al. (2019) selected, on the fly, the best checkpoint from the training path as the teacher to guide the training process.",
"Catastrophic forgetting is a problem faced by many machine learning models during continual learning, as models tend to forget previously learned knowledge when being trained on new tasks (McCloskey and Cohen, 1989).",
"A typical class of methods to mitigate catastrophic forgetting is based on regularization, which constrains the update of model parameters.",
"Goodfellow et al. (2013) empirically found that dropout regularization can effectively alleviate the catastrophic forgetting phenomenon.",
"Kirkpatrick et al. (2017) proposed elastic weight consolidation, which implements a modified regularization term that imposes constraints on the update of parameters important to the previous task.",
"Lee et al. (2017) proposed drop-transfer, a variant of dropout that drops the weight vector of turned-off nodes to the weight learned on the previous task instead of to a zero vector.",
"Learning without Forgetting (LWF) (Li and Hoiem, 2017) is the approach most relevant to our work.",
"They only use new-task data to train the network but preserve the original capabilities by distilling knowledge from the pre-trained model.",
"There have also been a number of efforts to address the catastrophic forgetting problem in the domain adaptation of NMT.",
"Kirkpatrick et al. (2017); Thompson et al. (2019) added regularization terms to constrain the update of parameters.",
"Dakwale and Monz (2017) proposed to minimize the KL divergence between the predictions of the general-domain model and the fine-tuned model.",
"Zeng et al. (2018); Gu et al. (2019) introduced a discriminator to preserve the domain-shared features.",
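A minimal sketch of the Word-KD objective in Equation (3); tensor shapes and the padding mask are assumptions.

```python
import torch

def word_kd_loss(student_logits: torch.Tensor,
                 teacher_logits: torch.Tensor,
                 pad_mask: torch.Tensor = None) -> torch.Tensor:
    """Word-KD (Eq. 3): at every decoding step, the student matches the
    teacher's distribution over the vocabulary.
    *_logits: (batch, T, |V|); pad_mask: (batch, T), True at padding."""
    q = teacher_logits.detach().softmax(-1)   # teacher soft targets
    log_p = student_logits.log_softmax(-1)
    loss = -(q * log_p).sum(-1)               # (batch, T)
    if pad_mask is not None:
        loss = loss.masked_fill(pad_mask, 0.0)
    return loss.sum()
```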
"Liang et al. (2021); Gu et al. (2021); Xie et al. (2021) fixed important parameters during the fine-tuning to preserve the general-domain performance.",
"Gu and Feng (2020) investigated the cause of catastrophic forgetting from the perspectives of modules and parameters.",
"Before drawing any conclusions, we first conduct experiments on three different tasks, namely image classification, text classification, and machine translation, to show that the problem of imbalanced training does exist.",
"For image classification, we conduct experiments on CIFAR-10 and CIFAR-100 (Krizhevsky, 2009), both of which contain 50,000/10,000 training/testing images with 32x32 pixels, drawn from 10/100 classes.",
"For text classification, we conduct experiments on AG-News, which contains 120,000/7,600 training/testing sentences drawn from 4 classes.",
"For machine translation, we conduct experiments on three translation tasks: WMT14 English-German (En-De), IWSLT15 English-Vietnamese (En-Vi), and WMT17 English-Turkish (En-Tr).",
"We use the ResNet-32 network (He et al., 2016) for image classification, the VDCNN network (Conneau et al., 2017) for text classification, and Transformer-base (Vaswani et al., 2017) for machine translation.",
"All the models are trained using the cross-entropy loss.",
"We refer readers to Appendix A and Section 6.1 for the detailed configurations.",
"We train the model until convergence and then take the last checkpoint to calculate the losses of training samples in the data order of the last training epoch.",
"If there is a problem of imbalanced training, then training samples at the end of the epoch, which were recently exposed to the model, will tend to have lower losses.",
"In contrast, training samples at the beginning will tend to have higher losses.",
"For quantitative analysis, we use the Spearman correlation coefficient between the data order and the loss to measure the degree of imbalanced training.",
"Specifically, we assign each batch in the training dataset a batch-id according to the order in which the batches appear in the last training epoch, where batch i is the i-th trained batch.",
"We disable regularization techniques such as dropout and label smoothing and calculate the loss for each batch.",
"The correlation coefficient between the batch-id and the loss is used to measure the degree of imbalanced training, and a large negative correlation coefficient indicates that this problem is severe.",
"Figure 1 illustrates the relationship between the batch-id and the loss.",
"By comparing the loss curves and correlation coefficients on these six datasets, we obtain the following three main observations.",
"The problem of imbalanced training does exist, but the degree of impact varies: CIFAR-10 has a positive correlation coefficient.",
"Two datasets (i.e., AG-News and WMT14 En-De) have small negative correlation coefficients.",
"Three datasets (i.e., CIFAR-100, IWSLT15 En-Vi, and WMT17 En-Tr) have an apparent decline in losses accompanied by large negative correlation coefficients.",
"Therefore, we can conclude that the problem of imbalanced training does exist, but the degree of impact varies.",
"Imbalanced training is related to task complexity.",
"Intuitively, imbalanced training is more likely to occur on complex tasks, where previously learned knowledge may easily be forgotten while a large amount of new knowledge is being learned.",
"Comparing the two image classification datasets, CIFAR-10 and CIFAR-100 have the same dataset size but a different number of classes.",
"The correlation coefficient on the complex task CIFAR-100 is -0.29, while the correlation coefficient on CIFAR-10 is 0.01.",
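A sketch of the measurement procedure used throughout these comparisons: score each batch of the last epoch, in training order, with the final model, then compute the Spearman correlation between batch-id and loss. The model, data iterator, and loss function are placeholders.

```python
import torch
from scipy.stats import spearmanr

@torch.no_grad()
def batch_order_correlation(model, last_epoch_batches, loss_fn):
    """Spearman correlation between a batch's position in the last
    training epoch and its loss under the final model; a large negative
    value indicates severe imbalanced training."""
    model.eval()   # regularization such as dropout is disabled
    losses = []
    for inputs, targets in last_epoch_batches:   # batches in training order
        losses.append(loss_fn(model(inputs), targets).item())
    batch_ids = list(range(len(losses)))
    rho, _ = spearmanr(batch_ids, losses)
    return rho
```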
"The text classification task, which only contains 4 classes, has a small correlation coefficient of -0.04.",
"Machine translation is generally considered a complex task, with an exponential search space and numerous translation rules.",
"Notably, WMT17 En-Tr has the largest-magnitude correlation coefficient, -0.64.",
"These results are consistent with our intuition that imbalanced training has a greater impact on complex tasks like machine translation.",
"Low-resource translation suffers from imbalanced training.",
"Comparing the three machine translation datasets, the imbalanced training problem has a much larger impact on the low-resource datasets (i.e., IWSLT15 En-Vi and WMT17 En-Tr), while the high-resource dataset WMT14 En-De is less affected.",
"To eliminate the influence of language, we randomly select 100K sentences from the WMT14 En-De dataset for training to simulate the low-resource scenario.",
"We show the loss curve in Appendix B, where the corresponding correlation coefficient is -0.63, which also supports this conclusion.",
"This is counter-intuitive, since when there are many training samples, the early samples would seem to be more easily forgotten.",
"Actually, as Figure 1 shows, the loss curves are generally less steep at the beginning, indicating that early samples are nearly equally forgotten by the model.",
"For high-resource datasets, most samples are nearly equally forgotten and only the losses of a few samples at the end are highly correlated with the batch-id, so the overall correlation is low.",
"In comparison, nearly the whole loss curve of a low-resource dataset is steep, so the model may simultaneously overfit recent samples and underfit early samples due to imbalanced training.",
"Therefore, the problem of imbalanced training is more serious and non-negligible in low-resource machine translation.",
"Loss rises at the end due to the momentum of the optimizer.",
"On CIFAR-100, IWSLT15 En-Vi, and WMT17 En-Tr, though their loss curves are generally downward, they all have a sudden rise at the end.",
"This seemingly abnormal phenomenon is actually consistent with our conclusion.",
"Because of the momentum factor in the Adam optimizer, the impact of a model update is not limited to the current step.",
"The optimizer retains the gradient in the form of momentum, which will affect the gradient updates in the next few steps.",
"Therefore, the impact of momentum is not fully released in the last few training steps, so the loss rises at the end.",
"Checkpoint averaging, which directly takes the average of the parameters of the last few checkpoints as the final model, is a widely used technique in NMT (Junczys-Dowmunt et al., 2016; Vaswani et al., 2017).",
"The averaged checkpoint generally performs better than any single checkpoint.",
"However, to the best of our knowledge, its internal mechanism is not fully understood.",
"In this section, we analyze the success of checkpoint averaging from the perspective of imbalanced training.",
"Though training samples receive imbalanced attention from each checkpoint, this imbalance is different among checkpoints.",
"If we understand imbalanced training as noise on each checkpoint, the noises among different checkpoints can be approximately regarded as i.i.d. random variables.",
"By averaging checkpoints, the variance of the random noise is reduced, thereby alleviating the problem of imbalanced training.",
"Based on the above analysis, we make the following hypothesis and verify it through experiments.",
"Hypothesis: Checkpoint averaging improves the model performance by alleviating the problem of imbalanced training.",
"Experiments: We conduct experiments on the six datasets to study the relationship between checkpoint averaging and imbalanced training.",
"We average the last five epoch checkpoints and compare their performance with the best single checkpoint.",
"Table 1 reports the model performance along with the correlation coefficient on the six datasets.",
"We can see that checkpoint averaging achieves considerable improvements on datasets where the problem of imbalanced training is severe.",
"On datasets with small correlation coefficients, the improvements from checkpoint averaging are very limited.",
"These results confirm our hypothesis and also demonstrate that the imbalanced training problem does affect model accuracy.",
"Limitations: Though checkpoint averaging can alleviate the problem of imbalanced training and improve model performance, it also has some limitations, and its success largely depends on the empirical choice of the checkpoint interval.",
"If the checkpoint interval is small, then the i.i.d. assumption does not hold, so the imbalance cannot be effectively eliminated and may even become stronger (Appendix C).",
"If the checkpoint interval is large, then the checkpoints may not lie in the same parameter space, making the direct averaging of checkpoints problematic.",
"In this section, we propose Complementary Online Knowledge Distillation (COKD) to alleviate the problem of imbalanced training.",
"We apply knowledge distillation with dynamically updated complementary teachers to re-provide the forgotten knowledge to the student model.",
"We first introduce the construction of the complementary teachers.",
"Assume that we have n teacher models T_{1:n} and a student model S, where both the teacher models and the student model are randomly initialized.",
"We expect the teacher models to be dynamically updated so that they are always complementary to the student.",
"While the student learns from new training samples and gradually forgets early samples, the teacher models should re-provide the forgotten knowledge to the student.",
"Recall that the model pays imbalanced attention to different training samples depending on the data order of training.",
"Therefore, a natural way to obtain complementary teachers is to train the teachers in different data orders.",
"Specifically, in each epoch, we divide the training dataset D into n+1 mutually exclusive splits (D_1, D_2, ..., D_{n+1}).",
"The student model sequentially learns from D_1 to D_{n+1}, while the data order is different for the teacher models.",
"We use an ordering function O(i, t) to denote the training data for teacher T_i at time t.",
"After the teacher models T_{1:n} learn from the data splits D_{O(1:n, t)} respectively, the student S learns from both D_t and the teachers.",
"To make the teachers complementary to the student, the ordering function O(., t) should cover all data splits except D_t.",
"To ensure that each teacher can access the whole training data, the ordering function O(i, .) should also cover all data splits.",
"Fortunately, we find that a simple assignment of O satisfies the above requirements: O(i, t) = i + t if i + t <= n + 1, and O(i, t) = i + t - n - 1 if i + t > n + 1, (4) where i ∈ {1, 2, ..., n} and t ∈ {1, 2, ..., n+1}.",
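The ordering function of Equation (4) is a one-liner; the example below checks the complementarity property for n = 2 teachers.

```python
def ordering(i, t, n):
    """Data split assigned to teacher T_i at student step t (Eq. 4).
    Splits are numbered 1..n+1; teacher T_i trains on the split with
    offset i from the student's current split D_t."""
    return i + t if i + t <= n + 1 else i + t - n - 1

# With n = 2 teachers and 3 splits, while the student is on D_t the
# teachers always cover the remaining splits:
# t=1: T_1 -> D_2, T_2 -> D_3
# t=2: T_1 -> D_3, T_2 -> D_1
# t=3: T_1 -> D_1, T_2 -> D_2
```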
1 .", "where i { 1 , 2 , ..., n } and t { 1 , 2 , ..., n + 1 }", "Under this assignment, teacher T i simply uses the data split that has offset i from the student, which ensures that all teachers are complementary with the student and can access the whole training set.", "The knowledge of n complementary teachers can be transfered to the student through word-level knowledge distillation:", "LKD ( )= T (cid:88) t =1 |V| (cid:88) k =1 n (cid:88) i =1 q i ( y t = k | y <t , X ) n log p ( y t = k | y <t , X , ) , (5)", "where p is the prediction of student S and q i is the prediction of teacher T i .", "We use a hyperparameter to interpolate the distillation loss and the cross-entropy loss: L ( ) = LKD ( ) + (1 ) LNLL ( ) .", "In this way, the student model learns both new knowledge from the training set and complementary knowledge from teacher models.", "With an appropriate , we can achieve a balance between the Algorithm 1 COKD Input: training set D , the number of teachers n Output: student model S 1: randomly initialize student S and teachers T 1: n 2: while not converge do 3: randomly divide D into n + 1 subsets ( D 1 , D 2 , ..., D n +1 ) 4: for t = 1 to n + 1 do 5: for i = 1 to n do 6: train T i on DO ( i,t ) 7: train S on D t according to Equation 6 8: for i = 1 to n do T i S 9: return student model S two kinds of knowledge and alleviate the problem of imbalanced training.", "However, this method is based on knowledge distillation where knowledge is transferred unidirectionally from teachers to the student.", "Though the student can benefit from balanced training, these complementary teachers also set an upperbound to the student and prevent it from performing better.", "To overcome this limitation, we follow the underlying idea of two-way knowledge transfer where the knowledge is also transferred from the student to teachers (Zhang et al., 2018; Lan et al., 2018).", "We use a simple reinitialization method to achieve the two-way knowledge transfer.", "At the end of each epoch, we reinitialize teacher models with the parameters of the student model: T i S , i { 1 , 2 , ..., n } .", "Through the reinitialization, the student and teachers are exactly the same at the beginning of each epoch.", "In this way, both the student and teachers are iteratively improved so the student performance is no longer limited by the fixed ability of teachers.", "We summarize the training process of COKD in Algorithm 1.", "To evaluate the performance of COKD, we conduct experiments on multiple machine translation tasks.", "For low-resource translation where the problem of imbalanced training is severe, we run experiments on WMT17 English-Turkish (En-Tr, 207K sentence pairs), IWSLT15 English-Vietnamese (En-Vi, 133K sentence pairs), and TED bilingual 2028 dataset.", "We also evaluate the high-resource performance of COKD on WMT14 English-German (En-De, 4.5M sentence pairs).", "For WMT17 En-Tr and IWSLT15 En-Vi, we use case-sensitive Sacre-BLEU (Post, 2018) to report reproducible BLEU scores.", "For TED bilingual dataset, following Xu et al. 
"To evaluate the performance of COKD, we conduct experiments on multiple machine translation tasks.", "For low-resource translation, where the problem of imbalanced training is severe, we run experiments on WMT17 English-Turkish (En-Tr, 207K sentence pairs), IWSLT15 English-Vietnamese (En-Vi, 133K sentence pairs), and the TED bilingual dataset.", "We also evaluate the high-resource performance of COKD on WMT14 English-German (En-De, 4.5M sentence pairs).", "For WMT17 En-Tr and IWSLT15 En-Vi, we use case-sensitive SacreBLEU (Post, 2018) to report reproducible BLEU scores.", "For the TED bilingual dataset, following Xu et al. (2021), we report the tokenized BLEU.", "For WMT14 En-De translation, we report the tokenized BLEU (Papineni et al., 2002) with compound split.", "For WMT17 En-Tr, we use newstest2016 as the validation set and newstest2017 as the test set.", "We learn a joint BPE model (Sennrich et al., 2016) with 16K operations.", "For IWSLT15 En-Vi, we use the pre-processed data used in Luong and Manning (2015) (https://github.com/stefan-it/nmt-en-vi).", "For the TED bilingual dataset, we use the pre-processed data used in Xu et al. (2021) (https://github.com/Jingjing-NLP/VOLT).", "For WMT14 En-De, the validation set is newstest2013 and the test set is newstest2014.", "We learn a joint BPE model with 32K operations.", "In the main experiments, we set the number of teachers n to 1 and the hyperparameter α to 0.95.", "We implemented our approach based on the base version of the Transformer (Vaswani et al., 2017).", "Following Wei et al. (2019), we increase the dropout rate to 0.2 on WMT17 En-Tr and IWSLT15 En-Vi.", "For the TED bilingual dataset, we further increase the dropout rate of the Transformer baseline to 0.3.", "All models are optimized with Adam (Kingma and Ba, 2014) with the optimizer settings of Vaswani et al. (2017).", "The batch size is 32K for all translation tasks.", "For inference, we average the last 5 checkpoints and use beam search with beam size 5.", "The checkpoint interval is 1000 for low-resource tasks and 5000 for WMT14 En-De.", "We first conduct experiments on the two low-resource datasets WMT17 En-Tr and IWSLT15 En-Vi and the high-resource dataset WMT14 En-De to evaluate the capability of our method.", "We compare our method with knowledge distillation methods and deep mutual learning (Zhang et al., 2018), and also report the results of Online Distillation from Checkpoints (ODC) (Wei et al., 2019) for comparison.", "The results are listed in Table 2.", "Table 2 (BLEU scores on three translation tasks): Transformer (Wei et al., 2019): En-Tr 12.20, En-Vi 28.56; ODC: En-Tr 12.92, En-Vi 29.47; Transformer (ours): En-Tr 13.42, En-Vi 29.08, En-De 27.45; Word-KD: 13.66 / 29.54 / 27.76; Seq-KD: 13.91 / 29.69 / 27.84; Mutual: 13.72 / 29.83 / 27.81; COKD: 16.66 / 31.95 / 28.26.", "Low-Resource: First, we focus on the results on the two low-resource datasets, where the problem of imbalanced training is severe.", "Since we have applied the checkpoint averaging technique to the baseline system, our baseline is very competitive and outperforms the baseline of Wei et al. (2019).",
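Checkpoint averaging, as used in the inference setup above, is a simple parameter-space mean; a sketch (the file naming and state-dict layout are assumptions):

```python
import torch

def average_checkpoints(paths):
    """Average the parameters of several saved checkpoints."""
    avg = None
    for p in paths:
        state = torch.load(p, map_location="cpu")
        if avg is None:
            avg = {k: v.float().clone() for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].float()
    return {k: v / len(paths) for k, v in avg.items()}

# e.g., model.load_state_dict(average_checkpoints([f"ckpt_{i}.pt" for i in range(45, 50)]))
```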
(2019).", "We refer readers to Appendix D for results without checkpoint averaging.", "Knowledge distillation techniques and deep mutual learning bring some improvements to the baseline, but the improvements are relatively weak.", "In comparison, COKD substantially improves the baseline performance by about 3 BLEU scores, demonstrating the effectiveness of COKD on low-resource translation tasks.", "High-Resource On the high-resource dataset WMT14 En-De, COKD still outperforms the baseline and knowledge distillation methods.", "The improvement of COKD is relatively small compared to the low-resource setting, which can be explained from the perspective of imbalanced training.", "As illustrated in Figure 1, high-resource datasets like WMT14 En-De is less affected by the problem of imbalanced training, so the alleviation of this problem may not bring strong improvements on high-resource datasets.", "TED Bilingual Dataset We further conduct experiments on TED bilingual dataset to confirm the effectiveness of COKD on low-resource translation tasks.", "We evaluate COKD on both En-X and X-En directions and report the results in Table", "3. The performance of COKD is still very impressive, which improves the baseline by 1.59 BLEU on average in the En-X direction, and improves the baseline by 2.15 BLEU on average in the En-X direction.", "In this section, we study the effect of complementary teachers and teacher reinitialization in COKD.", "We remove each of them respectively and report 2029 En-X Es PTbr Fr Ru He Ar It Nl Ro Tr De Vi Ave Base 40.86 40.31 41.27 21.86 29.01 18.40 36.37 34.06 28.33 17.70 31.46 29.66 30.77 COKD 42.50 42.46 43.15 22.94 30.22 19.36 37.78 35.87 29.70 19.50 33.48 31.33 32.36 X-En Es PTbr Fr Ru He Ar It Nl Ro Tr De Vi Ave Base 42.94 45.52 41.32 26.21 38.78 33.06 39.55 37.52 36.50 27.19 36.89 27.64 36.09 COKD 44.72 47.84 43.33 27.87 40.81 35.03 41.48 39.66 38.78 29.68 39.73 29.91 38.24 Table 3: BLEU scores on the TED bilingual dataset.", "their performance in Table", "4. By removing complementary teachers, we do not split the dataset and assign random data order to teachers, which leads to obvious performance degradtion.", "We also notice that a large part of improvement comes from the reinitialization, suggesting the importance of two-way knowledge transfer where both the student and teachers are iteratively improved.", "There are two hyperparameters in COKD: the number of teachers n and the loss weight , whose default settings are n = 1 , = 0 .", "95 in the main experiments.", "In this section, we conduct experiments on WMT17 En-Tr to show the effect of the two hyperparameters.", "The number of teachers We change the number of teachers n from 1 to 5 to evaluate the effect of n in COKD and report the BLEU score and training time in Table", "5. We find that using more teachers does not necessarily lead to better performance, suggesting that the main improvement is not due to the ensemble of multiple teachers.", "Large n may slightly outperform the n = 1 setting but comes with a larger training cost.", "Therefore, we recommend the n = 1 setting in practical applications.", "Though the training cost is still larger, it is acceptable on low-resource datasets considering the strong performance improvement.", "Hyperparameter We set the hyperparameter to 0 .", "75 , 0 .", "9 , 0 .", "95 , 0 .", "98 , 1 respectively.", "The corresponding BLEU scores are listed in Table", "6. 
"We can see that the model performance is sensitive to the hyperparameter α.", "Generally, the model prefers a large α, where a slightly smaller α may significantly degrade the model performance.", "We explain this phenomenon as the imbalance of complementary knowledge and new knowledge.", "The distillation loss carries the complementary knowledge and the cross-entropy loss carries the new knowledge, so an appropriate α should balance the two kinds of knowledge.", "Considering that the distillation loss is only a little biased toward the complementary knowledge, α should be much larger than 0.5, otherwise it cannot keep the balance.", "We empirically recommend the α = 0.95 setting, which also shows good performance on other datasets.", "In this section, we evaluate the effectiveness of COKD in alleviating the problem of imbalanced training.", "We take the final model of COKD and measure the correlation between batch-id and loss in the last epoch.", "We conduct experiments on the WMT17 En-Tr dataset, where the problem of imbalanced training is severe.", "As Figure 2 shows, the downward trend of loss is successfully alleviated by COKD, and the correlation coefficient is improved from -0.64 to -0.16.", "In this paper, we observe that catastrophic forgetting causes imbalanced training, which is severe in low-resource machine translation and affects the translation quality.", "We rethink the checkpoint averaging technique and explain its success from the perspective of imbalanced training.", "We further propose Complementary Online Knowledge Distillation (COKD), which successfully alleviates the imbalanced training problem and achieves substantial improvements in translation quality.", "We thank the anonymous reviewers for their insightful comments.", "This work was supported by the National Key R&D Program of China (No. 2017YFE9132900)." ]
[ "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "result", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "result", "abstain", "objective", "abstain", "method", "method", "abstain", "result", "result", "objective", "result", "objective", "objective", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "result", "other", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "abstain", "abstain" ]
[ "To quantify how well natural language understanding models can capture consistency in a general conversation, we introduce the DialoguE COntradiction DEtection task (DE-CODE) and a new conversational dataset containing both human-human and human-bot contradictory dialogues.", "We show that:", "(i) our newly collected dataset is notably more effective at providing supervision for the dialogue contradiction detection task than existing NLI data including those aimed to cover the dialogue domain;", "(ii) Transformer models that explicitly hinge on utterance structures for dialogue contradiction detection are more robust and generalize well on both analysis and out-of-distribution dialogues than standard (un-structured) Transformers.", "We also show that our best contradiction detection model correlates well with human judgments and further provide evidence for its usage in both automatically evaluating and improving the consistency of state-of-the-art generative chatbots.", "Recent progress on neural approaches to natural language processing (Devlin et al., 2019; Brown et al., 2020), and the availability of large amounts of conversational data (Lowe et al., 2015; Smith et al., 2020) have triggered a resurgent interest on building intelligent open-domain chatbots.", "Newly developed end-to-end neural bots (Zhang et al., 2020; Adiwardana et al., 2020; Roller et al., 2020) are claimed to be superior to their predecessors (Worsnick, 2018; Zhou et al., 2020) using various human evaluation techniques (See et al., 2019; Li et al., 2019; Adiwardana et al., 2020) that aim to give a more accurate measure of what makes a good conversation.", "While the success is indisputable, there is still a long way to go before we * Dolphins are mammals, not fish.", "arrive at human-like open-domain chatbots.", "For example, it has been shown that open-domain chatbots frequently generate annoying errors (Adiwar-dana et al., 2020; Roller et al., 2020) and a notorious one among these is the class of contradiction, or consistency errors.", "When interacting with chatbots, people carry over many of the same expectations as when interacting with humans (Nass and Moon, 2000).", "Self-contradictions by these bots (see Fig.1, bottom) are often jarring, immediately disrupt the conversational flow, and help support arguments about whether generative models could ever really understand what they are saying at all (Marcus, 2018).", "From a listener's perspective, such inconsistent bots fail to gain user trust and their long-term communication confidence.", "From a speaker's perspective, it violates the maxim of quality in Grice's cooperative principles (Grice, 1975) Do not say what you believe to be", "false. Hence, efforts on reducing contradicting or inconsistent conversations by open-domain chatbots are imperative.", "Prior works (Welleck et al., 2019) characterized the modeling of persona-related consistency as a natural language inference (NLI) problem (Dagan et al., 2005; Bowman et al., 2015), and constructed a dialog NLI dataset based on Persona-Chat (Zhang et al., 2018), but so far state-of-the-art chatbots (Roller et al., 2020) have not been able to make use of NLI techniques in improving dialogue consistency.", "Overall, the challenge remains that we are still unable to answer the simple yet important question how good are we at modeling consistency (including persona, logic, causality, etc.) in a general conversation? 
.", "The inability to measure this obscures to what degree building new modules or techniques can in turn help prevent contradicting responses during generation.", "Seeking to answer this question, we introduce the DialoguE COntradiction DEtection task (DE-CODE) 1 and collect a new conversational dataset containing human written dialogues where one of the speakers deliberately contradicts what they have previously said at a certain point during the conversation.", "We also collect an out-of-distribution (OOD) set of dialogues in human-bot interactive settings which contain human-labeled self-contradictions made by different chatbots.", "We then compare a set of state-of-the-art systems, including a standard unstructured approach and a proposed structured approach for utilizing NLI models to detect contradictions.", "In the unstructured approach, a Transformer NLI model directly takes in the concatenation of all utterances of the input dialogue for prediction, following the paradigm of NLU modeling.", "In the structured approach, utterances are paired separately before being fed into Transformer NLI models, explicitly taking account of the natural dialogue structure.", "dataset is notably more effective at providing supervision for the contradiction detection task than existing NLI data including those aimed at covering the dialogue domain; (2) the structured utterance-based approach for dialogue consistency modeling is more robust in our analysis and more transferable to OOD human-bot conversation than the unstructured approach.", "This finding challenges the mainstream unstructured approach of simply applying pre-trained Transformer models and expecting them to learn the structure, especially for OOD scenarios which are often the case when incorporating NLU modules into NLG systems, since intermediate in-domain data are scarce.", "Finally, with such improvements on the contradiction detection task, we show that our best resulting detector correlates well with human judgments and can be suitable for use as an automatic metric for checking dialogue consistency.", "We further provide evidence for its usage in improving the consistency of state-of-the-art generative chatbots.", "Several prior works on improving dialogue consistency have explored using direct modeling of the dialogue context in generation algorithms.", "The modeling can be implicit where the dialogue consistency-related information like style (Wang et al., 2017), topics, or personal facts are maintained in distributed embeddings (Li et al., 2016; Zhang et al., 2019a), neural long-term memories (Bang et al., 2015), hierarchical neural architecture (Serban et al., 2016), latent variables (Ser-ban et al., 2017), topical attention (Dziri et al., 2019a), or even self-learned feature vectors (Zhang et al., 2019b).", "Some works have grounded generation models on explicit user input (Qian et al., 2018), or designated personas (Zhang et al., 2018).", "Although, improvements on automatic generation metrics were often shown on guided response generation based on the consistency modeling, the issue of contradiction has never been resolved, nor have generally applicable methods to gauge the consistency improvements been developed.", "Further, simply scaling models has not made the problem go away, as is evident in the largest chatbots trained such as BlenderBot with up to 9.4B parameter Transformers (Roller et al., 2020).", "More similar to our work is utilizing NLI models in dialogue consistency.", "Dziri et al. 
"Particularly, Welleck et al. (2019) constructed the dialogue NLI dataset, and Li et al. (2020) utilized it to try to reduce inconsistency in generative models via unlikelihood training, in a preliminary study that reports perplexity results but did not measure actual generations or contradiction rates.", "We note that the dialogue NLI dataset is only semi-automatically generated, with limited coverage of only Persona-Chat data (Zhang et al., 2018), whereas our DECODE is human-written and spans multiple domains.", "Our task also involves logical and context-related reasoning beyond personal facts.", "We show that transfer of DECODE is subsequently more robust than dialogue NLI on both human-human and human-bot chats.", "We formalize dialogue contradiction detection as a supervised classification task.", "The input of the task is a list of utterances x = {u_0, u_1, u_2, ..., u_n} representing a dialogue or a dialogue snippet.", "The output is y, indicating whether the last utterance u_n contradicts any previously conversed information contained in the dialogue {u_0, u_1, ..., u_{n-1}}, where y can be 0 or 1, corresponding to the non-contradiction and the contradiction label respectively.", "Preferably, the output should also include a set of indices I ⊆ {0, 1, ..., n-1} representing a subset of {u_0, u_1, ..., u_{n-1}} which contains information that is actually contradicted by the last utterance u_n.", "The extra output of indices I requires models to pinpoint the evidence for the contradiction, providing an extra layer of explainability.",
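The task input/output can be represented compactly; a sketch of one example record (the field names are illustrative, not the released data schema):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContradictionExample:
    """One DECODE-style example: utterances u_0..u_n, a binary label,
    and evidence indices into u_0..u_{n-1}."""
    utterances: List[str]   # u_n is the utterance to judge
    label: int              # 1 = contradiction, 0 = non-contradiction
    evidence: List[int] = field(default_factory=list)

ex = ContradictionExample(
    utterances=["I have two dogs.", "Cute! Any other pets?", "I don't have any pets."],
    label=1,
    evidence=[0],
)
```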
"Our goal is first to collect training and evaluation data for this task.", "We thus collect dialogues in which the last utterance contradicts some previous utterances in the dialogue history.", "To obtain such dialogues, we give annotators dialogue snippets from pre-selected dialogue corpora, and then ask them to continue the conversation by writing one or two utterances such that the last utterance by the last speaker contradicts the dialogue history.", "We also ask annotators to mark all the utterances in the dialogue history that are involved in the contradiction as supporting evidence.", "We ask annotators to write contradicting utterances based partly on existing dialogues rather than collecting new dialogues from scratch because the provided dialogues can often convey semantically rich contexts from different domains and inspire annotators to write more diverse examples.", "We do not impose constraints on the annotation, so that annotators have the flexibility to write more diverse contradictory responses that might not belong to pre-defined types (knowledge, emotion, persona, etc.).", "Also note that we ask the annotators to write contradictory dialogues based on pre-selected human-human dialogues rather than collecting dialogues from human-bot interaction for the main dataset, because we want the examples to be general and less bound to specific bots (alongside the main dataset, another portion of the examples is collected via human-bot interaction and used as out-of-distribution evaluation).", "We crowdsource the continuation and annotation data with Amazon Mechanical Turk via ParlAI (Miller et al., 2017).", "To ensure data quality, we apply three techniques:", "(i) an onboarding test every annotator has to pass to contribute examples;", "(ii) each annotator can only create up to 20 examples; and", "(iii) all examples in the validation and test sets are verified by asking 3 additional workers.", "More details about annotation are provided in the Appendix.", "We collected 17,713 human-written contradicting dialogues, of which 4,121 are verified by 3 annotators.", "The pre-selected dialogue source corpora are Wizard of Wikipedia (Dinan et al., 2019), EMPATHETICDIALOGUES (Rashkin et al., 2019), Blended Skill Talk (Smith et al., 2020), and ConvAI2 (Dinan et al., 2020), covering various conversational topics.", "To facilitate the evaluation of consistency modeling on the dialogue contradiction detection classification task, we sample an equal number of non-contradicting dialogues according to the same dialogue length distribution as the contradicting ones from the same dialogue corpora.", "Then, we make the splits such that the train split contains unverified examples, and the dev and test splits only contain verified examples.", "Each split has balanced labels between contradiction and non-contradiction.", "The breakdown of each of the dataset sources is shown in the Appendix.", "Auxiliary (Checklist) Test Sets.", "We further create two auxiliary checklist evaluation sets by transforming the contradiction examples in the original test set in two ways such that the ground-truth label is either invariant or expected to flip.", "The two resultant sets serve as diagnostic tests on the behavior, generalization, and transferability of our models.", "The transformations are described below:", "Add Two Turns (A2T): We insert a pair of randomly sampled utterances into the dialogue such that the inserted utterances are between the two original contradicting utterances.", "This gives a new contradicting dialogue with a longer dialogue history.", "Remove Contradicting Turns (RCT): We remove all the turns (all pairs of utterances) marked as supporting evidence for the contradiction in the dialogue, except the last utterance.", "(The dataset dialogues involve two speakers taking turns speaking; to maintain this structure, for each marked utterance we remove the pair of utterances that represents a turn, which also helps remove information involved in the contradiction such that the new label should be non-contradiction.)", "This results in a new non-contradiction dialogue.",
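A sketch of the two transformations under one plausible reading of the turn structure (the exact insertion point and turn pairing in the real pipeline may differ):

```python
import random
from typing import List, Tuple

def add_two_turns(utts: List[str], evidence: List[int],
                  pool: List[Tuple[str, str]]) -> List[str]:
    """A2T: insert a sampled utterance pair between the contradicting
    utterances; the contradiction label stays unchanged."""
    a, b = random.choice(pool)
    cut = max(evidence) + 1          # a point after the last evidence turn
    return utts[:cut] + [a, b] + utts[cut:]

def remove_contradicting_turns(utts: List[str], evidence: List[int]) -> List[str]:
    """RCT: drop each evidence utterance together with its reply, keeping
    the last utterance; the label flips to non-contradiction."""
    drop = set()
    for i in evidence:
        drop.update({i, i + 1})      # one turn = an utterance and its reply
    drop.discard(len(utts) - 1)      # always keep the final utterance
    return [u for j, u in enumerate(utts) if j not in drop]
```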
(2020).", "During the collection, if the bot generates an utterance that contradicts itself, we ask the worker to mark the utterance.", "In some of the dialogues, workers are explicitly instructed to goad the bots into making contradicting utterances.", "The final human-bot test set we derive contains 764 dialogues, half of which end with a contradicting utterance by the bot.", "All the dialogues in the set, with either contradiction or non-contradiction labels, are verified by 3 additional annotators, beside the human who actually talked to the bot.", "The auxiliary and human-bot test sets aim to test models' robustness and generalizability beyond the collected human-written test set (Ribeiro et al., 2020; Gardner et al., 2020), and give a more comprehensive analysis of the task.", "Table 1 summarizes the final overall dataset.", "Examples are provided for each dataset type in Fig. 1 and Appendix Table 5.", "To model the dialogue consistency task, we first employ some of the techniques used in NLI sequence-to-label modeling, where the input is a pair of textual sequences and the output is a label.", "The benefit of such modeling is that we can directly make use of existing NLI datasets during training.", "However, unlike previous work (Welleck et al., 2019) that directly utilized NLI models giving a 3-way output among entailment, contradiction, and neu-tral, we modify the model with a 2-way output between contradiction and non-contradiction (either entailment or neutral) labels, as our task is centered around the detection of inconsistency.", "More formally, we denote the model as y pred = f ( C , u ) , where y pred is the prediction of the label y , i.e. whether the textual response u contradicts some textual context C = { u 0 , u 1 , ..., u n 1 } , and are the parameters of the model.", "Unstructured Approach.", "In this approach, we simply concatenate all the previous utterances in the dialogue history to form a single textual context.", "Then, we apply f to the context and the last utterance to infer the probability of contradiction: y pred = f ([ u 0 , u 1 , u 2 , ..., u n 1 ] , u n ) (1) When concatenating the utterances, we insert special tokens before each utterance to indicate the speaker of that utterance.", "This is aimed to provide a signal of the dialogue structure to the models.", "Still, this approach assumes that the model can use these features adequately to learn the underlying structure of the dialogue implicitly during training.", "Structured Utterance-based Approach.", "Since the reasoning crucially depends on the last utterance, in this method we first choose all the utterances by the last speaker to form a set S .", "We then pair every utterance in the set with the last utterance and feed them one by one into f UB .", "The final contradiction probability is the maximum over all the outputs: y pred = max (cid:8) f UB ( u i , u n ) : u i S (cid:9) (2) Additionally, the utterance-based approach is able to give a set of utterances as supporting evidence for a contradiction decision by choosing the pairs having contradiction probability higher than a threshold e : I = (cid:8) i : f UB ( u i , u n ) > e (cid:9) (3) This not only gives explanations for its prediction but can also help diagnose the model itself, e.g. 
"One downside of this modeling approach is that it will not be able to capture reasoning between speakers.", "An example would be a pronoun by one speaker referring to something initiated by the other speaker.", "Nevertheless, the utterance-based approach explicitly adds an inductive structure bias to learning and inference which, as we will see, can aid its generalization capability.", "Thresholding.", "For both the unstructured and utterance-based approaches, the detection of contradiction is made by comparing y_pred with a threshold, which by default is 0.5.", "We study four base pre-trained model variants for f: BERT (Devlin et al., 2019), Electra (Clark et al., 2020), RoBERTa (Liu et al., 2019), and BART (Lewis et al., 2020).", "They represent state-of-the-art language representation models and have yielded successes in many NLU tasks.", "The input format of f follows how these models handle sequence pairs (C and u) for classification tasks, with padding, separator, and other special tokens, as well as position embeddings and segment features, inserted at designated locations accordingly.", "We fine-tune f on different combinations of NLI training data including SNLI (Bowman et al., 2015), MNLI (Williams et al., 2018), ANLI-R3 (Nie et al., 2020a), and DNLI (Welleck et al., 2019), as well as our DECODE Main training set.", "(ANLI data is collected in three rounds resulting in three subsets, R1, R2, and R3; we only used the training data in R3 since it contains some dialogue-related examples.)", "We convert the 3-way labels of the examples in existing NLI datasets to 2-way, as described before, and f is optimized using cross-entropy loss.", "When training f_UB in the utterance-based approach using the DECODE training set, the input sequences are sampled utterance pairs from the DECODE dialogues.", "In other scenarios, f or f_UB are trained with data treated as in normal NLI training.", "The models are evaluated on the test sets described in subsection 3.3.", "For the utterance-based approach, which provides supporting evidence utterances (Equation 3), we report F1 on evidence retrieval.", "We also report a stricter score which evaluates whether both the 2-way contradiction detection and the supporting evidence retrieval exactly match the ground truth on the DECODE Main test.",
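For reference, the 3-way-to-2-way label conversion used above when importing existing NLI data amounts to a small mapping; a sketch with illustrative field names:

```python
# Collapse 3-way NLI labels to the binary scheme used in this task.
TO_BINARY = {"contradiction": 1, "entailment": 0, "neutral": 0}

def to_two_way(example: dict) -> dict:
    """Map an NLI example {'premise', 'hypothesis', 'label'} to 2-way supervision."""
    return {
        "context": example["premise"],
        "utterance": example["hypothesis"],
        "label": TO_BINARY[example["label"]],
    }
```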
"Our main results comparing various detectors on DECODE are shown in Table 2.", "We now describe our key observations.", "DECODE is notably more effective than other existing NLI data in providing supervision for contradiction detection in dialogue.", "We found that models trained on DECODE achieve higher accuracy than those trained on DNLI or ANLI-R3 on all evaluation sets, with large improvements, e.g., a 12-point jump over the same model trained on ANLI-R3 and a 16-point jump over training on DNLI, for utterance-based RoBERTa on the DECODE Main test set.", "Moreover, while training on All datasets (SNLI, MNLI, ANLI-R3, DNLI & DECODE) is effective, the removal of DECODE from the training data induces a considerable degradation in performance.", "Training on NLI data which does not cover the dialogue domain, e.g., SNLI+MNLI, is even worse, only achieving 77.4% on DECODE Main (Test) vs. 93.19% for DECODE, and cannot even reach the majority baseline on the Main (Test-Strict).", "Further, training on DECODE is also more helpful than DNLI or ANLI-R3 for supporting evidence retrieval.", "These findings indicate that existing NLI data has limited transferability to the dialogue contradiction detection task despite its coverage of the dialogue domain in addition to other domains, and that our DECODE data provides a valuable resource for modeling dialogue consistency and developing data-driven approaches for contradiction detection.", "Different pre-training models that perform similarly on the in-domain test set can have very different performance on OOD human-bot dialogue.", "The last four rows of the table show the results of utterance-based RoBERTa, BERT, Electra, and BART trained on DECODE.", "[Figure 2: Comparison between utterance-based and unstructured approaches of RoBERTa pre-trained, DECODE fine-tuned models; accuracy (utterance-based vs. unstructured): DECODE Main (Test) 93.2 vs. 96.8; Human-Bot 84.7 vs. 70.0; A2T 91.5 vs. 97.5; RCT 78.4 vs. 34.4.]", "We can see that RoBERTa, Electra, and BART get similar in-domain accuracy on DECODE, around 93%-94%.", "RoBERTa stands out when comparing performance on the human-bot test set, with the highest score of 84.69% across the column, and with better performance on supporting evidence retrieval as well.", "We speculate that this is because the RoBERTa pre-training data has broader coverage than that of Electra and BART.", "We hope future work on dialogue contradiction detection will explore pre-training models on more dialogue-focused corpora.", "The unstructured approach gets higher accuracy on the in-domain test set.", "A direct comparison between unstructured RoBERTa and utterance-based RoBERTa trained on DECODE reveals that the unstructured approach more often than not gets a higher accuracy than its corresponding utterance-based approach when other experimental setups are kept identical.", "Noticeably, unstructured RoBERTa trained on all NLI data got a 97.46% score, whereas the utterance-based variant yielded 94.19%.", "This seemingly indicates that training an unstructured model is able to yield a good representation of the consistency of the dialogue.", "However, analysis on the human-bot and auxiliary test sets shows that such high accuracy is an over-amplification of the model's real understanding ability, as we discuss next.", "The structured utterance-based approach is more robust and more transferable.", "Figure 2 gives a comparison between utterance-based and unstructured RoBERTa on each of the evaluation sets.", "We can see that the utterance-based model is able to maintain satisfactory performance across all the sets, whereas the unstructured model underperforms on the human-bot and RCT auxiliary test sets, with a 34.4% accuracy on RCT compared to 78.4% for utterance-based, in stark contrast to the high performance of the unstructured method on the in-domain DECODE Main test set.", "This result indicates the unstructured approach overfits on superficial patterns in the DECODE Main training data which are still present due to RCT's construction process.", "(Overfitting on superficial patterns is a typical issue and open problem in NLU modeling (Nie et al., 2020a).)", "We also provide further analysis in Appendix E, including experiments showing that simply removing utterances not uttered by the last speaker does not greatly improve the unstructured method.", "The fact that the utterance-based approach has good transferability to the OOD human-bot test set indicates that injecting the correct inductive structure bias is beneficial for modeling dialogue consistency.",
"We believe this is an interesting result generally for research using Transformers, where there is currently a belief amongst some practitioners that they can just use a standard Transformer and it will learn all the structure correctly on its own.", "In our setting that is not the case, and we provide a method that can rectify that failing.", "In general, there is still much room for improvement.", "The results in Table 2 also demonstrate that the modeling of dialogue consistency is a demanding task.", "On the contradiction detection task, the best score achieved by the state-of-the-art pre-trained language models on DECODE (Test-Strict) is 80.86%, and the best human-bot test score is 84.69%.", "Considering that all the examples in the test sets are verified by at least 3 annotators, humans are able to swiftly identify such contradictions.", "This suggests there is a large ability gap between our best automatic detectors and humans.", "Closing this gap is an important challenge for the community.", "Model vs. Human Judgment.", "To further understand the detector predictions and how well they might align with human judgments, we consider the Human-Bot data again.", "We first divide all the utterances into two categories based on whether they are generated by a human or a bot.", "Then, the bot-generated utterances that have been marked by annotators as contradicting utterances are categorized into three sets based on the number of annotators that agree on the contradiction label.",
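The fire rate examined below is simply the share of inputs the detector flags; a sketch (the detector interface is an assumption):

```python
from statistics import mean
from typing import Callable, List

def fire_rate(detect: Callable[[List[str], str], int],
              dialogues: List[List[str]]) -> float:
    """Fraction of dialogues whose last utterance is flagged as a contradiction."""
    return mean(detect(d[:-1], d[-1]) for d in dialogues)
```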
The fire rate of utterance-based RoBERTa trained on DECODE on human utterances is 5.5% contrasting to the 74.3% on 3-agreed contradicting utterances, whereas the fire rates of unstructured RoBERTa on different categories are more clustered together.", "This finding demonstrates that our models can discriminate between utterances with a distinct nature, and the model predictions are aligned with human judgments.", "Moreover, a strong discriminative detector could be a useful tool to stratify utterances.", "Using DECODE as an Automatic Metric.", "The results presented above indicate that the prediction of the detector can easily differentiate between the quality of utterances by humans and the utterances by bots.", "We further investigate whether it can differentiate the quality of the utterances by different bots and be used as an automatic metric checking generation consistency.", "We compare the average contradiction score of the detector with the contradiction rate by human judgments on the utterances generated by different classes of model (bots).", "The bots are the same set of models described in subsection 3.3 from which we collected our human-bot 0.04 0.06 0.08 0.1 0.12 Human Identified Contradiction Rate 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5 A v g .", "dialogue examples.", "The trend in Figure 4 reveals that the scores are positively correlated with human judgments, with a Pearson correlation coefficient of 0.81.", "We would expect that improvement on the DECODE task will directly increase the correlation between the automatically produced detection score and human judgments, where use of such an automatic metric can ease the burden on laborious human evaluation of consistency.", "Given a contradiction detector, an obvious question other than using it as an automatic metric, is: can it be used to improve the consistency of dialogue generation models?", "We consider a very simple way to do that in the state-of-the-art generative model, BlenderBot (BST 2.7B) (Roller et al., 2020).", "During the decoding phase, for decoding methods that can output multiple hypotheses, we simply rerank the top scoring hypotheses using the contradiction detection classifier.", "We use our best performing classifier, our utterance-based RoBERTa model with DECODE fine-tuning, and consider three methods of decoding: beam search, topk sampling (Fan et al., 2018) and sample-and-rank (Adiwardana et al., 2020), and compare the standard and DECODE-reranked decoding methods to each other.", "For beam search we use the best found parameters from (Roller et al., 2020) which are beam size 10, minimum beam length 20 and beam blocking of 3-grams.", "For topk we use k = 40 .", "For Sample-and-Rank we use k =40 and 20 samples.", "We consider the same human-bot dialogue logs as before, but only between Blenderbot BST 2.7B and humans, selecting only contradicting Model + DECODE Human Decoding Strategy Contradict% Contradict% Standard generation Beam Search 69.7% 84.2% Topk ( k = 40 ) 42.1% 69.7% Sample-and-Rank 39.5% 55.3% DECODE Re-ranking Beam Search 46.1% 55.3% Topk ( k = 40 ) 2.6% 39.5% Table 3: Generation Re-ranking using DECODE vs. 
"Table 3 presents the results.", "Automatic metric using DECODE.", "Using our same DECODE contradiction classifier as the automatic metric, as in subsection 5.2, we observe that by re-ranking the beam of beam search (size 10) we can improve the metric.", "Still, 46.1% of the time the detector flags generations as contradictions (vs. 69.7% without re-ranking).", "Upon observation of the outputs, this seems to be due to the beam of beam decoding not being diverse enough (Vijayakumar et al., 2016): when the top-scoring utterance is flagged as contradicting, many of the other utterances in the beam are similar responses with slight rephrasings, and are flagged as contradicting as well.", "Top-k sampling fares much better: re-ranking in our test can very often find at least one of the k = 40 samples that does not trigger the classifier, leaving only a 2.6% contradiction firing rate.", "We note that we expect these numbers to be over-optimistically low because the metric itself is being used to both search (re-rank) and evaluate in this case.", "Human Judgments.", "The last column of Table 3 presents human judgments of the various model generations, judged using the same approach as before with human verifiers, and reporting the percentage of contradictions.", "We observe similar results to the automatic metric findings.", "DECODE re-ranking reduces the number of contradictions, particularly for top-k re-ranking vs. top-k: testing for significance with a Wilcoxon signed-rank test, we get p = 0.051 using two human verifiers and p = 0.023 for three verifiers.", "More detailed results and analysis can be found in Appendix G.", "We introduce the DialoguE COntradiction DEtection task (DECODE) and a new conversational dataset containing both human-human and human-bot contradictory dialogues.", "Training models on DECODE achieves better performance than training on other existing NLI data by a large margin.", "We further propose a structured utterance-based approach, where utterances are paired before being fed into Transformer NLI models, to tackle the dialogue contradiction detection task.", "We show the superiority of such an approach when transferring to out-of-distribution dialogues compared to a standard unstructured approach representative of mainstream NLU modeling.", "We further show that our best contradiction detector correlates with human judgments, and provide evidence for its usage in both automatically checking and improving the consistency of state-of-the-art generative chatbots.", "We thank the reviewers, and Jie Lei and Hao Tan for their helpful discussions.", "YN interned at Facebook.", "YN and MB were later sponsored by NSF-CAREER Award 1846185, DARPA MCS Grant N66001-19-2-4031, and DARPA YFA17-D17AP00022." ]
[ "objective", "result", "objective", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "objective", "abstain", "result", "result", "other", "other", "other", "other", "other", "method", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "other", "other", "other" ]
[ "Sentence compression reduces the length of text by removing non-essential content while preserving important facts and grammaticality.", "Unsupervised objective driven methods for sentence compression can be used to create customized models without the need for ground-truth training data, while allowing flexibility in the objective function(s) that are used for learning and inference.", "Recent unsupervised sentence compression approaches use custom objectives to guide discrete search; however, guided search is expensive at inference time.", "In this work, we explore the use of reinforcement learning to train effective sentence compression models that are also fast when generating predictions.", "In particular, we cast the task as binary sequence labelling and fine-tune a pre-trained transformer using a simple policy gradient approach.", "Our approach outperforms other unsupervised models while also being more efficient at inference time.", "In general, the information content of text is correlated with its length.", "However, for a given text, a shorter version may still convey the essential information while preserving grammaticality (Sid-dharthan, 2014).", "The definition of essential can change depending on the downstream application, thus models for text compression must be able to adapt based on information about the downstream task.", "Sentence compression models have been used as sub-modules of text and speech summarization (Banerjee et al., 2015; Shang et al., 2018), for headline generation (Dorr et al., 2003), subtitle generation (Vandeghinste and Pan, 2004), and summarizing emails (Zajic et al., 2008).", "Potential applications also include snippet generation and highlighting for social media, blog posts or search results.", "Given a particular text compression task, relevant evaluation metrics and auxiliary models of compression quality may not be straightforward to formulate as well-behaved differentiable objectives that can be used with standard backpropagation.", "In addition, ground-truth examples may be difficult to obtain because the annotation task is difficult to fully specify, and metrics which capture different facets of compression quality, such as fluency and optimal sentence length, may be negatively correlated.", "Even in the case where ground-truth examples are available, they are likely to represent only a subset of the possible outputs, so there is a risk of over-fitting or biasing models when relying solely upon a small amount of gold training data for optimization.", "Recent unsupervised sentence compression approaches leverage powerful neural language models to directly optimize objectives such as fluency and faithfulness of compressed sentences, using discrete search strategies, without relying on ground-truth examples (Niu et al., 2019; Zhou and Rush, 2019; Schumann et al., 2020).", "However, these search-based methods are very inefficient at inference-time because the search must navigate through a large candidate space while recomputing expensive reward functions.", "To allow for flexible reward specification, while also enabling efficient inference, we design a simple and effective reinforcement learning (RL) setup: our model is initialized as an unsupervised pretrained language model with an untrained binary classification head (see Figure 2), and the sentence compression task is framed as sequence labeling, with optimization via policy gradient using a suite of reward functions.", "Sentences are compressed in an instantaneous, one-step fashion, 
"This approach simplifies the learning setup while also allowing for high throughput.", "According to quantitative evaluation on several summarization benchmarks, our approach shows similar or superior performance compared to search-based methods, while also being much faster at inference time.", "Our approach to unsupervised extractive sentence compression has the following benefits: Unsupervised: no labelled examples are required.", "Fast inference: at test time, the model only performs one-step sequence labeling.", "Configurable: rewards can be tailored to specific use cases.", "We review related work in Section 2.", "Section 3 formalizes the task.", "Section 4 gives a detailed description of the model and reward functions.", "Section 5 presents experimental results, and Sections 6 and 7 provide analysis and discussion of our findings.", "Early work on sentence compression casts the task as an optimization problem under linguistically motivated constraints (Hori and Furui, 2004; Clarke and Lapata, 2006, 2008).", "The objectives to be optimized include n-gram language model scores and frequency-based word relevance measures.", "Constraints are designed to ensure the grammaticality of compressions.", "Some recent work follows the discrete optimization paradigm while leveraging powerful models as objective functions in place of hand-crafted constraints, while exploring different strategies for heuristic search: Zhou and Rush (2019) use beam search to optimize a fluency and a similarity objective.", "Schumann et al. (2020) use a greedy hill-climbing search to optimize fluency and similarity objectives.", "Niu et al. (2019) use a greedy search with a look-ahead mechanism, only optimizing fluency.", "All of these recent approaches use large neural language models to estimate fluency.", "While the approach presented in this work does not involve discrete search, we consider search-based methods complementary and orthogonal to our RL-based approach (see Section 7 for more discussion).", "Another commonly proposed unsupervised framework is to use autoencoders and reconstruction objectives (Miao and Blunsom, 2016; Févry and Phang, 2018; Malireddy et al., 2020).", "These approaches are based on the assumption that a good sentence compression is one from which the original sentence can be inferred.", "Wang et al. (2018) is an example of prior work using reinforcement learning for unsupervised sentence compression.", "They use a Deep Q-Network to optimize a reward incorporating n-gram language model probabilities and grammatical constraints.", "This model repeatedly deletes a token until it terminates, as opposed to our one-step approach.", "Zhao et al. (2018) also use RL to optimize a syntax-focused language model score.",
"However, their policy is initialized with a supervised sentence compression model, whereas ours is fully unsupervised.", "Reinforcement learning has become popular in the wider field of text summarization, finding applications in both extractive and abstractive sub-tasks.", "One use case of RL is in supervised scenarios, where rewards are computed based on ground-truth examples, e.g., ROUGE scores, to overcome issues with cross-entropy losses (Paulus et al., 2017; Narayan et al., 2018; Dong et al., 2018).", "BANDITSUM (Dong et al., 2018) in particular has a very similar RL setup to ours: they train in one-step episodes where a policy predicts extractive labels and immediately receives a reward.", "Scialom et al. (2019) augment a ROUGE-based reward with a reward based on question answering.", "Böhm et al. (2019) and Stiennon et al. (2020) learn reward functions from human quality ratings of summaries.", "Similar to our unsupervised approach, Laban et al. (2020) use RL for unsupervised abstractive summarization, optimizing reward functions representing fluency and coverage under a length constraint, and also use a policy gradient approach.", "We focus on the specific task of summarizing a sentence by extracting a subset of its tokens in their original order.", "Given an input sentence x consisting of n tokens, x = (x_0, x_1, ..., x_n), we aim to produce a sequence of binary labels y = (y_0, y_1, ..., y_n) ∈ {0, 1}^n, where each label indicates whether the corresponding input token should be included in the compressed version of the sentence.", "We further assume an objective function, or reward function, R(x, y) that measures how well applying the labels y summarizes the original sentence x.", "For a particular x, the goal is to find argmax_y R(x, y), without access to any ground-truth examples.", "In general, there are 2^n possibilities to shorten a sentence in this task.", "A fixed summary length L would reduce this to (n choose L) possibilities, peaking at L = n/2 (for even n).", "We do not constrain our approach to a fixed length, but we compare it to search-based techniques that are constrained to the (n choose L) search space.", "We train a policy π_θ with parameters θ to produce binary labels.", "Given an input x, the policy predicts a binary keep/discard probability distribution for each token index in x.", "We use the notation π_θ(·|x) to refer to the collection of these distributions for all tokens in x.", "We obtain the probability π_θ(y|x) of a label sequence y given an input sequence x as follows: π_θ(y|x) = ∏_i π_θ(y_i|x), (1) where π_θ(y_i|x) is the probability of a token x_i being included if y_i = 1 or excluded if y_i = 0.", "We train our model using a policy gradient technique (Sutton et al., 1999).", "Unlike typical sequential reinforcement learning scenarios, our π_θ only performs one action for a given input, receiving the corresponding reward immediately, without transitioning through other intermediate states.", "Therefore, our setup is similar to a contextual multi-armed bandit problem (Langford and Zhang, 2008), where each 'arm' corresponds to a particular label sequence y = (y_0, y_1, ..., y_n) ∈ {0, 1}^n.", "However, in our scenario, the policy is generally allowed to access rewards for multiple possible actions via sampling, which is different from typical bandit settings where only one (action, reward) pair is available for each episode.",
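Sampling candidate label sequences from these per-token distributions is a one-liner; a sketch (the tensor shape is an assumption):

```python
import torch

def sample_labels(keep_probs: torch.Tensor, k: int) -> torch.Tensor:
    """Draw k label sequences from the per-token Bernoulli distributions
    pi_theta(.|x) of Eq. 1; keep_probs has shape (T,)."""
    return torch.bernoulli(keep_probs.expand(k, -1))  # (k, T) binary samples
```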
training objective is to maximize the expected reward assigned to a predicted label sequence $y$ for a given input $x$, computed by the reward function $R$: $J(\theta) = \mathbb{E}[R(x, y)]$ (3).", "The policy gradient theorem states that the gradient of this expectation can be expressed as follows (Sutton et al., 1999): $\nabla_\theta J(\theta) = \mathbb{E}[R(x, y)\, \nabla_\theta \log \pi_\theta(y \mid x)]$ (4).", "Since the above expectation is intractable for a large dataset and the corresponding action space, this gradient is estimated by sampling: $\nabla_\theta J(\theta) \approx r^s\, \nabla_\theta \log \pi_\theta(y^s \mid x)$ (5), where $y^s \sim \pi_\theta(\cdot \mid x)$ is a sample from the current policy at a given step, consisting of binary token labels $y^s = (y^s_0, y^s_1, \ldots, y^s_n)$, and $r^s = R(x, y^s)$.", "As is commonly done when using policy gradients, we subtract a baseline from the reward for variance reduction.", "We instantiate the baseline as $r^a = R(x, y^a)$, the reward given to the most likely label sequence $y^a$ according to the current policy.", "The gradient becomes: $\nabla_\theta J(\theta) = (r^s - r^a)\, \nabla_\theta \log \pi_\theta(y^s \mid x)$ (6).", "Accordingly, we train our model by minimizing the following loss function: $\mathcal{L} = (r^a - r^s)\, \log \pi_\theta(y^s \mid x)$.", "Using the baseline $r^a$ allows the intuitive interpretation that a sample $y^s$ is encouraged if its reward is higher than the current policy's prediction, i.e., when the factor $(r^a - r^s)$ is negative, and discouraged otherwise.", "Prior work with a similar application of policy gradient (Dong et al., 2018; Laban et al., 2021) observed an advantage in sampling $k$ times and taking the average loss over all samples rather than using a single sample.", "However, in our experiments, we observe that only using the sample with the maximum reward from a large number of samples works significantly better than taking the average or only sampling once.", "A large $k$ improves the discovery of high-quality compressions; if we only use a single sample or a very small $k$, we observe a higher tendency of models to converge on simple behaviors with low reward improvements, such as only extracting the first $L$ tokens of a sentence.", "The choice of $k$ controls a trade-off: with a higher $k$, we spend more time computing the rewards of samples and less on model updates, given a limited wall-time constraint for training.", "We determine $k$ in an unsupervised manner using a validation set (details in Section 5.2).", "$\pi_\theta$ is initialized as a transformer encoder model with a linear classification head.", "In particular, we use the 6-layer DistilRoBERTa model (Sanh et al., 2019) due to its efficiency and smaller size compared to other BERT-like models, while retaining good results on the GLUE benchmark.", "During training, the whole model is fine-tuned.", "For each token in the input, our model will determine whether it should be kept or filtered.", "Figure 2 visualizes the design.", "This architecture produces summaries in an instantaneous, non-autoregressive fashion, allowing for fast prediction (see Section 5.6).", "We do not have direct access to ground-truth training data in our setup, so we consider a suite of reward functions that may correlate with different aspects of sentence compression quality.", "This reward function is intended to ensure grammatically correct and well-written sentences.", "We use a masked language model (LM) to estimate the fluency of a compressed sentence.", "In particular, we compute fluency as the average logit of a token $y_i$ in the compressed sentence $y$.", "We do this without masking $y_i$ to reduce the running time during training, as masking would require re-encoding the sentence 
for each token.", "Based on our experiments, this simplification still produces good estimates of fluency.", "We normalize $R_{fluency}$ by dividing it by an empirically set constant, to keep its values in a similar range compared to the other rewards.", "The constant is an observed minimum value from a sample dataset.", "We argue that a masked language model is more appropriate in our setup compared to a left-to-right (causal) language model: when predicting or sampling a compressed sentence during training, the sentence is treated as a finished rather than an intermediate output, which is not captured by the auto-regressive inference of causal LMs.", "We confirm the advantage of a masked LM over a left-to-right LM in a comparison on a development set (Appendix A).", "We note the precedent for using language models to measure fluency: Zhou and Rush (2019) and Schumann et al. (2020) use language models trained on a summarization target domain, e.g., headlines.", "Laban et al. (2020) use a generic causal language model to estimate fluency.", "Niu et al. (2019) use a masked language model to score candidate compressions.", "The similarity reward is intended to preserve the meaning of the source sentence in the compressed sentence.", "We experiment with several options to compute similarity, all using models from the sentence-transformers library (https://www.sbert.net/) (Reimers and Gurevych, 2019): Bi-Encoder Similarity: A sentence encoder $f$ separately computes embeddings for the source and the predicted summary.", "We calculate the cosine similarity between both embeddings: $R_{sim}(x, y) = \cos(f(x), f(y))$. Cross-Encoder Similarity: Output of a cross-encoder model $f_{sim}$ measuring the semantic textual similarity between both sentences: $R_{sim}(x, y) = f_{sim}(x, y)$. Cross-Encoder NLI: We also test a natural language inference (NLI) model $f_{nli}$ to estimate how well a compressed sentence retains source information.", "The intuition is that the source should imply information in the output: $R_{nli}(x, y) = f_{nli}(y \mid x)$.", "Based on experiments on a development dataset, the bi-encoder similarity performs best in our setup.", "Because our model is non-sequential, we cannot easily employ a hard constraint to control the length of compressed sentences.", "Instead, we impose a soft length control using Gaussian reward functions.", "In particular, we either use a reward function for the length (token count) of a compressed sentence, $R_{len}$, or one for the compression ratio between the source and prediction, in terms of token counts, $R_{cr}$.", "We choose one of these two depending on whether a consistent length or a consistent ratio is desired, which differs for different evaluation datasets.", "We set the distribution means of both rewards as the desired values for word count and compression ratio.", "We set the standard deviations as the mean times a factor $s$, which we set to 0.4 for both reward functions (Equations 9, 10): $R_{len} = \mathcal{N}(\mu_{len}, (s \cdot \mu_{len})^2)$ (9), $R_{cr} = \mathcal{N}(\mu_{cr}, (s \cdot \mu_{cr})^2)$ (10).", "The final reward function is an average of the reward functions $R_{fluency}$ and $R_{sim}$, combined with either $R_{len}$ or $R_{cr}$.", "In practice, when the downstream task is known, reward functions may be designed and calibrated based upon insights and domain expertise, e.g., an optimal summary length for a specific application or different language models corresponding to different summary styles.", "In this work, we only use publicly available and commonly-used off-the-shelf models to construct reward 
functions.", "This section presents a detailed analysis and evaluation results for our proposed model.", "We name our model SCRL (Sentence Compression with Reinforcement Learning).", "We make all code, model outputs and data available at https://github.com/complementizer/rl-sentence-compression.", "We use two datasets for training: Newsroom (Grusky et al., 2018) and Gigaword (Rush et al., 2015).", "For Newsroom, we extract the first three sentences from each article, only keeping sentences with a number of tokens between 15 and 60.", "Newsroom was chosen due to its large size and its variety of unpreprocessed news articles from different sources.", "Ground-truth summaries are not included in the training data; thus the two datasets are treated as large unlabeled text collections.", "We train a model for short headline-like summaries on Gigaword to evaluate it on the Gigaword test set, which comes in a specific preprocessed format (lowercased, pre-tokenized, with rare words and digits replaced with special tokens).", "Training on Gigaword allows us to expose the model to the same preprocessing, for a fair evaluation.", "We constructed a small labelled validation dataset for model development: we automatically identified sentence-summary pairs in Newsroom, also including title-summary pairs, by extracting cases where the tokenized summary is contained in a tokenized sentence, with preserved order.", "We manually filter a subset of these examples based on grammaticality and informativeness and obtain 280 examples.", "This dataset was only used during initial development to compare the different reward function variants discussed in Section 4.3.", "The evaluation includes five test sets; key statistics are listed in Table 1.", "$L_{src}$ and $L_{tgt}$ are the token counts in source and target sentences, and $cr = L_{tgt} / L_{src}$ is the compression ratio.", "Table 1 (overview of the evaluation datasets; Type indicates whether the ground-truth is extractive or abstractive, Size gives the number of sentences): Gigaword: abs, 1951, $L_{src}$ 29.7, $L_{tgt}$ 8.8, cr 0.4; DUC2004: abs, 500, 32.9, 11.9, 0.41; Google: ext, 1000, 27, 11, 0.45; Broadcast: ext, 1370, 19.8, 14.4, 0.76; BNC: ext, 1629, 27.9, 19.3, 0.72.", "Following Schumann et al. (2020), we compare our models on Gigaword against baselines of comparable length brackets using ROUGE F1-scores (we only consider lengths similar to the ground-truth, i.e., 8-10 tokens).", "For DUC2004 (Task 
1), following prior work, we truncate model outputs to 75 characters and compute ROUGE recall scores.", "While Gigaword and DUC2004 contain abstractive ground-truth summaries, the remaining three datasets have token-level extractive ground-truth summaries.", "The ground-truth compressions in the Google sentence compression dataset (Filippova and Altun, 2013) were automatically generated using grammatical constraints and distant supervision via headlines.", "The Broadcast and BNC datasets (Clarke and Lapata, 2008) contain manually created extractive sentence compressions, which tend to be longer compared to the other evaluation datasets.", "Following previous work, we report a simple F1-score based on tokenized predicted and ground-truth summaries on the three extractive datasets, but also measure ROUGE F1 scores.", "We tune our approach in several phases.", "At first, we identify an optimal learning rate and batch size using a grid search with a fixed training duration.", "We compare different settings based on the average reward achieved on an unlabelled, held-out set of the training data.", "Next, we test different values of $k$ (1, 5, 10, 50, 100), the number of samples per step, and pick the best $k$ based on the average reward on the validation set.", "This method of hyperparameter tuning is fully unsupervised.", "Using the learning rate 1e-05, batch size 4 and $k = 100$ identified in the previous runs, we next compare the different options for the similarity reward listed in Section 4.3 and pick the best (bi-encoder similarity) based on the F1-score on our labelled Newsroom-based validation set (see Appendix B).", "We initialize the encoder component of our model with the pretrained 6-layer DistilRoBERTa model (Sanh et al., 2019).", "The binary classifier module is initialized randomly.", "Table 2 (overview of trained models and training time in hours): SCRL-L8: trained on Gigaword, tested on Gigaword, 9 hours; SCRL-L11: trained on Newsroom, tested on DUC2004 and Google, 9.5 hours; SCRL-CR75: trained on Newsroom, tested on Broadcast and BNC, 10 hours.", "We train each model for 8,000 steps with a batch size of 4 on a Google Cloud virtual machine with one NVIDIA Tesla T4 GPU, using the AdamW optimizer (Loshchilov and Hutter, 2019).", "Our default reward combination contains masked-LM fluency and bi-encoder similarity, combined with either $R_{len}$ or $R_{cr}$.", "Table 2 gives an overview of the three models that are used in the evaluation.", "Note that the sample size of 100 is responsible for the long training durations.", "SCRL-L8 and SCRL-L11 are trained with $R_{len}$, whereas SCRL-CR75 is trained with $R_{cr}$, with a compression ratio of 0.75.", "This is because the ground-truth summary lengths are approximated better by a fixed length rather than a fixed ratio in the Google and DUC2004 datasets, whereas a fixed ratio describes the Broadcast and BNC datasets better.", "We compare our model to the greedy stochastic hill climbing approach in Schumann et al. (2020), which obtained state-of-the-art ROUGE results for unsupervised baselines on the Gigaword and DUC2004 datasets.", "Because this method and SCRL do not have identical objective functions, we implement the hill climbing algorithm applied to our reward functions, which we will name HC throughout this work.", "This allows for a clearer comparison between RL and discrete search.", "HC optimizes $R_{fluency}$ and $R_{sim}$ under fixed length constraints instead of using $R_{len}$ and $R_{cr}$.", "Different from Schumann et al. 
(2020), it runs for a fixed number of 2000 steps and restarts only when the search is stuck rather than in equal intervals (details in Appendix E).", "We analyze the performance of HC for different budgets to understand at what point search can surpass the learned policies.", "We also compare against Zhou and Rush (2019), Niu et al. (2019) and the RL-based method by Wang et al. (2018) on datasets where results are available.", "Table 3 shows the evaluation results on all used test datasets.", "Results of methods apart from SCRL and HC are taken from previous works.", "We compute ROUGE scores using the implementation from Google Research.", "On Gigaword, SCRL outperforms all baselines, except Schumann et al. (2020) with a 10-token constraint in ROUGE-2.", "On DUC2004, SCRL remains behind the hill climbing methods, but outperforms other unsupervised baselines.", "On the Google dataset, SCRL obtains state-of-the-art results among unsupervised methods.", "On Broadcast and BNC, SCRL and HC obtain very similar scores, which are both higher than previously reported results.", "Figure 3 shows ROUGE-1 scores obtained by HC at different search budgets, compared to SCRL.", "The hill climbing strategy approaches or outperforms the trained model at different paces, depending on the dataset.", "Interestingly, HC still achieves higher rewards than SCRL relatively early during its search (see Appendix F), which is inconsistent with the evaluation results.", "Potential reasons for this disparity are disadvantages through the hard length constraints, a mismatch between the heuristic reward functions and evaluation metrics, and beneficial biases induced through our training framework.", "We compare the inference-time speed of SCRL with HC using different budgets of search steps (timed on a Google Colab notebook with a Tesla P-100 GPU).", "The fastest batch size for both approaches is used.", "The Inference Time column in Table 3 shows the average number of seconds per processed sentence, with the number of search steps set to $T = 2000$ for HC.", "SCRL is roughly 4000× faster than HC with $T = 2000$, and 200× faster when $T$ is reduced to 100, for example.", "We believe that such a speed-up with a preserved evaluation performance is a critical factor when considering real-world applications of sentence compression.", "The length and compression ratio of summaries produced by SCRL are distributed around the desired values, with peakier distributions than in ground-truth summaries (examples in Figure 4: distribution of summary lengths and compression ratios).", "HC produces exactly the desired value whenever possible, due to the enforced constraint for length or ratio.", "Figure 5 shows how SCRL and HC extract tokens from different relative positions within source sentences.", "SCRL has a higher tendency to extract early tokens.", "We hypothesize that this is a reliable high-reward strategy discovered during training, considering that a milder form of the lead-bias also shows in HC.", "Note that neither method is inherently biased in its design to prefer tokens from certain regions.", "Figure 6 shows how rewards and summary length develop throughout training.", "The rewards generally increase quickly in the first few hundred training steps and then continue to grow very slowly.", "Fluency starts to increase later than the other reward functions, which is likely related to our observation that it is more sensitive to small changes in a summary.", 
"Interestingly, the summary lengths develop differently depending on the length or compression setting SCRL-L8 and SCRL-L11 start with short summaries and increase the size over time whereas SCRL-CR75 starts with long summaries before settling on a shorter certain range.", "Our models learn a variety of behaviors to compress sentences, such as removing articles, auxiliary verbs, relative clauses and temporal expressions.", "Figure 7 shows some examples.", "Even though our models learn to produce grammatical sentences fairly well, grammatical errors do still appear, and are more common for the models with a short output length (SCRL-8, SCRL-11).", "In some cases, semantic errors occur where the original meaning is changed or made unintel-ligeble.", "Both SCRL and HC are susceptible of semantic and grammatical errors, as can be seen in some examples in Appendix G. A type of error that is specific to SCRL is the splitting or merging of tokens resulting from its operation on Byte Pair Encoding-based subword tokens (more details in Appendix C).", "To demonstrate that our approach is flexible for customization, we pick a simple example of re-1274", "programming model behavior using a hand-crafted reward function.", "We note that in some cases, the model unnecessarily keeps day references in compressed sentences, such as \"Thursday\" or \"yester-day\".", "We construct a simple reward function that returns zero if any day-like word from a small gazetteer appears in an output and a score of 1 otherwise.", "We fine-tune an existing model with this additional reward and observe that it successfully avoids including day-words that the previous model would include.", "Importantly, it additionally learned to remove other tokens attached to day-words, e.g. \"on\" in \"on Monday\", keeping the sentences grammatical.", "Table 4 shows some examples.", "Empirically, the new model's outputs contain words from the gazeteer in 1% of cases where they appear in the source, compared to 12% in the initial model.", "We argue that RL offers the following advantages over discrete search strategies for sentence compression and similar text editing or generation tasks.", "The necessary search and exploration is moved into the training stage, allowing fast inference independently of how efficient objectives are to compute.", "Furthermore, discrete search unnecessarily spends time navigating through low-quality outputs that a trained model can quickly learn to avoid.", "Limitations of our approach compared to the search-based approach are its lesser flexibility in terms of on-the-fly customization and a sensitivity to disparities between training data and the application domain.", "Furthermore, the trained models show a lower capability to optimize the selected objectives compared to search, though this does not have a negative impact on the evaluation in most cases.", "The fact that most of our training time is spent on estimating the quality of sampled compressions due to large sample size k , shows that our approach is somewhat similar to large-scale search strategies applied to a whole dataset, with the difference that the sampling behavior at each step changes over time and is informed by previous steps.", "This suggests that discrete search could support the RL training, similarly to the learning-from-search approach described by (Li et al., 2020).", "This work presents a simple and effective approach for learning sentence compression models based on objective functions rather than ground-truth examples.", "Because 
it is unsupervised, it is well-suited for creating customized applications even when no gold training data is available, allowing for task-specific tuning based on arbitrary sets of reward functions, which do not need to be differentiable.", "Importantly, our approach is very fast at inference time compared to alternative discrete search-based methods.", "We are interested in several future directions related to this work:", "1) systematic approaches to design reward functions for summarization,", "2) RL-based summarization models with length control on the fly,", "3) testing our approach on other languages, and", "4) the design of curricula for different reward functions as they might pose varying difficulties at different stages of the training.", "This work was funded by the Irish Research Council (IRC) under grant number EBPPG/2018/23, the Science Foundation Ireland (SFI) under grant number 12/RC/2289_P2 and the enterprise partner Aylien Ltd." ]
[ "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "other", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "other", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "result", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "Adversarial training has shown impressive success in learning bilingual dictionary without any parallel data by mapping monolingual embeddings to a shared space.", "However, recent work has shown superior performance for non-adversarial methods in more challenging language pairs.", "In this work, we revisit adversarial autoencoder for unsupervised word translation and propose two novel extensions to it that yield more stable training and improved results.", "Our method includes regularization terms to enforce cycle consistency and input reconstruction, and puts the target encoders as an adversary against the corresponding discriminator.", "Extensive experimentations with European, non-European and low-resource languages show that our method is more robust and achieves better performance than recently proposed adversarial and non-adversarial approaches.", "Learning cross-lingual word embeddings has been shown to be an effective way to transfer knowledge from one language to another for many key linguistic tasks including machine translation, named entity recognition, part-of-speech tagging, and parsing (Ruder et al., 2017).", "While earlier efforts solved the associated word alignment problem using large parallel corpora (Luong et al., 2015), broader applicability demands methods to relax this requirement since acquiring a large corpus of parallel data is not feasible in most scenarios.", "Recent methods instead use embeddings learned from monolingual data, and learn a linear mapping from one language to another with the underlying assumption that two embedding spaces exhibit similar geometric structures ( i.e., approximately isomorphic ).", "This allows the model to learn effective cross-lingual representations without expensive supervision (Artetxe et al., 2017).", "Given monolingual word embeddings of two languages, Mikolov et al. (2013a) show that a linear mapping can be learned from a seed dictionary of 5000 word pairs by minimizing the sum of squared Euclidean distances between the mapped vectors and the target vectors.", "Subsequent works (Xing et al., 2015; Artetxe et al., 2016, 2017; Smith et al., 2017) propose to improve the model by normalizing the embeddings, imposing an orthogonality constraint on the mapper, and modifying the objective function.", "While these methods assume some supervision in the form of a seed dictionary, recently fully unsupervised methods have shown competitive results.", "Zhang et al. (2017a,b) first reported encouraging results with adversarial training .", "Conneau et al. (2018) improved this approach with post-mapping refinements, showing impressive results for several language pairs.", "Their learned mapping was then successfully used to train a fully unsupervised neural machine translation system (Lample et al., 2018a,b).", "Although successful, adversarial training has been criticized for not being stable and failing to converge, inspiring researchers to propose non-adversarial methods more recently (Xu et al., 2018a; Hoshen and Wolf, 2018; Alvarez-Melis and Jaakkola, 2018; Artetxe et al., 2018b).", "In particular, Artetxe et al. (2018b) show that the adversarial methods of Conneau et al. (2018) and Zhang et al. 
(2017a,b) fail for many language pairs.", "In this paper, we revisit adversarial training and propose a number of key improvements that yield more robust training and improved mappings.", "Our main idea is to learn the cross-lingual mapping in a projected latent space and add more constraints to guide the unsupervised mapping in this space.", "We accomplish this by proposing a novel adversarial autoencoder framework (Makhzani et al., 2015), where adversarial mapping is done at the (latent) code space as opposed to the original embedding space (Figure 1).", "This gives the model the flexibility to automatically induce the required geometric structures in its latent code space that could potentially yield better mappings.", "Søgaard et al. (2018) recently find that the isomorphic assumption made by most existing methods does not hold in general even for two closely related languages like English and German.", "In their words, approaches based on this assumption have \"important limitations\".", "By mapping the latent vectors through adversarial training, our approach therefore departs from the isomorphic assumption.", "In our adversarial training, not only the mapper but also the target encoder is trained to fool the discriminator.", "This forces the discriminator to improve its discrimination skills, which in turn pushes the mapper to generate indistinguishable translations.", "To guide the mapping, we include two additional constraints.", "Our first constraint enforces cycle consistency so that code vectors, after being translated from one language to another and then translated back to their source space, remain close to the original vectors.", "The second constraint ensures reconstruction of the original input word embeddings from the back-translated codes.", "This grounding step forces the model to retain word semantics during the mapping process.", "We conduct a series of experiments with six different language pairs (in both directions) comprising European, non-European, and low-resource languages from two different datasets.", "Our results show that our model is more robust and yields significant gains over Conneau et al. (2018) for all translation tasks in all evaluation measures.", "Our method also gives a better initial mapping compared to other existing methods (Artetxe et al., 2018b).", "We also perform an extensive ablation study to understand the contribution of different components of our model.", "The study reveals that cycle consistency contributes the most, while adversarial training of the target encoder and post-cycle reconstruction also have a significant effect.", "We have released our source code at https://ntunlpsg.github.io/project/unsup-word-translation/. The remainder of this paper is organized as follows.", "After discussing related work in Section 2, we present our unsupervised word translation approach with adversarial autoencoder in Section 3. We describe our experimental setup in Section 4, and present our results with in-depth analysis in Section 5. Finally, we summarize our findings with possible future directions in Section 6. In recent years, a number of methods have been proposed to learn bilingual dictionaries from monolingual word embeddings (see Ruder et al. (2017) for a survey).", "Many of these methods use an initial seed dictionary.", "Mikolov et al. 
(2013a) show that a linear transformation can be learned from a seed dictionary of 5000 pairs by minimizing the squared Euclidean distance.", "In their view, the key reason behind the good performance of their model is the similarity of geometric arrangements in vector spaces of the embeddings of different languages.", "For translating a new source word, they map the corresponding word embedding to the target space using the learned mapping and find the nearest target word.", "In their approach, they found that a simple linear mapping works better than non-linear mappings with multilayer neural networks.", "Xing et al. (2015) enforce the word vectors to be of unit length during the learning of the embeddings and modify the objective function for learning the mapping to maximize the cosine similarity instead of using Euclidean distance.", "To preserve length normalization after mapping, they enforce the orthogonality constraint on the mapper.", "Instead of learning a mapping from the source to the target embedding space, Faruqui and Dyer (2014) use a technique based on Canonical Correlation Analysis (CCA) to project both source and target embeddings to a common low-dimensional space, where the correlation of the word pairs in the seed dictionary is maximized.", "Artetxe et al. (2016) show that the above methods are variants of the same core optimization objective and propose a closed form solution for the mapper under the orthogonality constraint.", "Smith et al. (2017) find that this solution is closely related to the orthogonal Procrustes solution.", "In their follow-up work, Artetxe et al. (2017) obtain competitive results using a seed dictionary of only 25 word pairs.", "They propose a self-learning framework that performs two steps iteratively until convergence.", "In the first step, they use the dictionary (starting with the seed) to learn a linear mapping, which is then used in the second step to induce a new dictionary.", "A more recent line of research attempts to eliminate the seed dictionary entirely and learn the mapping in a purely unsupervised way.", "This was first proposed by Miceli Barone (2016), who initially used an adversarial network similar to Conneau et al. (2018), and found that the mapper (which is also the encoder) translates everything to a single embedding, known commonly as the mode collapse issue (Goodfellow, 2017).", "To preserve diversity in mapping, he used a decoder to reconstruct the source embedding from the mapped embedding, extending the framework to an adversarial autoencoder.", "His preliminary qualitative analysis shows encouraging results, but not competitive with methods using bilingual seeds.", "He suspected issues with training and with the isomorphic assumption.", "In our work, we successfully address these issues with an improved model that also relaxes the isomorphic assumption.", "Our model uses two separate autoencoders, one for each language, which allows us to put more constraints to guide the mapping.", "We also distinguish the role of an encoder from the role of a mapper.", "The encoder projects embeddings to latent code vectors, which are then translated by the mapper.", "Zhang et al. 
(2017a) improved adversarial training with orthogonal parameterization and cycle consistency.", "To aid training, they incorporate additional techniques like noise injection, which works as a regularizer.", "For selecting the best model, they rely on sharp drops of the discriminator accuracy.", "In their follow-up work (Zhang et al., 2017b), they minimize the Earth-Mover's distance between the distribution of the transformed source embeddings and the distribution of the target embeddings.", "Conneau et al. (2018) show impressive results with adversarial training and refinement with the Procrustes solution.", "Instead of using the adversarial loss, Xu et al. (2018a) use the Sinkhorn distance and adopt cycle consistency inspired by CycleGAN (Zhu et al., 2017).", "We also incorporate cycle consistency along with the adversarial loss.", "However, while all these methods learn the mapping in the original embedding space, our approach learns it in the latent code space, considering both the mapper and the target encoder as adversaries.", "In addition, we use a post-cycle reconstruction to guide the mapping.", "A number of non-adversarial methods have also been proposed recently.", "Artetxe et al. (2018b) learn an initial dictionary by exploiting the structural similarity of the embeddings and use a robust self-learning algorithm to improve it iteratively.", "Hoshen and Wolf (2018) align the second moment of word distributions of the two languages using principal component analysis (PCA) and then refine the alignment iteratively using a variation of the Iterative Closest Point (ICP) method used in computer vision.", "Alvarez-Melis and Jaakkola (2018) cast the problem as an optimal transport problem and exploit the Gromov-Wasserstein distance, which measures how similarities between pairs of words relate across languages.", "Let $X = \{x_1, \ldots, x_n\}$ and $Y = \{y_1, \ldots, y_m\}$ be two sets consisting of $n$ and $m$ word embeddings of $d$ dimensions for a source and a target language, respectively.", "We assume that $X$ and $Y$ are trained independently from monolingual corpora.", "Our aim is to learn a mapping $f(x)$ in an unsupervised way (i.e., no bilingual dictionary given) such that for every $x_i$, $f(x)$ corresponds to its translation in $Y$.", "Our overall approach follows the same sequence of steps as Conneau et al. (2018):", "(i) Induction of a seed dictionary through adversarial training.", "(ii) Iterative refinement of the initial mapping through the Procrustes solution.", "(iii) Apply CSLS for nearest neighbor search.", "We propose a novel adversarial autoencoder model to learn the initial mapping for inducing a seed dictionary in step (i), and we adopt existing refinement methods for steps (ii) and (iii).", "Our proposed model (Figure 1) has two autoencoders, one for each language.", "Each autoencoder comprises an encoder $E_X$ (resp. $E_Y$) and a decoder $D_X$ (resp. $D_Y$).", "The encoders transform an input $x$ (resp. $y$) into a latent code $z_x$ (resp. 
$z_y$) from which the decoders try to reconstruct the original input.", "We use a linear encoder and an $\ell_2$ reconstruction loss: $z_{x_i} = E_X x_i$; $\hat{x}_i = D_X z_{x_i}$ (1); $L_{autoenc_X}(E_X, D_X) = \frac{1}{n} \sum_{i=1}^{n} \|x_i - \hat{x}_i\|^2$ (2), where $E_X \in \mathbb{R}^{c \times d}$ and $D_X \in \mathbb{R}^{d \times c}$ are the parameters of the encoder and the decoder for $d$-dimensional word embeddings and $c$-dimensional code vectors.", "Figure 1: Our proposed adversarial autoencoder framework for unsupervised word translation.", "We also experimented with a non-linear encoder, but it did not work well.", "The encoder, decoder and the reconstruction loss for the other autoencoder (autoenc$_Y$) are similarly defined.", "Let $q(z_x | x)$ and $q(z_y | y)$ be the encoding distributions of the two autoencoders.", "We use adversarial training to find a mapping between $q(z_x | x)$ and $q(z_y | y)$.", "This is in contrast with most existing methods (e.g., Conneau et al. (2018); Artetxe et al. (2017)) that directly map the distribution of the source word embeddings $p(x)$ to the distribution of the target $p(y)$.", "As Søgaard et al. (2018) pointed out, isomorphism does not hold in general between the word embedding spaces of two languages.", "Mapping the latent codes gives our model more flexibility to induce the required semantic structures in its code space that could potentially yield more accurate mappings.", "As shown in Figure 1, we include two linear mappings $G: Z_x \to Z_y$ and $F: Z_y \to Z_x$ to project the code vectors (samples from $q(\cdot | \cdot)$) from one language to the other.", "In addition, we have two language discriminators, $L_X$ and $L_Y$.", "The discriminators are trained to discriminate between the mapped codes and the encoded codes, while the mappers and encoders are jointly trained to fool their respective discriminator.", "This results in a three-player game, where the discriminator tries to identify the origin of a code, and the mapper and the encoder act together to prevent the discriminator from succeeding by making the mapped vector and the encoded vector as similar as possible.", "Discriminator Loss: Let $\theta_{L_X}$ and $\theta_{L_Y}$ denote the parameters of the two discriminators, and $W_G$ and $W_F$ the mapping weight matrices.", "The loss for the source discriminator $L_X$ can be written as", "$L_{L_X}(\theta_{L_X} | W_F, \theta_{E_X}) = -\frac{1}{m} \sum_{j=1}^{m} \log P_{L_X}(src = 0 | F(z_{y_j})) - \frac{1}{n} \sum_{i=1}^{n} \log P_{L_X}(src = 1 | z_{x_i})$ (3), where $P_{L_X}(src | z)$ is the probability according to $L_X$ to distinguish whether $z$ is coming from the source encoder ($src = 1$) or from the target-to-source mapper $F$ ($src = 0$).", "The discrimination loss $L_{L_Y}(\theta_{L_Y} | W_G, \theta_{E_Y})$ is similarly defined for the target discriminator $L_Y$ using $G$ and $E_Y$.", "Our discriminators have the same architecture as Conneau et al. (2018).", "It is a feed-forward network with two hidden layers of size 2048 and Leaky-ReLU activations.", "We apply dropout with a rate of 0.1 on the input to the discriminators.", "Instead of using 1 and 0, we also apply a smoothing coefficient ($s = 0.2$) 
in the discriminator loss.", "Adversarial Loss: The mappers and encoders are trained jointly with the following adversarial loss to fool their respective discriminators.", "$L_{adv}(W_F, \theta_{E_X} | \theta_{L_X}) = -\frac{1}{m} \sum_{j=1}^{m} \log P_{L_X}(src = 1 | F(z_{y_j})) - \frac{1}{n} \sum_{i=1}^{n} \log P_{L_X}(src = 0 | z_{x_i})$ (4)", "The adversarial loss for mapper $G$ and encoder $E_Y$ is similarly defined.", "Note that we consider both the mapper and the target encoder as generators.", "This is in contrast to existing adversarial methods, which do not use any autoencoder on the target side.", "The mapper and the target encoder team up to fool the discriminator.", "This forces the discriminator to improve its skill, and vice versa for the generators, forcing them to produce indistinguishable codes through better mapping.", "Cycle Consistency and Reconstruction: The adversarial method introduced above maps a bag of source embeddings to a bag of target embeddings, and in theory, the mapper can match the target language distribution.", "However, mapping at the bag level is often insufficient to learn the individual word-level mappings.", "In fact, there exists an infinite number of possible mappings that can match the same target distribution.", "Thus, to learn better mappings, we need to enforce more constraints on our objective.", "The first form of constraint we consider is cycle consistency to ensure that a source code $z_x$ translated to the target language code space, and translated back to the original space, remains unchanged, i.e., $z_x \to G(z_x) \to F(G(z_x)) \approx z_x$.", "Formally, the cycle consistency loss in one direction is: $L_{cyc}(W_G, W_F) = \frac{1}{n} \sum_{i=1}^{n} \|z_{x_i} - F(G(z_{x_i}))\|$ (5). The loss in the other direction ($z_y \to F(z_y) \to G(F(z_y)) \approx z_y$) is similarly defined.", "We compute the post-cycle reconstruction loss for the source autoencoder as follows: $L_{rec}(\theta_{E_X}, \theta_{D_X}, W_G, W_F) = \frac{1}{n} \sum_{i=1}^{n} \|x_i - D_X(F(G(z_{x_i})))\|^2$ (6). The reconstruction loss at the target autoencoder is defined similarly.", "In addition to cycle consistency, we include another constraint to guide the mapping further.", "In particular, we ask the decoder of the respective autoencoder to reconstruct the original input from the back-translated code.", "Apart from improved mapping, both cycle consistency and reconstruction lead to more stable training in our experiments.", "Specifically, they help our training to converge and get around the mode collapse issue (Goodfellow, 2017).", "Since the model now has to translate the mapped code back to the source code and reconstruct the original word embedding, the generators cannot get away with mapping all source codes to a single target code.", "$L_{src \to tar} = L_{adv} + \lambda_1 L_{cyc} + \lambda_2 L_{rec}$ (7)", "where $\lambda_1$ and $\lambda_2$ control the relative importance of the three loss components.", "Similarly, we define the total loss for mapping in the opposite direction, $L_{tar \to src}$.", "The complete objective of our model is: $L_{total} = L_{src \to tar} + L_{tar \to src}$ (8). We present the training procedure of our model and the overall word translation process in Algorithm 1. 
We first pre-train the autoencoders separately on monolingual embeddings (Step 1).", "This pre-training is required to induce word semantics (and relations) in the latent code space.", "In each adversarial training step, we first update the discriminators, one at a time with a random batch.", "Then we update the generators (the mapper and target encoder) on the adversarial loss.", "The mappers then go through two more updates, one for cycle consistency and another for post-cycle reconstruction.", "The autoencoders (encoder-decoder) in this stage get updated only on the post-cycle reconstruction loss.", "We also apply the orthogonalization update to the mappers following Conneau et al. (2018) with $\beta = 0.01$.", "Our training setting is similar to Conneau et al. (2018), and we apply the same pre- and post-processing steps.", "We use stochastic gradient descent (SGD) with a batch size of 32, a learning rate of 0.1, and a decay of 0.98.", "For selecting the best model, we use the unsupervised validation criterion proposed by Conneau et al. (2018), which correlates highly with the mapping quality.", "In this criterion, the 10,000 most frequent source words along with their nearest neighbors in the target space are considered.", "The average cosine similarity between these pseudo translations is used as the validation metric.", "The initial bilingual dictionary induced by adversarial training (or any other unsupervised method) is generally of lower quality than what could be achieved by a supervised method.", "Conneau et al. (2018) and Artetxe et al. (2018b) propose fine-tuning methods to refine the initial mappings.", "Similar to Conneau et al. (2018), we fine-tune our initial mappings ($G$ and $F$) by iteratively solving the Procrustes problem and applying a dictionary induction step.", "This method uses singular value decomposition, or SVD, of $Z_y^T Z_x$ to find the optimal mapping $G$ (similarly, SVD($Z_x^T Z_y$) for $F$) given the approximate alignment of words from the previous step.", "For generating the synthetic dictionary in each iteration, we only consider the translation pairs that are mutual nearest neighbors.", "In our fine-tuning, we run five iterations of this process.", "For finding the nearest neighbors, we use Cross-domain Similarity Local Scaling (CSLS), which works better in mitigating the hubness problem (Conneau et al., 2018).", "Following tradition, we evaluate our model on the word translation (a.k.a. bilingual lexicon induction) task, which measures the accuracy of the predicted dictionary against a gold standard dictionary.", "We evaluate our model on two different datasets.", "The first one is from Conneau et al. (2018), which consists of FastText monolingual embeddings of (d =) 300 dimensions (Bojanowski et al., 2017) trained on the Wikipedia monolingual corpus, and gold dictionaries for 110 language pairs (available at https://github.com/facebookresearch/MUSE).", "To show the generality of different methods, we consider European, non-European and low-resource languages.", "In particular, we evaluate on English (En) from/to Spanish (Es), German (De), Italian (It), Arabic (Ar), Malay (Ms), and Hebrew (He).", "We also evaluate on the more challenging dataset of Dinu et al. (2015) and its subsequent extension by Artetxe et al. 
(2018a).", "We will refer to this dataset as the Dinu-Artetxe dataset.", "From this dataset, we choose to experiment on English from/to Italian and Spanish.", "English and Italian embeddings were trained on WacKy corpora using CBOW (Mikolov et al., 2013b), while the Spanish embeddings were trained on WMT News Crawl.", "The CBOW vectors are also of 300 dimensions.", "We compare our method with the unsupervised models of Conneau et al. (2018), Artetxe et al. (2018b), Alvarez-Melis and Jaakkola (2018), Xu et al. (2018a), and Hoshen and Wolf (2018).", "To evaluate how our unsupervised method compares with methods that rely on a bilingual seed dictionary, we follow Conneau et al. (2018) and compute a supervised baseline that uses the Procrustes solution directly on the seed dictionary (5000 pairs) to learn the mapping function, and then uses CSLS to do the nearest neighbor search.", "We also compare with the supervised approaches of Artetxe et al. (2017, 2018a), which to our knowledge are the state-of-the-art supervised systems.", "For some of the baselines, results are reported from their papers, while for the rest we report results obtained by running the publicly available code on our machine.", "For training our model on European languages, the weight for cycle consistency ($\lambda_1$) in Eq. 
Here our method gives more gains ranging from 1.8 to 4.3%.", "Note specifically that Malay (Ms) is a low-resource language, and the FastText contains word vectors for only 155K Malay words.", "We found their model to be very fragile for En from/to Ms, and does not converge at all for Ms En.", "We ran their code 10 times for Ms En but failed every time.", "Compared to that, our method is more robust and converged most of the time we ran.", "et al., 2017) in Table 3, we see here also our method performs better than their method in all the four translation tasks involving European language pairs.", "In this dataset, our method shows more robustness compared to their method.", "For example, their method had difficulties in converging for En from/to Es translations; for En Es, it converges only 2 times out of 10 attempts, while for Es En it did not converge a single time in 10 attempts.", "Compared to that, our method was more robust, converging 4 times out of 10 attempts.", "In Section 5.3, we compare our model with Conneau et al. (2018) more rigorously by evaluating them with and without fine-tuning and measuring their performance on P@1, P@5, and P@10.", "In this section, we compare our model with other state-of-the-art methods that do not follow the same procedure as us and Conneau et al. (2018).", "For example, Artetxe et al. (2018b) do the initial mapping in the similarity space, then they apply a different self-learning method to fine-tune the embeddings, and perform a final refinement with symmetric re-weighting.", "Instead of mapping from source to target, they map both source and target embeddings to a common space.", "Let us first consider the results for European language pairs on the dataset of Conneau et al. (2018) in Table 1. Our Adversarial autoencoder + Conneau et al. (2018) Refinement performs better than most of the other methods on this dataset, achieving the highest accuracy for 4 out of 6 translation tasks.", "For De En, our result is very close to the best system of Artetxe et al. (2018b) with only 0.2% difference.", "On the dataset of Dinu et al. (2015); Artetxe et al. (2017) in Table 3, our Adversarial autoencoder + Conneau et al. (2018) Refinement performs better than other methods except Artetxe et al. (2018b).", "On average our method lags behind by about 2%.", "However, as mentioned, they follow a different refinement and mapping methods.", "For non-European and low-resource language pairs in Table 2, our Adversarial autoencoder + Conneau et al. (2018) Refinement exhibits better performance than others in one translation task, where the model of Artetxe et al. (2018b) performs better in the rest.", "One important thing to notice here is that other unsupervised models (apart from ours and Artetxe et al. (2018b)) fail to converge in one or more language pairs.", "We notice that the method of Artetxe et al. (2018b) gives better results than other baselines, even in some translation tasks they achieve the highest accuracy.", "To understand whether the improvements of their method are due to a better initial mapping or better post-processing, we conducted two additional experiments.", "In our first experiment, we use their method to induce the initial seed dictionary and then apply iterative Procrustes solution (same refinement procedure of Conneau et al. 
(2018)) for refinement.", "Table 4 shows the results.", "Surprisingly, on both datasets their initial mappings fail to produce any reasonable results.", "So we suspect that the main gain in (Artetxe et al., 2018b) comes from their fine-tuning method, which they call robust self learn-En-It En-Es Dinu-Artetxe Dataset ** ** ** ** Conneau Dataset 01.2 01.6 04.7 05.1 Table 4: Conneau et al. (2018) refinement applied to the initial mappings of Artetxe et al. (2018b).", "ing .", "In our second experiment, we use the initial dictionary induced by our adversarial training and then apply their refinement procedure.", "Here for most of the translation tasks, we achieve better results; see the model Adversarial autoencoder + Artetxe et al. (2018b) Refinement in Tables 1 3. These two experiments demonstrate that the quality of the initial dictionary induced by our model is far better than that of Artetxe et al. (2018b).", "We further analyze our model by dissecting it and measuring the contribution of each novel component that is proposed in this work.", "We achieve this by incrementally removing a new component from the model and evaluating it on different translation tasks.", "In order to better understand the contribution of each component, we evaluate each model by measuring its P@1 , P@5 , and P@10 with fine-tuning and without fine-tuning .", "In case of without fine-tuning , the models apply the CSLS neighbor search directly on the mappings learned from the adversarial training, i.e., no Procrustes solution based refinement is done after the adversarial training.", "This setup allows us to compare our model directly with the adversarial model of Conneau et al. (2018), putting the effect of fine-tuning aside.", "Table 5 presents the ablation results for En-Es, En-De, and En-It in both directions.", "The first row ( Conneau-18 ) presents the results of Conneau et al. (2018) that uses adversarial training to map the word embeddings .", "The next row shows the results of our full model.", "The subsequent rows incrementally detach one component from our model.", "For example, Enc.", "adv denotes the variant of our model where the target encoder is not trained on the adversarial loss ( EX in Eq. 4); Recon excludes the post-cycle reconstruction loss from Enc.", "adv , and Cycle excludes the cycle consistency from Recon .", "Thus, Cycle is a variant of our model that uses only adversarial loss to learn the mapping.", "However, it is important En Es Es En En De De En En It It En P@1 P@5 P@10 P@1 P@5 P@10 P@1 P@5 P@10 P@1 P@5 P@10 P@1 P@5 P@10 P@1 P@5 P@10 Without Fine-Tuning Conneau-18 65.3 73.8 80.6 66.7 78.3 80.8 61.5 70.1 78.2 60.3 70.2 77.0 64.8 75.3 79.4 63.8 77.1 81.8 Our (full) 71.8 81.1 85.7 72.7 81.5 83.8 64.9 74.4 81.8 63.1 71.3 79.8 68.2 78.9 83.7 67.5 77.6 82.1 Enc.", "As we compare our full model with the model of Conneau et al. (2018) in the without fine-tuning setting, we notice large improvements in all measures across all datasets: 5.1 7.3% in En Es, 3 6% in Es En, 3.4 4.3% in En De, 1 3% in De En, 3.4 4.3% in En It, and 0.3 3.7% in It En.", "These improvements demonstrate that our model finds a better mapping compared to Conneau et al. 
(2018).", "Among the three components, the cycle consistency is the most influential one across all languages.", "Training the target encoder adversarially also gives a significant boost.", "The reconstruction has less impact.", "If we compare the results of Cycle with Conneau-18 , we see sizeable gains for En-Es in both directions.", "This shows the benefits of mapping at the code level.", "Now let us turn our attention to the results with fine-tuning.", "Here also we see gains across all datasets for our model, although the gains are not as verbose as before (about 1% on average).", "However, this is not surprising as it has been shown that iterative fine-tuning with Procrustes solution is a robust method that can recover many errors made in the initial mapping (Conneau et al., 2018).", "Given a good enough initial mapping, the measures converge nearly to the same point even though the differences were comparatively more substantial initially; for example, notice that the scores are very similar for P@5 and P@10 measures after fine-tuning.", "We have proposed an adversarial autoencoder framework to learn the cross-lingual mapping of monolingual word embeddings of two languages in a completely unsupervised way.", "In contrast to the existing methods that directly map word embeddings, our method first learns to transform the embeddings into latent code vectors by pretraining an autoencoder.", "We apply adversarial training to map the distributions of the source and target code vectors.", "In our adversarial training, both the mapper and the target encoder are treated as generators that act jointly to fool the discriminator.", "To guide the mapping further, we include constraints for cycle consistency and post-cycle reconstruction.", "Through extensive experimentations on six different language pairs comprising European, non-European and low-resource languages from two different data sources, we demonstrate that our method outperforms the method of Conneau et al. (2018) for all translation tasks in all measures (P@ { 1,5,10 } ) across all settings (with and without fine-tuning).", "Comparison with other existing methods also shows that our method learns better mapping (not considering the fine-tuning).", "With an ablation study, we further demonstrated that the cycle consistency is the most important component followed by the adversarial training of target encoder and the post-cycle reconstruction.", "In future work, we plan to incorporate knowledge from the similarity space in our adversarial framework.", "The authors would like to thank the funding support from MOE Tier-1 (Grant M4011897.020)." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "method", "result", "method", "objective", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "abstain", "abstain", "result", "objective", "abstain", "other" ]
[ "DocRED is a widely used dataset for document-level relation extraction.", "In the large-scale annotation, a recommend-revise scheme is adopted to reduce the workload.", "Within this scheme, annotators are provided with candidate relation instances from distant supervision, and they then manually supplement and remove relational facts based on the recommendations.", "However, when comparing DocRED with a subset relabeled from scratch, we find that this scheme results in a considerable amount of false negative samples and an obvious bias towards popular entities and relations.", "Furthermore, we observe that the models trained on DocRED have low recall on our relabeled dataset and inherit the same bias in the training data.", "Through the analysis of annotators' behaviors, we figure out the underlying reason for the problems above: the scheme actually discourages annotators from supplementing adequate instances in the revision phase.", "We appeal to future research to take into consideration the issues with the recommend-revise scheme when designing new models and annotation schemes.", "The relabeled dataset is released at https://github.", "com/AndrewZhe/Revisit-DocRED , to serve as a more reliable test set of document RE models.", "Relation Extraction (RE) is an important task which aims to identify relationships held between entities in a given piece of text.", "While most previous methods focus on extracting relations from a single sentence (Lin et al., 2016; Zhang et al., 2018), recent studies begin to explore RE at document level (Peng et al., 2017; Zeng et al., 2020; Nan et al., 2020; Huang et al., 2021; Zhang et al., 2021), which is more challenging as it often requires reasoning across multiple sentences.", "The rapid development of document-level RE in the past two years has benefited from the proposal of DocRED (Yao et al., 2019), the first large-scale and human-annotated dataset for this task.", "Noticeably, longer documents introduce an unprecedented difficulty in annotating the relation instances: as the total number of entities dramatically increases in accordance to text length, the expected number of entity pairs surges quadratically , intensively increasing the workload to check relationships between every pair.", "To address this problem, Yao et al. 
(2019) applies a recommend-revise process: in the recommendation phase, a small set of candidate relation instances is generated through distant supervision; then, annotators are required to revise the candidate set, removing the incorrect relation instances and supplementing the instances not identified in the recommendation phase.", "Shifting the construction process from annotation from scratch to an edit-based task, the recommend-revise scheme seems to cut down the annotation effort by a large margin.", "However, whether the quality of the annotation maintains a reliable standard in practice remains in doubt.", "To what extent is the accuracy of annotation sacrificed due to the automated recommendation?", "And how does the provided recommendation affect the behaviours of the annotators in the revision phase?", "Moreover, what are the real effects on the models trained on a dataset annotated with this scheme?", "To answer these questions, we aim to provide a thorough comparison between careful annotations from scratch and the annotations under the recommend-revise scheme.", "We randomly select 96 documents from DocRED and ask two experts to relabel them from scratch independently.", "After annotating, the two experts come to a consensus on the gold labels via discussion.", "This revised dataset is publicly available at https://github.com/AndrewZhe/Revisit-DocRED , and we hope it can be used to evaluate model performance on the real data distribution (while we cannot guarantee that the relabeled data is totally error-free, we believe its quality is high enough to approximate the real distribution, because each entity pair is examined by two annotators).", "With the help of these annotations, we discovered three sobering issues regarding the effects of the recommend-revise scheme: (1) A noticeable portion of relation instances is left out, and the distributional bias in the recommendation output is inherited, even after the revision process.", "It is not surprising that recommendations alone fail to recognize all the relation instances, since RE models are far from perfect.", "Ideally, these unidentified instances should be added by human annotators during the revision phase.", "However, it turns out that 95.7% of these missing instances are still left out even after revision.", "Furthermore, while the recommendations from distant supervision favor instances associated with popular entities and relations in the source knowledge base (Wikidata), this bias is maintained and inherited even after human revision, leaving less popular relations and entities neglected.", "(2) Worryingly, we find that the models trained on DocRED have low recall on our relabeled dataset and that they also inherit the same bias towards popular relations and entities.", "We train recent models on DocRED and test them on the dataset relabeled by us.", "We notice that all models have much lower recall on our dataset than previously reported on DocRED, due to the numerous false negatives in the training data, and those models are also biased towards popular entities and relations.", "Further investigation, comparing different strategies of negative sampling, reveals that the models' bias comes from the training set.", "Since one straightforward real-world application of relation extraction is to acquire novel knowledge from text, an RE model is much less useful if it has low recall or performs poorly on less popular entities and relations.", "(3) The recommend-revise scheme itself impacts the behaviors of annotators, making them unlikely to supplement the instances left out.", "This is the underlying reason for the two concerns above.", "We argue that the revision process fails to reach its goal, since it puts the annotators in a dilemma: while 
they are supposed to add new instances left out by the recommendations, finding these missing instances may force the annotators to thoroughly check the entities pair by pair, which is time-consuming and against the goal of this scheme.", "As a result, annotators can hardly make effective supplementations and tend to fall back on the easier goal of validating existing relation instances.", "The major challenge in annotating document-level RE datasets comes from the quadratic number of potential entity pairs with regard to the total number of entities in a document.", "As reported by Yao et al. (2019), a document in DocRED contains 19.5 entities on average, thus yielding around 360 entity pairs with potential relationships.", "Therefore, for the 5,053 documents to be annotated, around 1,823,000 entity pairs are to be checked.", "Such a workload is around 14 times that of TACRED (Zhang et al., 2017), the biggest human-labeled sentence-level RE dataset.", "Therefore, exhaustively labeling relations between each entity pair involves an intensive workload and does not seem feasible for document-level RE datasets.", "To alleviate the huge burden of manual labeling, Yao et al. (2019) divides the annotation task into two steps: recommendation and revision.", "First, in the recommendation phase, Yao et al. (2019) takes advantage of Wikidata (Vrandečić and Krötzsch, 2014) and an off-the-shelf RE model to collect all the possible relations between any two entities in the same document.", "This process is automated and does not require human involvement.", "Then, during the revision phase, the relations that exist in Wikidata or are inferred by the RE model for a specific entity pair are shown to the annotators.", "Rather than annotating each entity pair from scratch, the annotators are required to review the recommendations, remove the incorrect triples and supplement the missing ones.",
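The recommendation phase just described can be sketched as follows: candidate triples are proposed for every entity pair whose linked Wikidata items hold a relation in the knowledge base (the data structures, IDs, and function names are illustrative assumptions, not DocRED's actual tooling):

```python
from itertools import permutations

def recommend(entities, kb_triples):
    """entities: list of (mention, kb_id) pairs found in one document;
    kb_triples: set of (head_id, relation, tail_id) facts from the KB."""
    kb = {}
    for head, rel, tail in kb_triples:
        kb.setdefault((head, tail), []).append(rel)
    candidates = []
    for (h_mention, h_id), (t_mention, t_id) in permutations(entities, 2):
        for rel in kb.get((h_id, t_id), []):
            candidates.append((h_mention, rel, t_mention))
    return candidates  # shown to annotators, who remove and (ideally) supplement

ents = [("The Sopranos", "Q_SERIES"), ("Michael Imperioli", "Q_ACTOR")]
kb = {("Q_SERIES", "cast member", "Q_ACTOR")}
print(recommend(ents, kb))  # [('The Sopranos', 'cast member', 'Michael Imperioli')]
```

Note that any fact absent from the KB is never proposed, which is exactly where the false negatives discussed below come from.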
"DocRED. The Document-Level Relation Extraction Dataset (DocRED), introduced by Yao et al. (2019), is one of the largest and most widely used datasets for document-level relation extraction.", "DocRED consists of 5,053 English Wikipedia documents, each containing 19.5 entities on average.", "Every entity pair within a document may have one of 96 relation types or no relation, i.e., the additional no_relation label for negative instances.", "In order to explore the supplementation in the revision phase and its influence on the released dataset, we acquired the original recommendations generated by distant supervision from the authors of DocRED.", "As we focus on the effect of missing instances, we do not consider the samples removed during the revision phase.", "The annotations in the recommendations that are not removed later are denoted as D_Recommend, and the annotations after human revision are denoted as D_Revise.", "DocRED from scratch. To analyze the effect of the recommend-revise scheme, we re-annotate a subset of the documents used in DocRED from scratch and compare it with D_Recommend and D_Revise.", "We randomly select 96 documents from the validation set of DocRED, and each document is assigned to two experts to be annotated independently.", "They are explicitly required to check every entity pair in the documents and decide the relationships entirely based on the original text, with no recommendations.", "This turns out to be an extraordinarily difficult task, in which each document takes half an hour on average to annotate.", "The inter-annotator Cohen's kappa between our two experts is 0.68, indicating high annotation quality.", "After that, the two experts discuss the inconsistent instances together and reach an agreement on the final labels.", "As this paper focuses on the bias caused by false negatives in the recommend-revise scheme, we assume the labeled instances in DocRED are all correct.", "Instances labeled in DocRED but not by our experts are therefore added to our annotation.", "We denote this new annotation set as D_Scratch.", "Table 1 shows the statistics and comparison of D_Scratch, D_Recommend and D_Revise on the 96 randomly selected documents in DocRED.", "Comparing D_Recommend with D_Scratch, it is noticeable that a huge number of ground-truth labels is left out.", "While D_Recommend captures 1167 relation instances in the documents, a more careful, entity-by-entity examination, as done for D_Scratch, reveals that there are as many as 3308 relation instances within the same documents.", "This shocking fact reveals that almost two-thirds of the relation instances are missing and wrongly labeled as negative.", "Another unexpected fact is that annotators hardly added anything during the revision phase.", "The final version reports 1214 relation instances, a mere increase of 47 (1.4%) cases in total, or 0.49 instances on average per document.", "This suggests that, while the revision process was expected to set things right, it does not work to a meaningful extent: the majority of the unlabeled instances, which make up nearly two-thirds of the total, simply remain as they were.", "Given the analysis above, another even more serious issue arises: since the changes introduced by the revision are so limited, the output after revision may still contain the same bias as the recommendation.", "That is, if the recommendations contain a systematic flaw, the new dataset will likely inherit it.", "In this section, we verify that such biases largely exist in the 
recommendation phase and are thus inherited by the DocRED dataset.", "The recommendations of DocRED are collected from two sources: Wikidata and a relation extraction model.", "However, if we consider the facts retained after revision by the annotators, where wrongly labeled ones are removed, the majority of them are taken directly from Wikidata (see Appendix A for details).", "We suggest that, since Wikidata is a collaborative knowledge base, relation instances related to common entities and properties are more likely to be collected and added to it.", "In such cases, the recommendations from Wikidata will naturally favor popular entities and relations, while the less common ones are left out.", "We validate this hypothesis in the following sections, where we investigate the bias of DocRED from the perspective of both relations and entities.", "To determine whether the dataset has a preference for popular relations, we divide the 96 relations in DocRED into two categories using Wikidata statistics and then compute their distribution.", "Specifically, we acquire the List of top 100 properties by quantity of item pages that link to them from Wikidata's official website ( https://www.wikidata.org/wiki/wikidata:Database_reports/List_of_properties/Top100 ) and consider a relation popular if it appears on this list.", "Among the 96 relations in DocRED, 25 are in the top 100, including country , publication date , and so on.", "[Table 1: Statistics of datasets. Columns: # Instance, # Pop Rel, # Unpop Rel, popularity_max, popularity_min. D_Recommend: 1167, 659 (56.5%), 508 (43.5%), 294.4, 85.2. D_Revise: 1214, 676 (55.7%), 538 (44.3%), 291.5, 84.4. D_Scratch: 3308, 1615 (48.8%), 1693 (51.2%), 266.3, 67.4. D_Revise − D_Recommend: 47, 17 (36.2%), 30 (63.8%), 221.3, 66.0. D_Scratch − D_Recommend: 2141, 956 (44.7%), 1185 (55.3%), 251.0, 57.7. D_Scratch − D_Revise: 2094, 939 (44.8%), 1155 (55.2%), 251.7, 57.5.]", "The center two columns of Table 1 illustrate the distribution of these two categories of relations across the datasets.", "First, we can see that in the real distribution, i.e., in D_Scratch, the percentages of these two types of relations are 48.8% and 51.2%, respectively, which is close to 1:1 with slightly fewer popular relations.", "However, the proportion of instances belonging to popular relations reaches 56.5% in the recommendations, D_Recommend, which is significantly higher than the 43.5% for unpopular ones.", "Further study of the instances that were mistakenly excluded during the recommendation phase, D_Scratch − D_Recommend, reveals that cases involving unpopular relations are more likely to be missing.", "This demonstrates that the recommendation phase in DocRED does have a systematic bias related to the popularity of relations.", "The instances supplemented during the revision phase, D_Revise − D_Recommend, help to mitigate this bias marginally, as annotators supplement relatively more instances belonging to unpopular relations.", "However, in comparison to D_Scratch, which represents the real relation distribution, D_Revise still prefers popular relations.", "This is because the annotators place an excessive amount of trust in the recommendations and do not add sufficient missing instances during the revision phase.", "According to the statistics in Table 1, the recommendation phase's bias toward popular relations is thus ultimately inherited by the dataset that passed manual inspection.", "We hypothesize that instances involving very popular entities are more likely to appear in Wikidata recommendations, 
whereas instances related to extremely rare entities are more likely to be disregarded.", "To determine whether such a bias exists, we analyze the popularity of the entities involved in relation instances across the datasets.", "Each named entity in DocRED is linked with a Wikidata item based on literal matching of names or aliases.", "The popularity of an entity is represented by how many times the matched item appears in a relation instance in Wikidata (either as head or tail); if an entity matches more than one Wikidata item, the highest count among the matched items is taken as its popularity.", "For those entities that cannot be linked to Wikidata, we assign a popularity of -1.", "For each relation instance, we compute two types of popularity.", "Since an instance contains a pair of entities (head and tail), usually with different popularities, we define popularity_max to be the higher popularity of the pair of entities, and popularity_min to be the lower one.", "We report the average popularity of relation instances in each dataset in Table 1.", "Comparing D_Recommend and D_Scratch, we find that the former's popularity_max is 294.4, far higher than the latter's 266.3.", "This means that instances containing popular entities are more likely to be retained during the recommendation phase.", "Regarding the instances that were incorrectly excluded during the recommendation phase, D_Scratch − D_Recommend, their popularity_min is 57.7, which is lower than the 67.4 of D_Scratch.", "This demonstrates that instances involving uncommon entities are more likely to be ignored during the recommendation phase.", "This entity-related bias is apparent in the revised dataset as well.", "The popularity_max of D_Revise remains larger than that of D_Scratch, while the popularity_min of D_Scratch − D_Revise is also lower than that of D_Scratch.", "This is mostly because the facts supplemented in the revision phase are too few to eliminate such bias.", "To investigate whether RE models trained on such data likewise learn the same bias, we train and select RE models on the recommend-revise-labeled datasets, D_TrainRevise and D_ValidRevise, and then assess the models' performance on the real data distribution, D_Scratch.", "The construction process of D_TrainRevise and D_ValidRevise is the same as that of D_Revise: the former is the original training set, and the latter is the validation set of DocRED excluding the 96 documents in D_Revise.", "In these settings, we examine the performance of recent models: BiLSTM (Yao et al., 2019), GAIN-BERT-base (Zeng et al., 2020), SSAN-RoBERTa-large (Xu et al., 2021), ATLOP-RoBERTa-large (Zhou et al., 2021) and DocuNet-RoBERTa-large (Zhang et al., 2021).", "The last three models are currently the most competitive ones on DocRED, while the others are included to make sure that our analysis generalizes to models of smaller sizes.", "Table 2 summarizes the evaluation results of the five models on D_Revise and D_Scratch.", "All results are reported as micro-averaged F1 scores, as in prior literature (Zeng et al., 2020; Zhou et al., 2021).", "Notably, we observe a significant decline in F1 for all five models on D_Scratch, which is mainly due to a dramatic drop in recall.", "The drop is the result of the bias in the training data, i.e., a model trained on biased data lacks the generalization ability to extract relation instances that are systematically missed in the dataset.", "We will validate this point in the following section.",
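The popularity statistics defined above can be sketched as follows (the per-item counts are assumed to be precomputed from Wikidata, and all names are ours):

```python
def popularity(entity, counts):
    """counts maps a Wikidata item ID to the number of relation instances
    it participates in; unlinked entities get a popularity of -1."""
    ids = entity.get("wikidata_ids", [])
    return max((counts.get(i, 0) for i in ids), default=-1)

def instance_popularities(head, tail, counts):
    """Return (popularity_max, popularity_min) for one relation instance."""
    p_head, p_tail = popularity(head, counts), popularity(tail, counts)
    return max(p_head, p_tail), min(p_head, p_tail)

counts = {"Q1": 120, "Q2": 3}
print(instance_popularities({"wikidata_ids": ["Q1"]},
                            {"wikidata_ids": ["Q2"]}, counts))  # (120, 3)
```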
"To better understand the different performances on the two datasets, we analyze the model capability over different relations and entities.", "Not surprisingly, we find that models trained on D_TrainRevise prefer popular entities and relations as well.", "[Figure 1: The recall of the models (BiLSTM, GAIN, SSAN, DocuNet, ATLOP) on instances associated with popular and unpopular relations.]", "Additional experiments suggest that this may be because missing instances are treated as negative samples during training.", "Given that a substantial proportion of unlabelled instances are associated with unpopular entities and relations, the model is pushed to disregard those unpopular ones by the incorrect penalty on the missing instances.", "Relation Bias. Figure 1 shows the recall of the models on the instances associated with popular and unpopular relations, respectively.", "As depicted, if an instance's relation is popular, it is almost twice as likely to be successfully extracted as an instance whose relation is not popular.", "This gap does not narrow as the models' overall performance improves.", "The difference between the probabilities of successfully extracting popular and unpopular relations is 0.129 for the best model, ATLOP, which is even greater than the 0.125 for BiLSTM.", "This indicates that all models trained on the original DocRED favor popular relations and ignore the unpopular ones.", "Entity Bias. Figure 2 shows the models' recall curves as the popularity_max of instances in D_Scratch increases.", "We divide all instances in D_Scratch into five groups based on the popularity_max of each instance, and we calculate the recall for each group independently.", "As seen in Figure 2, all the curves exhibit a clear rising trend, indicating that the probability of discovering an instance is positively correlated with its popularity_max.", "Additionally, we can see that the middle of ATLOP's and DocuNet's curves is nearly horizontal, which means that they are more sensitive to extremely popular or particularly rare entities.", "Previous works (Zeng et al., 2020; Zhou et al., 2021; Zhang et al., 2021) assign the label no_relation to any instance that is not annotated with a relation, which means the missing instances are treated as negative samples during training and a model is punished for predicting them as positive.", "We thus hypothesize that the models' bias originates from the incorrect penalty for missing instances in the training process.", "To demonstrate this, we generate the negative samples in a different way, using only the instances manually eliminated during the revision step.", "We denote this construction of negative samples as N_Hum, and the method that treats all samples other than the positive instances as negative is called N_All.", "Because the samples generated by N_Hum have been manually verified, there is no issue with false no_relation instances.", "We train the same models using D_TrainRevise with negative samples constructed by N_Hum and by N_All, and compare the models' preference for popular entities and relations.", "Figure 3 depicts the fraction of instances involving popular relations among the instances correctly predicted by GAIN trained with D_TrainRevise + N_Hum and with D_TrainRevise + N_All.", "Additionally, we mark the true distribution of the data in D_Scratch.",
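The two negative-sampling strategies being compared reduce to the following sketch (only the logic mirrors the text; the data structures are assumptions):

```python
def negatives_n_all(all_entity_pairs, labeled_positive):
    """N_All: every unlabeled entity pair is treated as no_relation,
    so instances missed by recommend-revise become false negatives."""
    return [pair for pair in all_entity_pairs if pair not in labeled_positive]

def negatives_n_hum(recommended, kept_after_revision):
    """N_Hum: only recommendations that annotators explicitly removed
    during revision are used as negatives; these are manually verified."""
    return [pair for pair in recommended if pair not in kept_after_revision]
```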
"As can be seen, when trained with D_TrainRevise + N_Hum, GAIN finds more instances associated with unpopular relations, and the gap between the proportion of unpopular-relation instances in the model's predictions and in D_Scratch is narrowed.", "[Figure 3: The proportion of instances associated with popular and unpopular relations among GAIN's correct predictions, under N_All and N_Hum, compared with D_Scratch.]", "Based on the entity popularity of each instance, we partition all instances in D_Scratch into five categories and calculate the recall for each group independently.", "Figure 4 shows the improvement in GAIN's recall relative to the group containing the instances with the most unpopular entities (0-20%).", "In comparison to N_All, using N_Hum to construct the negative training samples dramatically lessens the rising trend of the model's recall as entity popularity grows.", "Finally, we move on to discuss another, more implicit influence of the recommend-revise scheme: its effect on the annotators.", "As discussed in Section 4.1, while we expected the revision process to help supplement the instances left out, it turns out that remarkably few are actually added.", "Given that the annotators are trained to accomplish the revision task, we wonder why they still fail in such a uniform manner.", "We would like to argue that it is the nature of the revision process that puts the annotators in a dilemma, where they have to choose between a huge effort and insufficient supplementation.", "Recall that there is a distinct difference between the settings of examining a labeled relation and supplementing an unidentified one.", "For the former, annotators are required to find evidence for a recommended relation instance and remove it if there is conflicting or no evidence.", "This process only requires checking a single entity pair and collecting the information related to the two specific entities.", "However, this is not the case for supplementing a possible, unidentified relation instance, which can exist between any entity pair.", "[Figure 4: The gain in recall for instances with different popularity_min and popularity_max, compared with the [0-20) group, under N_All and N_Hum.]", "There is no clear range to search or information indicating where to look; all the annotators can do is check pair by pair, just as they would from scratch.", "This puts annotators in an awkward dilemma, especially when they understand the motivation of this scheme: if they are to be held fully responsible for the missing instances, they always have to complete the thorough pairwise checking one pair at a time; however, this would make the whole recommend-revise scheme meaningless, as it amounts to annotating from scratch.", "The harsh requirements of supplementation push annotators to rely too heavily on the recommendations and simply examine them.", "This is especially worrying in real practice, where annotators are recruited to complete a certain number of annotations and are typically paid according to the estimated number of hours or the total number of instances they devote to the annotation (Draws et al., 2021).", "Under this dilemma, it is only natural that they are unmotivated to carry out the exhaustive checking needed for supplementation if they are to receive reasonable pay in the given time.", "In fact, we observe an 
interesting phenomenon: annotators largely tend to pick only the most obvious missing instances, convince themselves that they have accomplished the supplementation, and simply move on to the next document.", "This can be seen in Figure 5, where we compare the distributional characteristics of the successfully supplemented instances (D_Revise − D_Recommend) and all the missing instances in general (D_Scratch − D_Revise).", "Sub-figure (a) shows the cumulative statistics of the position of the head entity's first appearance in the document.", "We can see that the instances added by annotators in DocRED exhibit an extremely strong tendency to occur earlier in the text: more than 70% of the added instances are in the first three sentences.", "In contrast, the missing relation instances as a whole are distributed almost uniformly across every part of the document.", "This reveals the interesting fact that humans typically tend to pick up the relations whose entities are mentioned earlier in the document.", "Sub-figure (b) further compares the minimum distance between the mentions of the head and tail entities of a relation instance.", "We once again see that annotators have a strong tendency to add the most easily identifiable instances, where the head and tail entities are quite close.", "Specifically, the proportion of entity pairs mentioned within a single sentence (interval = 0) is around 20% for all missing facts, but as high as 45% for the ones chosen by annotators to be supplemented.", "This shows how annotators naturally avoid the reading burden brought by longer intervals, which likely involve more complicated inference across multiple sentences.", "From these observations, we see that there exist clear patterns among the very few instances added by human annotators.", "This reveals a serious issue: annotators deliberately give the appearance of supplementing while expending the least possible effort.", "Given the consistent behavior of annotators and the very limited number of additions, it is most likely the nature of the annotation task that pushes annotators into this awkward dilemma between adding and abandoning.", "Thus, we propose a call to the NLP community: researchers should always be aware that annotation schemes, like the recommend-revise scheme, can have a direct impact on the annotation workers, affecting their willingness and behaviors, and thus have a deeper influence on the collected data.", "We can summarize all the problems discussed above with a concrete case in DocRED, shown in Figure 6.",
"The figure depicts the annotations associated with the entity Michael Imperioli, as well as the relation that is added in revision.", "Let us first focus on the red edges, which indicate the relation triples that are neither recommended nor supplemented by humans.", "Regrettably, half of the total 18 relation triples remain missing, and just one triple is added during revision (the green edge).", "Compared with the black edges, which indicate correctly annotated instances, the red edges are more likely to be associated with less popular entities.", "For example, \"Sopranos\" [4] and \"Law & Order\" [7], two popular series with at least 100K+ comments on IMDB and about 200 edges in Wikidata, are connected with \"Michael Imperioli\" [0] through the relation \"cast member\" in the annotation, but \"Detroit 1-8-7\" [17] and \"Mad Dogs\" [20], which should hold the same relation to \"Michael Imperioli\" [0], are missed.", "In the text, all these series appear in similar circumstances, and the only difference is that the latter ones are not recommended to the annotators, essentially because they are less popular (fewer than 10K comments on IMDB and fewer than 50 edges in Wikidata).", "We can also see the effect of relation popularity in the connection between [7] and [9].", "\"present in work\" and \"characters\" should occur symmetrically according to their definitions, but the latter is missed in the recommendation.", "Correspondingly, in Wikidata, the latter relation has 19057 links, fewer than the former's 82250 links.", "The last point to notice is the only green edge, between Louis Fitch [15] and ABC [16], which is not recommended but is supplemented by annotators.", "Among all the instances missed in the recommendation, the annotators supplement only this one, which is easy to identify in the text because both the head and tail entities are mentioned in the same sentence.", "This is consistent with our analysis of annotators' behavior above.", "With the advance of deep learning models, annotation has sometimes become the bottleneck of a machine learning system.", "Recently, analysis of annotation quality has received increasing attention.", "Northcutt et al. (2021) collect and analyze the label errors in the test sets of several popular benchmarks, showing that label errors are ubiquitous and destabilize machine learning benchmarks.", "More specific to the RE task, Alt et al. (2020) address the annotation problems in TACRED (Zhang et al., 2017), a popular sentence-level RE dataset.", "They find that label errors account for 8% absolute F1 test error and that more than 50% of the examples need to be relabeled.", "Stoica et al. 
(2021) expand this research to the whole dataset, resulting in a completely re-annotated version, Re-TACRED, and conduct a thorough analysis of the models' performance.", "Our work differs from theirs in that we delve into the nature of the document-level RE task and, especially, explore how errors are systematically introduced into the dataset through the recommend-revise scheme.", "Methodologies to handle incomplete annotations in information extraction tasks have been widely discussed in previous works.", "Different from classification tasks, information extraction requires annotators to actively retrieve positive samples from texts, instead of just assigning a label to a given text.", "The problem is also attributed to the use of distant supervision (Reiplinger et al., 2014), where the linked KG is not perfect.", "Some works apply general approaches like positive-unlabeled learning (Xie et al., 2021; Peng et al., 2019) or inference learning (Roller et al., 2015).", "Task-specific models have also been designed, like the Partial CRF (Tsuboi et al., 2008) for NER (Yang et al., 2018) and a novel paradigm for joint RE (Xie et al., 2021).", "However, none of them examine the distributional bias in the training data, and those methods are not validated in the context of the document-level RE task.", "Existing document-level RE models can be roughly divided into graph-based and transformer-based models.", "Graph-based models like Zeng et al. (2020) and Zhang et al. (2021) are designed to conduct relational reasoning over the document, while transformer-based models (Zhou et al., 2021; Xu et al., 2021) are good at recognizing long-distance dependencies.", "However, all previous models treat unlabeled samples in the dataset as negative samples, and do not address the problems in the annotations.", "We believe our analysis and re-annotated dataset will help future work focus more on the discrepancy between the annotations and the real-world distribution, instead of just overfitting to the dataset.", "In this paper, we show how the recommend-revise scheme for DocRED can cause bias and false negative issues in the annotated data.", "These flaws of the dataset hurt the models' recall on real data and also teach the models the same bias present in the training data.", "As this scheme cannot substantially reduce human labor without a loss of annotation quality, more efficient annotation strategies remain to be explored.", "On the other hand, considering that building a reliable training set for document-level RE is extremely expensive, it is also a meaningful topic how to alleviate the dataset shift problem (Moreno-Torres et al., 2012) by injecting appropriate inductive biases into the model's structure, instead of inheriting the bias of the training data.", "We believe the in-depth analysis provided in this paper can benefit future designs of document-level RE models, and our Scratch dataset can serve as a fairer test set.", "This work is supported in part by the National Key R&D Program of China (No. 
2020AAA0106600) and NSFC (62161160339).", "We would like to thank the anonymous reviewers and action editors for their helpful comments and suggestions, and to thank Weiye Chen for providing feedback on an early draft.", "For any correspondence, please contact Yansong Feng.", "This work focuses on quality checking and re-annotation of DocRED, a publicly available dataset constructed from Wikipedia pages.", "All source documents and types of relationships are provided and utilized in the original DocRED dataset, and no additional annotation rule that might involve unexamined ethical concerns was introduced.", "Annotators receive competitive pay of 100 yuan per hour (more than four times the local minimum wage) with the approval of the institute, and both the annotation and discussion stages count toward paid working time.", "Annotators are required to read the ACM Code of Ethics before annotating and to report any document that violates the code.", "Such documents are removed from the sampled documents.", "However, there may still be sentences or entities from Wikipedia pages with potentially improper content.", "The possible adoption of such content is not a decision of the authors, and no content in the dataset reflects the views or stances of the authors.", "The resulting re-annotations, based on the agreement of two expert annotators, form a decent approximation of the gold labels, but may still not be the ground truth due to natural error rates.", "Further uses of the dataset should be aware of these limitations and other possible issues; we are not responsible for issues in model training processes that use our data." ]
[ "abstain", "abstain", "abstain", "result", "result", "abstain", "objective", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "result", "abstain", "abstain", "abstain", "method", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method" ]
[ "Accurate assessment of the ability of embedding models to capture idiomaticity may require evaluation at token rather than type level, to account for degrees of idiomaticity and possible ambiguity between literal and idiomatic usages.", "However, most existing resources with annotation of idiomaticity include ratings only at type level.", "This paper presents the Noun Compound Type and Token Idiomaticity (NCTTI) dataset, with human annotations for 280 noun compounds in English and 180 in Portuguese at both type and token level.", "We compiled 8,725 and 5,091 token level annotations for English and Portuguese, respectively, which are strongly correlated with the corresponding scores obtained at type level.", "The NCTTI dataset is used to explore how vector space models reflect the variability of idiomaticity across sentences.", "Several experiments using state-of-the-art contextualised models suggest that their representations are not capturing the noun compounds idiomaticity as human annotators.", "This new multilingual resource also contains suggestions for paraphrases of the noun compounds both at type and token levels, with uses for lexical substitution or disambiguation in context.", "Multiword Expressions (MWEs) such as noun compounds (NCs), have been considered a challenge for NLP (Sag et al., 2002).", "This is partly due to the wide range of idiomaticity that they display, from more literal to idiomatic combinations ( olive oil vs. shrinking violet ).", "The task of identifying the degree of idiomaticity of MWEs has been investigated at type level, to determine the potential of an MWE to be idiomatic in general.", "Some of these approaches are based on the assumption that the * Equal contribution.", "distance between the representation of an MWE as a unit and the representation of the compositional combination of its components is an indication of the degree of idiomaticity: they are closer if the MWE is more compositional.", "Good performances are obtained even with non-contextualised word embeddings like word2vec (Mikolov et al., 2013), and vector operations like addition and multiplication (Mitchell and Lapata, 2010; Reddy et al., 2011; Cordeiro et al., 2019).", "Additionally, for some MWEs, there is a potential ambiguity between an idiomatic and a literal sense, like in the potentially idiomatic MWE brass ring which can be ambiguous between the more literal meaning a ring made of brass and the more idiomatic sense of a prize .", "Considering that these MWEs can have both idiomatic and literal senses, a related task of token-level identification evaluates whether in a particular context an MWE is idiomatic or not.", "For this task, models that incorporate the context in which an MWE occurs tend to be better equipped to distinguish idiomatic from literal occurrences (Sporleder and Li, 2009; King and Cook, 2018; Salton et al., 2016).", "Contextualised embedding models, like BERT (Devlin et al., 2019), brought significant advances to a variety of downstream tasks (e.g. Zhu et al. (2020) for machine translation and Jiang and de Marneffe (2019) for natural language inference).", "They also seem to benefit tasks like idiomaticity and metaphor identification (Gao et al., 2018), since their interpretation is often dependent on contextual clues.", "Nonetheless, previous work found that non-contextualised models seem to still bring informative clues for these tasks (King and Cook, 2018), and their combination with contextualised models could improve results (e.g. 
for metaphor identification (Mao et al., 2019)).", "This complementarity between non-contextualised and contextualised models may indicate that enough core idiomatic information is already available at type level.", "Moreover, type-based compositionality prediction measures that perform well with static embeddings may also perform well for token-based prediction with contextualised models.", "To address these questions, in this paper we present the Noun Compound Type and Token Idiomaticity (NCTTI) dataset, containing 280 NCs in English and 180 in Portuguese, annotated with the degree of idiomaticity perceived by human annotators at type and token level (the type-level annotations come from Cordeiro et al. (2019), the dataset used as the source for NCTTI).", "NCTTI contains a total of 8,725 annotations in 840 different sentences in English, and 5,091 annotations in 540 sentences in Portuguese.", "Moreover, NCTTI has several paraphrases for each NC, which are classified as either type-level or token-level equivalents.", "To control for the level of idiomaticity, the NCTTI dataset has a balanced number of compositional, partly compositional and idiomatic items.", "As the importance of context for determining interpretation may be related to factors like the degree of idiomaticity, association strength or the frequency of an NC, we present an illustrative analysis of their impact on the performance of different models in capturing idiomaticity.", "We also examine how the performance obtained for human idiomaticity judgments per type differs from the performance obtained per token.", "Our contributions can be summarised as: (1) building the NCTTI dataset with information about type and token idiomaticity for NCs in two languages, (2) evaluating to what extent models are able to detect idiomaticity at type and token level, analysing different levels of contextualisation, and (3) proposing two new measures of idiomaticity.", "Moreover, the paraphrases provided for each NC at type and token level make NCTTI a useful resource for enhancing paraphrase datasets (e.g. PPDB (Ganitkevitch et al., 2013)), for tasks involving lexical substitution (McCarthy and Navigli, 2007; Mihalcea et al., 2010), or for improving the results of downstream tasks, such as text simplification (Paetzold, 2016; Alva-Manchego et al., 2020).", "Such paraphrases may also be useful for improving machine translation, avoiding the need for parallel MWE corpora (Zaninello and Birch, 2020).", "Section 2 gives an overview of existing idiomaticity datasets.", "Section 3 presents the NCTTI dataset and the annotations, and Section 4 discusses 
the evaluation of the performance of different word embeddings in detecting idiomaticity.", "Datasets with type-level annotations are available for NCs in English (Farahmand et al., 2015; Reddy et al., 2011; Ramisch et al., 2016; Kruszewski and Baroni, 2014), German (Roller et al., 2013; Schulte im Walde et al., 2016), French (Cordeiro et al., 2019) and Portuguese (Cordeiro et al., 2019).", "However, datasets with idiomaticity information at token level are scarce; one example is VNC-Tokens (Cook et al., 2008), containing almost 3k annotations for 53 verb-noun combinations in English.", "Regarding the use of contextualised embeddings to model idiomaticity, Nandakumar et al. (2019) compared different static and contextualised embeddings to predict NC compositionality, obtaining better results with static vectors learnt individually for each NC.", "Shwartz and Dagan (2019) train various classifiers initialised with static and contextualised embeddings for different compositionality tasks, achieving the best results with BERT embeddings.", "Yu and Ettinger (2020), using partially idiomatic expressions of the BiRD dataset (Asaadi et al., 2019), show that contextualised embeddings from language models rely heavily on word content, missing additional information provided by compositional operations.", "In this paper we take advantage of the NCTTI dataset to observe whether vector representations obtained with different strategies correlate with human annotations at both type and token levels.", "This section describes the procedure used to create the NCTTI dataset and its main characteristics (the dataset can be downloaded from https://github.com/marcospln/nctti ).", "Source data. We used as a basis the English and Portuguese subsets of the NC Compositionality dataset (Cordeiro et al., 2019), which contain compositionality scores for 280 two-word NCs in English (90 of which came from Reddy et al. (2011)) and 180 in Portuguese, all of them labeled at type level: i.e., the annotators provided a compositionality value for a compound (from 0, fully idiomatic, to 5, fully compositional) after reading various sentences with this NC.", "To obtain more fine-grained, compatible token-level annotations about the impact of different contexts on the interpretation of NCs, we used the same original sentences as in the source dataset (three sentences per compound with the same sense were selected from the Reddy et al. (2011) dataset).", "Language experts classified each noun compound regarding its semantic compositionality as idiomatic (e.g., gravy train ), partially idiomatic (e.g., grandfather clock ), or compositional (e.g., research project ).", "For English, this resulted in 103, 88, and 89 idiomatic, partially idiomatic, and compositional compounds, respectively.", "For Portuguese, each class has 60 compounds, as the selection had been balanced when the source dataset was created.", "We used the same protocol as Reddy et al. (2011) and Cordeiro et al. (2019), asking each participant to give 0 to 5 scores for an NC and its components in a specific sentence (e.g., glass ceiling in Women are continuing to slowly break through the glass ceiling of UK business [...]).",
]).", "In particular, we asked participants for:", "(i) the contribution of the head to the meaning of the NC (e.g., is a glass ceiling literally a ceiling ?);", "(ii) the contribution of the modifier to the meaning of the NC (e.g., is a glass ceiling literally of glass ?); and", "(iii) the degree of compositionality of the compound (i.e., to what extent the meaning of the NC can be seen as a combination of its parts).", "Additionally, we asked for up to three synonyms of the NC in that particular sentence (e.g., synonyms at token level).", "We used Amazon Mechanical Turk to obtain the annotations for English, and a dedicated online platform for the questionnaire in Portuguese, 4 as we could not find a suitable number of annotators for this language in AMT.", "5 Taking this into account, the numbers of the Portuguese annotations are in general lower to those obtained for English.", "For each language, we have included the three sentences of every compound in the dataset (840 sentences in English, and 540 in Portuguese), which were randomly submitted to the annotators.", "For English, we compiled at least 10 annotations per sentence, resulting in 8,725 annotations (10.4 annotations per sentence on average).", "A total of 412 annotators have taken part in the process, and on average, each participant labeled 21 instances.", "For Portuguese we set the threshold in 5 annotations per sentence: we got 5,091 annotations by 33 participants, so that each sentence has a mean of 9.4 annotations and each annotator labeled on average 154 sentences.", "Inter-annotator agreement: we computed the inter-annotator agreements for two and three annotators with the largest number of sentences in common (Table 1).", "For English, we obtained Krippendorff's (Krippendorff, 2011) values of 0.30 for two annotators (199 sentences) and 0.22 for three annotators (76 sentences).", "The values for Portuguese were of 0.52 for two annotators (131 sentences) and 0.44 for three annotators (60 sen-tences).", "Overall, and using the divisions proposed by Landis and Koch (1977), the agreement results can be classified as fair' (for English), and mod-erate' (for Portuguese).", "we calculated the correlations (Spearman ) between the average compositionality scores of the NCTTI", "dataset and those of the original resource (NC Compositionality dataset).", "Table 2 contains the correlation results for each language and compositionality class.", "The strong to very strong significant correlations confirm the robustness between type-level and token-level human compositionality annotations for these two datasets.", "6 Idiomaticity values: with regards to the idiomaticity values of each class, Table 3 displays both the average scores and the standard deviation in both languages.", "As expected, for the whole compounds, partially idiomatic NCs are those with higher standard deviations, and their mean compositionality values are in the middle of the scale (2.34 and 2.46).", "In English, the results of both idiomatic and compositional compounds are more homogeneous, as they are clearly located on the margins of the scale ( < 1 and > 4 , respectively) with lower deviations.", "This is not the case in Portuguese, where the average values are > 1 and < 4 for idiomatic and compositional NCs, respectively, placing even the idiomatic cases closer towards the middle of the scale.", "With respect to the average values for the heads and modifiers, we can highlight the following observations: first, both head and modifier scores are consistently 
"With respect to the average values for the heads and modifiers, we can highlight the following observations: first, both head and modifier scores are consistently higher than the means for the whole compound in every scenario, also suggesting at least partial compositionality in their token occurrences.", "Second, for idiomatic NCs, the scores of the modifiers are higher than those of the heads, while for partially compositional NCs the results are the opposite.", "Finally, regarding the compositional class, the modifier values are higher in English, while in Portuguese the heads seem to contribute more to the meaning of the NC.", "Observing the variability across the annotations, we found some divergence for a few compounds (e.g., brass ring labeled as idiomatic in a compositional occurrence: Three drawers, each with a brass ring pull, provide plenty of storage whatever you use it for. ), which hints at possible interference from a salient meaning (Giora, 1999).", "However, further investigation is needed.", "Paraphrases. As mentioned, we asked the participants to provide synonyms or paraphrases for the noun compounds in each particular context.", "In this respect, it is worth noting that while some suggestions may be applicable across all the sentences of an NC (e.g. spun sugar for cotton candy , considered a type-level synonym), others are more dependent on context and differ for specific sentences (e.g. flight recorder and unknown process for black box , which can be considered token-level paraphrases).", "We classified the paraphrases as type or token level automatically, as follows: paraphrases proposed for all three sentences of a compound, or for two sentences with a frequency >= 3, were labeled as type-level synonyms; paraphrases proposed for only one sentence with a frequency >= 2 were labeled as token-level synonyms.", "In English, 9,690 different paraphrases were proposed by the annotators (34.60 per NC on average), and 3,554 were suggested by at least 5 participants (12.70 per NC on average).", "Out of them, 1,506 were classified as type level (5.4 synonyms per NC on average), and 353 as token level (0.42 per sentence, 1.3 per NC).", "Overall, 118 NCs have token-level synonyms for one sentence, 69 for two sentences, and 16 for all three sentences.", "For Portuguese, the annotators suggested a total of 6,579 paraphrases (314 by at least 5 participants and 764 by at least 3; an average of 4.2 per NC).", "Of these, 743 synonyms were proposed for the 180 compounds (an average of 4.1 per NC) and classified as type level.", "Concerning token-level synonyms, we collected 192 (1.1 per NC on average).", "In this case the total number of annotations was lower, and the final resource contains 61 NCs with token-level synonyms for one sentence, 38 for two sentences, and 6 compounds with token-level synonyms for all three sentences.", "The collection of paraphrases included in NCTTI makes this dataset a valuable resource for different evaluations, such as lexical substitution tasks and assessments of the ability of embedding models to correctly identify contextualised synonyms of NCs with different degrees of idiomaticity.",
"Table 4 shows an annotation example for the NC disc jockey , in English.", "It includes the three sentences together with the average idiomaticity score and both token-level and type-level paraphrases.", "This section presents some comparative analyses of the relevance of type and token annotation for idiomaticity detection.", "First, we adapt the type-level compositionality prediction approaches used on static word vectors (Mitchell and Lapata, 2010) to contextualised models (Nandakumar et al., 2019), here computing the correlation also at token level.", "In particular, the assumption is that compositionality can be approximated as the distance between the representation of an NC and the representation of the compositional combination of its individual components.", "Then, we measure whether the vector representations reflect the variability of the human annotators, who capture different nuances of the NCs depending on the sentences in which they occur.", "Similarly, in a third experiment we use the standard deviations of the idiomaticity scores in the three contexts to observe how the interpretation of the NCs varies across sentences, and whether this correlates with the contextualised representations produced by various models.", "More specifically, we assume that, if models adequately incorporate contextual information, the standard deviations of the similarities between the NCs in different contexts should be correlated with those of the human annotators.", "We evaluate four contextualised models: three BERT variants, based on the Transformer architecture (Vaswani et al., 2017), and ELMo, which learns word vectors using bidirectional LSTMs (Peters et al., 2018).", "For English we used the small ELMo model provided by Peters et al. (2018), BERT-Large uncased (Devlin et al., 2019), DistilBERT (Sanh et al., 2019), based on BERT-Base and distilled on the SQuAD dataset, and Sentence-BERT (Reimers and Gurevych, 2019), trained on BERT-Large and both MultiNLI (https://www.nyu.edu/projects/bowman/multinli/) and SNLI (https://nlp.stanford.edu/projects/snli/).", "For Portuguese we selected the ELMo pre-trained weights provided by Quinta de Castro et al. (2018) and the multilingual versions of the models used for English, namely mBERT (base cased), and both multilingual DistilBERT and Sentence-BERT (Reimers and Gurevych, 2020).", "As a static non-contextualised baseline we used GloVe (Pennington et al., 2014) (the official English models with 300 dimensions, trained on 840 billion tokens, and the equivalent Portuguese model released by Hartmann et al. (2017)).", "The vector representations were obtained with the flairNLP framework (Akbik et al., 2019) using the models provided by the transformers library (Wolf et al., 2020).", "The representations of NCs (and their sentences) were obtained by averaging the word (or subword, if adopted by the model) embeddings.", "We used the concatenation of the three layers for ELMo and of the last four hidden layers for the BERT models.", "Unsupervised type-level idiomaticity identification with static non-contextualised word embeddings often assumes that the similarity between the NC embedding and the compositional embedding of the component words (e.g. police car vs. police and car ) is an indication of idiomaticity (Mitchell and Lapata, 2010): the more similar they are, the more compositional the NC is.",
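A minimal sketch of this static-vector assumption, assuming GloVe vectors in which the NC has been pre-identified as a single token (e.g. police_car, as in Cordeiro et al. (2019)); the `glove` mapping and function names are hypothetical:

```python
# Static compositionality prediction: cosine between the NC's own vector
# and the additive combination of its component vectors.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def static_compositionality(nc, glove):
    """nc: a two-word compound such as 'police car'; glove: word -> vector.
    Assumes the joined form (police_car) exists in the vocabulary."""
    modifier, head = nc.split()
    nc_vec = glove[nc.replace(" ", "_")]       # NC treated as a single token
    comp_vec = glove[modifier] + glove[head]   # compositional combination
    return cosine(nc_vec, comp_vec)            # higher = more compositional
```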
"To approximate this with contextualised models, we calculate the cosine similarities between the contextualised vector of the NC in each sentence and two types of non-contextualised vectors.", "The first evaluates whether, even in the absence of an informative sentence context, each of the component words would be enough of a trigger to cue the NC meaning (e.g. eager for eager beaver ).", "This is implemented as the vector for the NC out of context, obtained by feeding the model only with the compound, dubbed NC_out (this representation is equivalent to the Avg Phrase used by Yu and Ettinger (2020)).", "The second non-contextualised vector evaluates whether the representations of the individual words have enough information to reconstruct the meaning of the NC in the absence of context and of the collocated component.", "It is implemented as the sum of the individual vectors of the NC components, where each NC component is fed individually to the model as a sentence, referred to as NC_out+Comp.", "In each case, we calculate two Spearman correlations with human judgments: at token level, using all the sentences for each language; and at type level, comparing the average cosine similarities of each NC with their compositionality scores at type level.", "We also compute correlations between the similarities and frequency-based data, namely the NC raw frequency and the PPMI (Church and Hanks, 1990) between its component words, to verify whether they have any impact on these measures of idiomaticity.", "The frequency data were obtained from ukWaC, with 2.25B tokens in English (Baroni et al., 2009), and brWaC, containing 2.7B tokens in Portuguese (Wagner Filho et al., 2018).", "The results by Cordeiro et al. (2019) suggested that if the two components of an NC are processed as a single token unit (for instance, by explicitly linking them with an underscore), the resulting static representation captures the NC idiomatic meaning.", "This is not surprising since by linking the two components we create a new word that would be treated by the model as completely independent of the preexisting component words.", "But such preprocessing may not be desirable or even feasible.", "In this sense, contextualised models seemed promising, since we expected that by processing a sentence with an idiomatic NC, the context would be enough to lead the model into linking the component words and assigning the corresponding idiomatic meaning.", "Figuratively speaking, the contextualised models would put the underscore for us.", "Therefore, if contextualised models capture idiomaticity, the similarity between the NC and NC_out+Comp (or NC_out) should correlate strongly with the idiomaticity scores of the NCs.", "Table 5 shows the significant correlations in English (top rows) and Portuguese (bottom).", "These results indicate at best weak (NC_out+Comp) to moderate (NC_out) correlations between models' predictions and human judgments, both at type and token levels.", "Moreover, the correlations obtained are much smaller than those found with the static models used by Cordeiro et al. (2019).",
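The NC_out and NC_out+Comp similarities described above could be approximated as in the following sketch; it simplifies the paper's setup by using only the last hidden layer (rather than the concatenation of the last four) and assumes the contextualised NC vector is supplied by the caller:

```python
# Compute NC_out and NC_out+Comp similarities with a BERT-style encoder.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def embed(text):
    """Mean of the subword vectors from the last hidden layer, dropping
    the [CLS] and [SEP] positions."""
    with torch.no_grad():
        h = model(**tok(text, return_tensors="pt")).last_hidden_state
    return h[0, 1:-1].mean(dim=0)

def nc_out_similarity(nc_in_context, nc):
    """cos(NC in sentence, NC fed to the model on its own)."""
    return F.cosine_similarity(nc_in_context, embed(nc), dim=0).item()

def nc_out_comp_similarity(nc_in_context, nc):
    """cos(NC in sentence, sum of component vectors fed individually)."""
    comp = sum(embed(w) for w in nc.split())
    return F.cosine_similarity(nc_in_context, comp, dim=0).item()
```

In the full setup, `nc_in_context` would be the average of the NC's subword vectors inside each annotated sentence, and the resulting similarities would be correlated with the human scores (e.g., with scipy's `spearmanr`) at token and type level.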
"For English, the best correlations (0.37) were obtained by BERT, while ELMo and Sentence-BERT achieved the best performance in Portuguese (0.27 and 0.26, respectively).", "In both languages, the lowest values were those of DistilBERT.", "It is worth noting that a direct comparison between the BERT models in the two languages should not be made, as they are monolingual (for English) and multilingual (for Portuguese).", "For PPMI, only weak positive correlations were found for ELMo and DistilBERT, indicating that for these models higher cosine values weakly imply NCs with stronger association scores.", "Moreover, weak to moderate negative correlations with frequency were found for the BERT models, suggesting that cosine similarity is higher for less frequent NCs.", "The differences between NC_out and NC_out+Comp indicate the importance of some degree of contextualisation (also found by Yu and Ettinger (2020)), even if only as one component contextualising the other in NC_out, which may not be retrievable from the combination of the context-independent vectors of the components (NC_out+Comp).", "This is in line with the original strategy used with static embeddings, which learns the distribution of the NCs pre-identified as single tokens in corpora and which resulted in significantly better correlations per type than any of the contextualised models (Cordeiro et al., 2019).", "To make a fairer comparison between the two approaches, we injected into the BERT models single representations for the NCs, learnt from the aforementioned ukWaC and brWaC corpora.", "We first annotated as single tokens in the corpus those NCs present in the dataset, and used attentive mimicking with one-token approximation (Schick and Schütze, 2019, 2020b) to learn from up to 500 contexts for each compound.", "After that, we injected these type-level vectors into the BERT models using BERTRAM (Schick and Schütze, 2020a).", "For English, these new representations obtained lower results than the original BERT in NC_out (e.g., 0.37 vs. 0.28 at type level), but higher in NC_out+Comp (0.16 vs. 0.33 at type level).", "For Portuguese, including single representations for the NCs in BERT improved the correlations in three of the four scenarios (except for NC_out at token level), but the best results were almost identical to those of ELMo (see the full results in the bottom rows of Table 5).", "Regarding the results reported by Nandakumar et al. (2019), for English our experiments yielded higher correlations for BERT and lower for ELMo (by about 0.3 in both cases, depending on the setting), which may be due to differences in how the vectors are generated (e.g., the use of different input sentences, hidden layers or compositional operations).",
"In sum, the results of these evaluations suggest that a straightforward adaptation of a compositionality prediction approach that led to good performance with static models was not as successful with contextualised models.", "We analyse whether models are able to capture differences in idiomaticity perceived by human annotators across the sentences in which an NC occurs, that is, whether an NC is found to be more idiomatic in one sentence than in others.", "For that, we created an annotators' vector for each sentence, combining the human scores into a three-dimensional vector representation, where the first dimension is the average NC compositionality, and the second and third are the average scores of the contributions of the head and of the modifier.", "To represent the sentences, we obtain embeddings by averaging their (sub)word vectors.", "For each of the possible pairs of the three sentences associated to each NC, we calculated", "(i) the Euclidean distances between the annotators' vectors and", "(ii) the cosine similarities between the sentence embeddings.", "Then, we measured the correlations between these values using Spearman's ρ.", "We aim to assess whether annotations and models indicate the same relative differences.", "The results were averaged over the 280 (English) and 180 (Portuguese) NCs.", "Table 6 shows the results for the whole datasets and divided by compositionality level.", "As we compare Euclidean distances with cosine similarities, negative values are actually positive correlations and vice versa.", "The average is close to 0, suggesting that the embedding models do not capture the nuances in idiomaticity perceived by the annotators between the different sentences per NC.", "We also analysed the similarity among the annotations for each NC in the three sentences, computing the standard deviations of the average compositionality scores given by the annotators.", "In contrast to the previous experiment, here we represent the human annotations using only the idiomaticity scores of the whole NCs, and the models' output as the contextualised embedding of the NCs in each sentence.", "At token level, most compounds (85.7% in English and 91.1% in Portuguese) have mean idiomaticity scores with a standard deviation below 0.6.", "Very few NCs have deviations higher than 1: five in English and four in Portuguese.", "Looking at the contexts in which they occur, the variability seems to be due to the different topics to which the sentences refer.", "For instance, the annotators identified two senses of firing line : one, more idiomatic, referring to a position in which someone is criticised (mean score of 1.25), and a second one (partially compositional, with an average of 2.7) referring to a specific position in an armed conflict.", "In Portuguese, céu aberto ('open-air', lit. 'open-sky') was interpreted as less compositional (1.2) when describing urban settings (e.g., open-air shopping centers) than when referring to wild places (e.g., lobas que lutavam a céu aberto , 'wolves fighting in the open'), with a mean idiomaticity score of 3.",
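A compact sketch of the per-NC comparison between annotator vectors and sentence embeddings described above, assuming `human` holds one 3-dimensional annotator vector per sentence and `emb` one sentence embedding per sentence; how the per-NC values are aggregated is our reading of the procedure:

```python
# Correlate annotator-vector distances with sentence-embedding similarities
# over the three pairwise comparisons of an NC's three sentences.
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def per_nc_correlation(human, emb):
    """human: list of three 3-d vectors (NC score, head, modifier);
    emb: list of three sentence embeddings (numpy arrays)."""
    dists, sims = [], []
    for i, j in combinations(range(3), 2):
        dists.append(np.linalg.norm(human[i] - human[j]))          # Euclidean
        sims.append(np.dot(emb[i], emb[j]) /
                    (np.linalg.norm(emb[i]) * np.linalg.norm(emb[j])))  # cosine
    # distances vs. similarities: a negative rho here indicates agreement
    return spearmanr(dists, sims).correlation
```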
"Spearman's ρ is not used here as a statistical test but as a measure to evaluate whether the sentence comparisons with the two different metrics yield the same relative differences.", "As there are only three sentences to compare, ρ assumes only four values: ±0.5 or ±1.", "To observe whether language models capture these differences across sentences, we calculated the cosine similarities between the NCs in the three sentences and the standard deviation of these three values.", "We then computed the Spearman correlations between these deviations obtained from the models' representations and those of the human annotations: all correlations were very low and not significant, suggesting that the vector representations do not capture the variability perceived by the annotators.", "Finally, we also selected two NCs in English with a combination of idiomatic and compositional meanings ( brick wall and gold mine ).", "In these examples, we found that for BERT (our best model) the cosine similarities between the idiomatic meanings were higher (0.83 in both cases) than between idiomatic and compositional senses (0.68 and 0.7, respectively), suggesting that it is to some extent identifying the different senses.", "However, since the highest standard deviations were achieved with NCs representing the same sense in all contexts (e.g., big wig and grass root ), further analysis is needed.", "As neither the cosine similarities obtained with BERT-based models nor the standard deviations between them were correlated with the variation in the human scores, these analyses suggest that state-of-the-art contextualised models still do not model semantic compositionality as human annotators do.", "The experiments performed in this section have shown, on the one hand, some of the possibilities of a multilingual dataset labeled at type and token level; on the other hand, the results also suggest that capturing idiomaticity is a hard task for current language models, as only some of them show moderate correlations with human annotations in some scenarios.", "This paper presented the NCTTI, a dataset of NCs in English and Portuguese annotated at type and token level with human judgments about idiomaticity, and with suggestions of paraphrases.", "The very strong correlations found between type and token judgments confirm the robustness of the scores, while the paraphrases provide further validation of the interpretation of the NCs.", "Moreover, evaluations involving embedding models with different levels of contextualisation suggest that they are still far from providing accurate estimates of NC idiomaticity, at least using the measures proposed and analysed in the paper.", "MWEs are still a pain in the neck for NLP, and datasets like the NCTTI can contribute towards finding better representations for them and better measures for idiomaticity identification.", "Future work includes using these NCs as seeds in cross-lingual representations for enriching the dataset with NC equivalents in different languages.", "Besides, we also plan to enlarge the datasets with a subset of sentences containing ambiguous NCs that have idiomatic and compositional interpretations depending on the context.", "Aline Villavicencio and Carolina Scarton are funded by the EPSRC project MIA: Modeling Idiomaticity in Human and Artificial Language Processing (EP/T02450X/1).",
"Marcos Garcia is funded by the Consellería de Cultura, Educación e Ordenación Universitaria of the Galician Government (ERDF 2014-2020: Call ED431G 2019/04), and by a Ramón y Cajal grant (RYC2019-028473-I)." ]
[ "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "objective", "objective", "other", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "other", "abstain", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "In conversation, uptake happens when a speaker builds on the contribution of their interlocutor by, for example, acknowledging, repeating or reformulating what they have said.", "In education, teachers' uptake of student contributions has been linked to higher student achievement.", "Yet measuring and improving teachers' uptake at scale is challenging, as existing methods require expensive annotation by experts.", "We propose a framework for computationally measuring uptake, by (1) releasing a dataset of student-teacher exchanges extracted from US math classroom transcripts annotated for uptake by experts; (2) formalizing uptake as pointwise Jensen-Shannon Divergence ( PJSD ), estimated via next utterance classification; (3) conducting a linguistically-motivated comparison of different unsupervised measures and (4) correlating these measures with educational outcomes.", "We find that although repetition captures a significant part of uptake, PJSD outperforms repetition-based baselines, as it is capable of identifying a wider range of uptake phenomena like question answering and reformulation.", "We apply our uptake measure to three different educational datasets with outcome indicators.", "Unlike baseline measures, PJSD correlates significantly with instruction quality in all three, providing evidence for its generalizability and for its potential to serve as an automated professional development tool for teachers.", "1 1 Introduction Building on the interlocutor's contribution via, for example, acknowledgment, repetition or elaboration (Figure 1), is known as uptake and is key to a successful conversation.", "Uptake makes an interlocutor feel heard and fosters a collaborative interaction (Collins, 1982; Clark and Schaefer, 1989), 1 Code and annotated data: https://github.com/ ddemszky/conversational-uptake I added 30 to 70 Okay.", "which is especially important in contexts like education.", "Teachers' uptake of student ideas promotes dialogic instruction by amplifying student voices and giving them agency in the learning process, unlike monologic instruction where teachers lecture at students (Bakhtin, 1981; Wells, 1999; Nystrand et al., 1997).", "Despite extensive research showing the positive impact of uptake on student learning and achievement (Brophy, 1984; O'Connor and Michaels, 1993; Nystrand et al., 2003), measuring and improving teachers' uptake at scale is challenging as existing methods require manual annotation by experts and are prohibitively resource-intensive.", "We introduce a framework for computationally measuring uptake.", "First, we create and release a dataset of 2246 student-teacher exchanges extracted from US elementary math classroom transcripts, each annotated by three domain experts for teachers' uptake of student contributions.", "We take an unsupervised approach to measure uptake in order to encourage domain-transferability and account for the fact that large amounts of labeled data are not possible in many contexts due to data privacy reasons and/or limited resources.", "We conduct a careful analysis of the role of repetition in uptake by measuring utterance overlap and similarity.", "We find that the proportion of student words repeated by the teacher (%IN-T ) captures a large part of uptake, and that surprisingly, word-level similarity measures consistently outperform sentence-level similarity measures, including ones involving sophisticated neural models.", "To capture uptake phenomena beyond repetition and in particular those relevant to teaching (e.g. 
"We quantify dependence via pointwise Jensen-Shannon divergence (PJSD), which captures how easily someone (e.g., a student) can distinguish the true reply from randomly sampled replies.", "We show that PJSD can be estimated via the cross-entropy loss obtained from next utterance classification (NUC).", "We train a model by fine-tuning BERT-base (Devlin et al., 2019) via NUC on a large, combined dataset of student-teacher interactions and Switchboard (Godfrey and Holliman, 1997).", "We show that scores obtained from this model significantly outperform our baseline measures.", "Using dialog act annotations on Switchboard, we demonstrate that PJSD is indeed better than %IN-T, our best-performing baseline, at capturing phenomena such as reformulation, question answering and collaborative completion.", "Our manual analysis also shows qualitative differences between the models: the examples where PJSD outperforms %IN-T are enriched in teacher prompts for elaboration, an exemplar of dialogic instruction (Nystrand et al., 1997).", "Finally, we find that our PJSD measure shows a significant linear correlation with outcomes such as student satisfaction and instruction quality across three different datasets of student-teacher interactions: the NCTE dataset (Kane et al., 2015), a one-on-one online tutoring dataset, and the SimTeacher dataset (Cohen et al., 2020).", "These results provide evidence for the generalizability of our PJSD measure and for its potential to serve as an automated tool to give feedback to teachers.", "Uptake has several linguistic and social functions.", "(1) It creates coherence between two utterances, helping structure the discourse (Halliday and Hasan, 1976; Grosz et al., 1977; Hobbs, 1979).", "(2) It is a mechanism for grounding , i.e. demonstrating understanding of the interlocutor's contribution by accepting it as part of the common ground (the shared set of beliefs among interlocutors) (Clark and Schaefer, 1989).",
"(3) It promotes collaboration with the interlocutor by sharing the floor with them and indicating that what they have said is important (Bakhtin, 1981; Nystrand et al., 1997).", "There are multiple linguistic strategies for uptake, such as acknowledgment, collaborative completion, repetition, and question answering; see Figure 1 for a non-exhaustive list.", "A speaker can use multiple strategies at the same time; for example, t3 in Figure 1 includes both acknowledgment and repetition.", "Different strategies can represent lower or higher uptake depending on how effectively they achieve the aforementioned functions of uptake.", "For example, Tannen (1987) argues that repetition is a highly pervasive and effective strategy for ratifying listenership and building a coherent discourse.", "In education, high uptake has been defined as cases where the teacher follows up on the student's contribution via a question or elaboration (Collins, 1982; Nystrand et al., 1997).", "We build on this literature from discourse analysis and education to build our dataset, to develop our uptake measure and to compare the ability of different measures to capture key uptake strategies.", "Despite the substantial literature on the functions of uptake, we are not aware of a publicly available dataset labeled for this phenomenon.", "To address this, we recruit domain experts (math teachers and raters trained in classroom observation) to annotate a dataset of exchanges between students and teachers.", "The exchanges are sampled from transcripts of 45-60 minute long 4th and 5th grade elementary math classroom observations collected by the National Center for Teacher Effectiveness (NCTE) between 2010-2013 (Kane et al., 2015).", "The transcripts represent data from 317 teachers across 4 school districts in New England that serve largely low-income, historically marginalized students.", "Transcripts are fully anonymized: student and teacher names are replaced with terms like Student, Teacher or Mrs. H.",
"Parents and teachers gave consent for the study (Harvard IRB #17768), and for de-identified data to be retained and used in future research.", "The transcripts were anonymized at the time they were created.", "Preparing utterance pairs.", "We prepare a dataset of utterance pairs (S, T), where S is a student utterance and T is a subsequent teacher utterance.", "The concept of uptake presupposes that there is something to be taken up; in our case, that the student utterance has substance.", "For example, short student utterances like yes or one-third do not present many opportunities for uptake.", "Based on our pilot annotations, these utterances are difficult for even expert annotators to label.", "Therefore, we only keep utterance pairs where S contains at least 5 tokens, excluding punctuation.", "We also remove all utterance pairs where the utterances contain an [Inaudible] marker, indicating low audio quality.", "Out of the remaining 55k (S, T) pairs, we sample 2246 for annotation.", "Annotation.", "Given that uptake is a subjective and heterogeneous construct, we relied heavily on domain expertise and took several other quality assurance steps for the annotation.", "As a result, the annotation took six months to develop and complete, longer than most other annotations in NLP for a similar data size (~2k examples).", "Our annotation framework for uptake was designed by experts in quality math instruction, including our collaborators, math teachers and raters for the Mathematical Quality Instruction (MQI) coding instrument, used to assess math instruction (Teaching Project, 2011).", "In the annotation interface, raters can see (1) the utterance pair (S, T), (2) the lesson topic, which is manually labeled as part of the original dataset, and (3) the two utterances immediately preceding (S, T) for context.", "Annotators are asked to first check whether (S, T) relates to math; e.g. 'Can I go to the bathroom?' is unrelated to math.", "If both S and T relate to math, raters are asked to select among three labels: low, mid and high, indicating the degree to which a teacher demonstrates that they are following what the student is saying or trying to say.", "The annotation framework is included in Appendix A.", "We recruited expert raters (with experience in teaching and classroom observation) whose demographics were representative of the US K-12 teacher population.", "We followed standard practices in education for rater training and calibration.", "We conducted several pilot annotation rounds (5+ rounds with a subset of raters, 2 rounds involving all 13 raters), quizzes for raters, thorough documentation with examples, and meetings with all raters.", "To enable potential analyses on the temporal dynamics of uptake, we randomly sampled 15 transcripts where we annotate all (S, T) pairs (constituting 29% of our annotations).", "The rest of the pairs are sampled from the remaining data.", "After training raters, we randomly assign each example to three raters.", "Post-processing and rater agreement.", "Table 1 includes a sample of our annotated data.", "Inter-rater agreement for uptake is Spearman ρ = .474 (Fleiss κ = .286), measured by (1) excluding examples where at least one rater indicated that the utterance pair does not relate to math; (2) converting raters' scores into numbers (low: 0, mid: 1, high: 2); (3) z-scoring each rater's scores; (4) computing a leave-out Spearman ρ for each rater by correlating their judgments with the average judgments of the other two raters; and (5) taking the average of the leave-out correlations across raters.",
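A minimal sketch of this leave-out agreement computation, assuming a hypothetical `ratings` structure of the form `{rater_id: {example_id: score}}` with the low/mid/high labels already converted to 0/1/2:

```python
# Average leave-out Spearman correlation across raters, following
# steps (2)-(5) above.
import numpy as np
from scipy.stats import spearmanr, zscore

def leave_out_agreement(ratings):
    raters = list(ratings)
    # z-score each rater's scores (step 3)
    z = {r: dict(zip(ratings[r], zscore(list(ratings[r].values()))))
         for r in raters}
    rhos = []
    for r in raters:
        held, others = [], []
        for ex in ratings[r]:
            peers = [z[o][ex] for o in raters if o != r and ex in z[o]]
            if peers:
                held.append(z[r][ex])
                others.append(np.mean(peers))   # average of the other raters
        rhos.append(spearmanr(held, others).correlation)  # step 4
    return float(np.mean(rhos))                            # step 5
```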
"Our inter-rater agreement values are comparable to those obtained with widely-used classroom observation protocols such as MQI and the Classroom Assessment Scoring System (CLASS) (Pianta et al., 2008) that include measures parallel to our uptake construct (see Kelly et al. (2020) for a summary).", "We obtain a single label for each example by averaging the z-scored judgments across raters.", "As we see in Table 1, examples labeled for high uptake tend to have overlap between S and T; this is expected, since incorporating the previous utterance in some form is known to be an important aspect of uptake (Section 2).", "Therefore, we begin by carefully analyzing repetition and defer discussion of more complex uptake phenomena to Section 5.", "To accurately quantify repetition-based uptake, we evaluate a range of metrics and surprisingly find that word-overlap-based measures correlate significantly better with uptake annotations than more sophisticated, utterance-level similarity measures.", "We prefer to use correlations because kappa has undesirable properties (see Delgado and Tibau, 2019) and correlations are more interpretable and directly comparable to our models' results (see later sections).", "This step is motivated by widely used education observation protocols such as MQI, which also clearly separate on- vs. off-task instruction.", "High interrater variability, especially when it comes to ratings of teacher quality, is widely documented by gold standard studies in the field of education (see Cohen and Goldhaber (2016) for a summary).", "We focus on unsupervised methods that enable scalability and domain-generalizability; please see Appendix B for supervised baselines.", "We use several algorithms to better understand whether word- or utterance-level similarity is a better measure of uptake.", "For each token-based algorithm, we experiment with several different choices of pre-processing as a way to get the best possible baselines to compare to.", "We include symbols for the set of choices yielding best performance: removing punctuation, removing stopwords using NLTK (Bird, 2006), and stemming via NLTK's SnowballStemmer.", "LCS : Longest Common Subsequence.", "%IN-T : Fraction of tokens from S that are also in T (Miller and Beebe-Center, 1956).", "[ ] %IN-S : Fraction of tokens from T that are also in S.", "[ ] JACCARD : Jaccard similarity (Niwattanakul et al., 2013).", "[ ] BLEU : BLEU score (Papineni et al., 2002) for up to 4-grams.", "We use S as the reference and T as the hypothesis.", "[ ] Embedding-based similarity.", "For the word vector-based metrics, we use 300-dimensional GloVe vectors (Pennington et al., 2014) pretrained on 6B tokens from Wikipedia 2014 and the Gigaword 5 corpus (Parker et al., 2011).", "GLOVE [ UTT ]: Cosine similarity of utterance vectors representing S and T.", "Utterance vectors are obtained by averaging word vectors from S and from T.", "[ ] SENTENCE-BERT : Cosine similarity of utterance vectors representing S and T, obtained using a pre-trained Sentence-BERT model for English (Reimers and Gurevych, 2019) (https://github.com/UKPLab/sentence-transformers).", "UNIVERSAL SENTENCE ENCODER : Inner product of utterance vectors representing S and T, obtained using a pre-trained Universal Sentence Encoder for English (Cer et al., 2018).",
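A minimal sketch of the token-overlap baselines with the pre-processing options mentioned above; it assumes the NLTK stopword data has been downloaded (`nltk.download('stopwords')`):

```python
# %IN-T and Jaccard overlap with punctuation removal, stopword removal
# and Snowball stemming.
import string
from nltk.corpus import stopwords
from nltk.stem.snowball import SnowballStemmer

STOP = set(stopwords.words("english"))
STEM = SnowballStemmer("english")

def preprocess(text, rm_punct=True, rm_stop=True, stem=True):
    tokens = text.lower().split()
    if rm_punct:
        tokens = [t.strip(string.punctuation) for t in tokens]
        tokens = [t for t in tokens if t]
    if rm_stop:
        tokens = [t for t in tokens if t not in STOP]
    if stem:
        tokens = [STEM.stem(t) for t in tokens]
    return tokens

def in_t(s, t):
    """%IN-T: fraction of tokens from S that also appear in T."""
    s_toks, t_toks = preprocess(s), set(preprocess(t))
    return sum(tok in t_toks for tok in s_toks) / len(s_toks) if s_toks else 0.0

def jaccard(s, t):
    s_set, t_set = set(preprocess(s)), set(preprocess(t))
    return len(s_set & t_set) / len(s_set | t_set) if s_set | t_set else 0.0
```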
"We compute correlations between model scores and human labels via the Spearman rank order correlation ρ.", "We perform bootstrap sampling (for 1000 iterations) to compute 95% confidence intervals.", "The results are shown in Table 2.", "Overall, we find that token-based measures outperform utterance-based measures, with %IN-T (ρ = .523), GLOVE [ ALIGNED ] (ρ = .518) (a soft word overlap measure) and BLEU (ρ = .510) performing the best.", "Even embedding-based algorithms that are computed at the utterance level do not outperform %IN-T, a simple word overlap baseline.", "It is noteworthy that all measures have a significant correlation with human judgments.", "The surprisingly strong performance of %IN-T, GLOVE [ ALIGNED ] and BLEU provides further evidence that the extent to which T repeats words from S is important for uptake (Tannen, 1987), especially in the context of teaching.", "The fact that removing stopwords helps these measures suggests that the repetition of function words is less important for uptake; an interesting contrast to linguistic style coordination, in which function words play a key role (Danescu-Niculescu-Mizil and Lee, 2011).", "Moreover, the amount of words T adds beyond the words from S also seems relatively irrelevant, based on the lower performance of the measures that penalize T for containing words that are not in S; examples in Table 1 also support this result.", "Now we introduce our main uptake measure, used to capture a broader range of uptake phenomena beyond repetition, including, e.g., acknowledgment and question answering (Section 2).", "We formalize uptake as the dependence of T on S, captured by the Jensen-Shannon Divergence, which quantifies the extent to which we can tell whether T is a response to S or a random response (T').", "If we cannot tell the difference between T and T', we argue that there can be no uptake, as T fails all three functions of coherence, grounding and collaboration.", "We can formally define the dependence for a single teacher-student utterance pair (s, t) in terms of a pointwise variant of JSD (PJSD) as pJSD(t, s) = (1/2) ( log P(Z = 1 | M = t, s) + E[ log(1 - P(Z = 1 | M = T', s)) ] ) + log 2   (1), where (S, T) is a teacher-student utterance pair, T' is a randomly sampled teacher utterance that is independent of S, and M = Z T + (1 - Z) T' is a mixture of the two with a binary indicator variable Z ~ Bern(p = 0.5).",
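A minimal sketch of the pJSD estimate in Equation 1, assuming `f` is a trained classifier returning an estimate of P(Z = 1 | candidate reply, s); the function and argument names are hypothetical:

```python
# Pointwise JSD estimate for a single (t, s) pair, with the expectation
# over T' approximated by a sample of random teacher utterances.
import numpy as np

def pjsd(f, s, t, negative_replies):
    """negative_replies: teacher utterances T' sampled independently of s."""
    pos = np.log(f(t, s))                                   # log P(Z=1 | M=t, s)
    neg = np.mean([np.log(1.0 - f(tp, s)) for tp in negative_replies])
    return 0.5 * (pos + neg) + np.log(2.0)
```

In practice (see below), f(t, s) itself, a monotone function of this estimate, is used as the pointwise uptake measure.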
"This pointwise measure relates to the standard JSD between T | S = s and T' by taking expectations over the teacher utterance: E[ pJSD(T, s) | S = s ] = JSD( T | S = s || T' ).", "We consider the pointwise variant for the rest of the section, as we are interested in a measure of dependence between a specific (t, s) rather than one that is averaged over multiple teacher utterances.", "The definition of PJSD naturally suggests an estimator based on the next utterance classification task, a task previously used in neighboring NLP areas like dialogue generation and discourse coherence.", "We fine-tune a pre-trained BERT-base model (Devlin et al., 2019) on a dataset of (S, T) pairs to predict whether a specific (s, t) is a true pair or not (i.e., whether t came from T or T').", "The objective function is the cross-entropy loss, computed over the output of the final classification layer that takes in the last hidden state of t.", "Let Ẑ be a binary indicator variable representing the model's prediction.", "Then, the cross-entropy loss for identifying Z is L(t, s) = -log f(t, s) - E[ log(1 - f(T', s)) ]   (2), which can be used directly as an estimator for the log-probability terms in Equation 1: pJSD_hat(t, s) = -(1/2) L(t, s) + log 2.", "Standard variational arguments (Nowozin et al., 2016) show that any classifier f forms a lower bound on the JSD.", "Thus, our overall procedure is to fit f(t, s) by maximizing E[ pJSD_hat(t, s) ] over our dataset and then use f(t, s) (a monotone function of pJSD_hat(t, s)) as our pointwise measure of dependence.", "Training data.", "We use (S, T) pairs from three sources to form our training data: the NCTE dataset (Kane et al., 2015) (Section 3), Switchboard (Godfrey and Holliman, 1997) and a one-on-one online tutoring dataset (Section 6); we use a combination of datasets instead of a single dataset in order to support the generalizability of the model.", "Filtering out examples with |S| < 5 tokens or [Inaudible] markers (Section 3), our resulting dataset consists of 259k (S, T) pairs.", "For each (s, t) pair, we randomly select 3 negative (s, t') pairs from the same source dataset, yielding 777k examples.", "We do not split the data into training and validation sets, as we found that using predictions on the training data vs. those on the test data as our uptake measure yields similar results, so we opted for maximizing training data size.", "Parameter settings.", "We use a maximum of 120 tokens for S and T each (the rest is truncated), a learning rate of 6.24e-5 with linear decay and the AdamW optimizer (Loshchilov and Hutter, 2017).", "Training took about 13hrs on a single TitanX GPU.", "Table 3 shows that the PJSD model (ρ = .540) significantly outperforms %IN-T.", "Our rough estimate of the upper bound on rater agreement (ρ = .539, obtained from a pilot annotation where all 13 raters rated 70 examples) indicates that our best model's scores are in a similar range as human agreement.",
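A minimal sketch of scoring with a NUC classifier of this kind; it approximates the paper's classification head (over the last hidden state of t) with a standard sequence-pair classification head, and the fine-tuned checkpoint path is hypothetical:

```python
# f(t, s): probability that t is the true reply to s, from a fine-tuned
# BERT-base next-utterance classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
nuc = AutoModelForSequenceClassification.from_pretrained(
    "path/to/finetuned-nuc-model", num_labels=2).eval()

def f(t, s):
    enc = tok(s, t, return_tensors="pt", truncation=True, max_length=240)
    with torch.no_grad():
        logits = nuc(**enc).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()  # P(true pair)
```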
"Table 4 includes illustrative examples of model predictions.", "Our qualitative comparison of PJSD and %IN-T indicates that (1) the capability of PJSD to differentiate between more and less important words in terms of uptake (Examples 1 and 6) accounts for many cases where PJSD is more accurate than %IN-T, (2) neither model is able to capture rare and semantically deep forms of uptake (Example 3), and (3) PJSD generally gives higher scores than %IN-T to coherent responses with limited word overlap (Example 5).", "Comparison of linguistic phenomena.", "To understand whether there is a pattern explaining PJSD's better performance, we quantify the occurrence of different linguistic phenomena in examples where PJSD outperforms %IN-T.", "Concretely, we compute the residuals for each model, regressing the human labels on their predictions.", "Then, we take those examples where the difference between the two models' residuals is 1.5 standard deviations above the mean difference between their residuals.", "We label teacher utterances in these examples for four linguistic phenomena associated with uptake and good teaching (elaboration prompt, reformulation, collaborative completion, and answer to question), allowing multiple labels (e.g. elaboration prompt and completion often co-occur).", "Human agreement and model scores are not directly comparable.", "The human agreement values (as reported here for 13 raters and in Section 3 for 3 raters) are averaged leave-out estimates across raters (skewed downward).", "The models' scores represent correlations with an averaged human score, which smooths over the interrater variance of 3 raters.", "As Table 5 shows, elaboration prompts, which are exemplars of high uptake in teaching (Nystrand et al., 1997), are significantly more likely to occur in this set, suggesting that there is a qualitative difference, relevant for teaching, between what these models capture.", "We do not find a significant difference in the occurrence of reformulations, collaborative completions and answers between the two sets, possibly due to the small sample size (n=67).", "To see whether these differences are significant on a larger dataset, we now turn to the Switchboard dialogue corpus.", "Switchboard dialog acts.", "We take advantage of dialog act annotations on Switchboard (Jurafsky et al., 1997) to compare uptake phenomena captured by %IN-T and PJSD at a large scale.", "We identify five uptake phenomena labeled in Switchboard and map them to SWBD-DAMSL tags: acknowledgment, answer, collaborative completion, reformulation and repetition (see details in Appendix C).", "We estimate scores for %IN-T and PJSD for all utterance pairs (S, T) in Switchboard, filtering out ones where |S| < 5 tokens.", "We apply our PJSD model from Section 5.1, which was partially fine-tuned on Switchboard.", "We label examples with above-average uptake scores, as there is no trivial interpretation for uptake strategies labeled on low-uptake examples.", "Since both measures are bounded, we quantile-transform the distribution of each measure to have a uniform distribution.", "For each uptake phenomenon, we compute the difference (∆) between the median score from PJSD and the median score from %IN-T over all (S, T) pairs where T is labeled for that phenomenon.", "The results (Figure 2) show that PJSD predicts significantly higher scores than %IN-T for all phenomena, especially for answers, reformulations, collaborative completions and acknowledgments.",
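A minimal sketch of this per-phenomenon comparison; the rank-based quantile transform is one concrete choice (the paper does not specify the implementation), and the data structures are hypothetical:

```python
# Quantile-transform each measure to a uniform distribution, then compute
# the median difference over pairs labeled with a given dialog act.
import numpy as np
from scipy.stats import rankdata

def quantile_transform(scores):
    return (rankdata(scores) - 0.5) / len(scores)   # approx. uniform on (0, 1)

def median_delta(pjsd_scores, int_scores, labels, phenomenon):
    """labels: one set of dialog-act tags per (S, T) pair."""
    pjsd_u = quantile_transform(np.asarray(pjsd_scores))
    int_u = quantile_transform(np.asarray(int_scores))
    mask = np.array([phenomenon in l for l in labels])
    return np.median(pjsd_u[mask]) - np.median(int_u[mask])
```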
"For repetition, ∆ is quite small, but still significant due to the large sample size.", "These findings corroborate our hypothesis that %IN-T and PJSD capture repetition similarly, but that PJSD is able to better capture other uptake phenomena.", "To test the generalizability of our uptake measures and their link to instruction quality, we correlate PJSD and %IN-T with educational outcomes on three different datasets of student-teacher interactions (Table 6).", "NCTE dataset.", "We use all transcripts from the NCTE dataset (Kane et al., 2015) (Section 3) with associated classroom observation scores based on the MQI coding instrument (Teaching Project, 2011).", "We select two items from MQI relevant to uptake as outcomes: (1) use of student math contributions and (2) overall quality of math instruction.", "Since these items are coded at a 7-minute segment level, we take the average ratings across raters and segments for each transcript.", "Tutoring dataset.", "We use data from an educational technology company (same as in Chen et al., 2019), which provides on-demand text-based tutoring for math and science.", "Table 6 columns: Dataset, Size, Genre, Topic, Class size, Outcome, PJSD (ρ), %IN-T (ρ); its first row covers the NCTE dataset (1.6k conversations).", "With a mobile application, a student can take a picture of a problem or write it down, and is then connected to a professional tutor who guides the student to solve the problem.",
Where in the text did you see that?; What made you think this about the character?).", "As outcomes are linked to conversations, we first mean-aggregate uptake scores to the conversation-level.", "We then compute the correlation of uptake scores and outcomes using an ordinary least squares regression, controlling for the number of ( S, T ) pairs in each conversation.", "The results (Table 6) indicate that PJSD correlates with all of the outcome measures significantly.", "%IN-T also shows significant correlations for NCTE and for SimTeacher, but not for the tutoring dataset.", "We provide more details below.", "For NCTE and SimTeacher, we find that two measures show similar positive correlations with outcomes.", "These results provide further insight into our earlier findings from Section 5.2.", "They suggest that the teacher's repetition of student words, also known as revoicing in math education (Forman et al., 1997; O'Connor and Michaels, 1993), may be an especially important mediator of instruction quality in classroom contexts and other aspects of uptake are relatively less important.", "The significant correlation of PJSD with the outcome in case of SimTeacher is especially noteworthy because PJSD was not fine-tuned on this dataset (Section 5.1); this provides evidence for the adaptability of a pretrained model to other (similar) datasets.", "The gap between the two measures in case of the tutoring dataset is an interesting finding, possibly explained by the conversational setting: repetition may be an effective uptake strategy in multi-participant & spoken settings, ensuring that everyone has heard what the student said and is on the same page; whereas, in a written 1:1 teaching setting, repetition may not be necessary or effective as both participants are likely to assume that that their interlocutor has read their words.", "Our qualitative analysis suggests PJSD might be outperforming %IN-T because it is better able to pick up high student feedback (%IN-T < PJSD ) low student feedback ( PJSD < %IN-T ) S: if they're the same length i think T: that's right! all we need is the length, and that's enough.", "on cues related to teacher responsiveness (we include two examples in Table 7).", "To test this, we detect coarse-grained estimates of teacher uptake: teacher question marks (estimate of follow-up question) and teacher exclamation marks (estimate of approval).", "We then follow the same procedure as in Section 5.2 and find that dialogs where PJSD outperforms %IN-T , in terms of predicting student ratings, have a higher ratio of exchanges with teacher questions ( p < 0 . 05 , obtained from two-sample t-test) and teacher exclamation marks ( p < 0 . 
"To put these effect sizes from Table 6 (where significant) in the context of education interventions that are designed to increase student outcomes (typically test scores), the coefficients we report here are considered average for an effective educational intervention (Kraft, 2020).", "Further, existing guidelines for educational interventions would classify uptake as a promising potential intervention, as it is highly scalable and easily quantified.", "Prior computational work on classroom discourse has employed supervised, feature-based classifiers to detect teachers' discourse moves relevant to student learning, such as authentic questions, elaborated feedback and uptake, treating these moves as binary variables (Samei et al., 2014; Donnelly et al., 2017; Kelly et al., 2018; Stone et al., 2019; Jensen et al., 2020).", "Our labeled dataset, unsupervised approach (involving a state-of-the-art pre-trained model), and careful analysis across domains are novel contributions that will enable a fine-grained and domain-adaptable measure of uptake that can support researchers and teachers.", "Our work aligns closely with research on the computational study of conversations.", "For example, measures have been developed to study constructiveness (Niculae and Danescu-Niculescu-Mizil, 2016), politeness (Danescu-Niculescu-Mizil et al., 2013) and persuasion (Tan et al., 2016) in conversations.", "Perhaps most similar to our work, Zhang and Danescu-Niculescu-Mizil (2020) develop an unsupervised method to identify therapists' backward- and forward-looking utterances, with which they guide their conversations.", "We also draw on work measuring discourse coherence via embedding cosines (Xu et al., 2018; Ko et al., 2019) or via utterance classification (Xu et al., 2019; Iter et al., 2020), the latter of which is also used for building and evaluating dialog systems (Lowe et al., 2016; Wolf et al., 2019).", "Our work extends these two families of methods to human conversation and highlights the different linguistic phenomena they capture.", "Finally, our work shows the key role of coherence in the socially important task of studying uptake.", "We propose a framework for measuring uptake, a core conversational phenomenon with particularly high relevance in teaching contexts.", "We release an annotated dataset and develop and compare unsupervised measures of uptake, demonstrating significant correlations with educational outcomes across three datasets.", "This lays the groundwork (1) for scaling up teachers' professional development on uptake, thereby enabling improvements to education, (2) for conducting analyses on uptake across domains and languages where labeled data does not exist, and (3) for studying the effect of uptake on a wider range of socially relevant outcomes.", "We thank the anonymous reviewers, Amelia Hardy, Aswhin Paranjape and Yiwei Luo for helpful feedback.", "We are grateful for the support of the Melvin and Joan Lane Stanford Graduate Fellowship (to D.D.).", "Our objective in building a dataset and a framework for measuring uptake is (1) to aid researchers studying conversations and teaching and (2) to (ultimately) support the professional development of educators by providing them with a scalable measure of a phenomenon that supports student learning.", "Our second objective is especially important, since existing forms of professional development aimed at improving uptake are highly resource-intensive (involving classroom observations and manual evaluation).", "This costliness has meant that teachers working in under-resourced school systems have thus far had limited access to quality professional development in this area.",
"The dataset we release is sampled from transcripts collected by the National Center for Teacher Effectiveness (NCTE) (Kane et al., 2015) (Harvard IRB #17768).", "These transcripts represent data from 317 teachers across 4 school districts in New England that serve largely low-income, historically marginalized students.", "The data was collected as part of a carefully designed study on teacher effectiveness, spanning three years between 2010 and 2013, and it was de-identified by the original research team, meaning that in the transcripts, student names are replaced with Student and teacher names are replaced with Teacher.", "Both parents and teachers gave consent for the de-identified data to be retained and used in future research.", "The collection process and representativeness of the data are described in great detail by Kane et al. (2015).", "Given that the dataset was collected a decade ago, there may be limitations to its use and ongoing relevance.", "That said, research in education reform has long attested to the fact that teaching practices have remained relatively constant over the past century (Cuban, 1993; Cohen and Mehta, 2017) and that there are strong socio-cultural pressures that maintain this (Cohen, 1988).", "The data was annotated by 13 raters, whose demographics are largely representative of teacher demographics in the US (https://nces.ed.gov/fastfacts/display.asp?id=28).", "All raters have domain expertise, in that they are former or current math teachers and former or current raters for the Mathematical Quality Instruction instrument (Teaching Project, 2011).", "The raters were trained for at least an hour each on the coding instrument, spent 8 hours on average on the annotation (over the course of several weeks) and were compensated $16.5/hr.", "In Section 6, we apply our data to two educational datasets besides NCTE.", "We do not release either of these datasets.", "The SimTeacher dataset was collected by Cohen et al. (2020) (University of Virginia IRB #2918) for research and program improvement purposes.", "The participants in the study are mostly white (82%), female (90%), and middle class (71%), mirroring the broader teaching profession.", "As for the tutoring dataset, the data belongs to a private company; the students and tutors have given consent for their data to be used for research, with the goal of improving the company's services.", "The company works with a large number of tutors and students; we use data that represents 108 tutors and 1821 students.", "70% of tutors in the data are male, complementing the other datasets, where the majority of teachers are female.", "The company does not share other demographic information about tutors and students.", "Similarly to other data-driven approaches, it is important to think carefully about the source of the training data when considering downstream use cases of our measure.", "Our unsupervised approach helps address this issue, as it allows for training the model on data that is representative of the population that it is meant to serve." ]
[ "abstain", "abstain", "abstain", "objective", "result", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "objective", "method", "method", "result", "objective", "method", "result", "method", "result", "objective", "result", "result", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "other", "result", "abstain", "abstain", "result", "result", "abstain", "other", "objective", "abstain", "other", "abstain", "method", "objective", "abstain", "objective", "objective", "abstain", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method" ]
[ "Recently, the Transformer model (Vaswani et al., 2017) that is based solely on attention mechanisms, has advanced the state-of-the-art on various machine translation tasks.", "However, recent studies reveal that the lack of recurrence hinders its further improvement of translation capacity (Chen et al., 2018; De-hghani et al., 2019).", "In response to this problem, we propose to directly model recurrence for Transformer with an additional recurrence encoder.", "In addition to the standard recurrent neural network, we introduce a novel attentive recurrent network to leverage the strengths of both attention and recurrent networks.", "Experimental results on the widely-used WMT14 English German and WMT17 Chinese English translation tasks demonstrate the effectiveness of the proposed approach.", "Our studies also reveal that the proposed model benefits from a short-cut that bridges the source and target sequences with a single recurrent layer, which outperforms its deep counterpart.", "Recently, Transformer (Vaswani et al., 2017) a new network architecture based solely on attention mechanisms, has advanced the state-of-the-art on various translation tasks across language pairs.", "Compared with the conventional recurrent neural network (RNN) (Schuster and Paliwal, 1997) based model that leverages recurrence as the ba-sic building module (Sutskever et al., 2014; Bah-danau et al., 2015; Chen et al., 2018), Transformer replaces RNN with self-attention network (SAN) to model the dependencies among input elements.", "One appealing strength of SAN is that it breaks Zhaopeng Tu is the corresponding author of the paper.", "This work was conducted when Jie Hao and Baosong Yang were interning at Tencent AI Lab.", "down the sequential assumption to obtain the ability of highly parallel computation: input elements interact with each other simultaneously without regard to their distance.", "However, prior studies empirically show that the lack of recurrence modeling hinders Transformer from further improvement of translation quality (Dehghani et al., 2019).", "Modeling recurrence is crucial for capturing several essential properties of input sequence, such as structural representations (Tran et al., 2016) and positional encoding (Shaw et al., 2018), which are exactly the weaknesses of SAN (Tran et al., 2018).", "Recently, Chen et al. 
"Starting from these findings, we propose to directly model recurrence for Transformer with an additional recurrence encoder.", "The recurrence encoder recurrently reads the word embeddings of the input sequence and outputs a sequence of hidden states, which serves as an additional information source for the Transformer decoder.", "In addition to the standard RNN, we propose to implement recurrence modeling with a novel attentive recurrent network (ARN), which combines the advantages of both SAN and RNN.", "Instead of recurring over the individual symbols of sequences like RNN, the ARN recurrently revises its representations over a set of feature vectors, which are extracted by an attention model from the input sequence.", "Accordingly, ARN combines the strong global modeling capacity of SAN with the recurrent bias of RNN.", "We evaluate the proposed approach on the widely used WMT14 English-German and WMT17 Chinese-English translation tasks.", "[Figure 1: The architecture of Transformer.]", "In Transformer, SELF-ATT(·) computes attention over the input $H_e^{n-1}$ as follows: $\mathrm{SELF\text{-}ATT}(H_e^{n-1}) = \mathrm{softmax}\big(\frac{QK^{\top}}{\sqrt{d_k}}\big)V$ (3), where $\{Q, K, V\}$ are the query, key and value vectors transformed from the input representations $H_e^{n-1}$.", "$\sqrt{d_k}$ is a scaling factor, where $d_k$ is the dimension size of the query and key vectors.", "Formally, the output of the first sub-layer $C_e^n$ and the second sub-layer $H_e^n$ are sequentially calculated as: $C_e^n = \mathrm{LN}(\mathrm{SELF\text{-}ATT}(H_e^{n-1}) + H_e^{n-1})$ (1) and $H_e^n = \mathrm{LN}(\mathrm{FFN}(C_e^n) + C_e^n)$ (2), where SELF-ATT(·), LN(·), and FFN(·) are respectively the self-attention mechanism, layer normalization, and a feed-forward network with ReLU activation in between.", "Experimental results show that the additional recurrence encoder, implemented with either RNN or ARN, consistently improves translation performance, demonstrating the necessity of modeling recurrence for Transformer.", "Specifically, the ARN implementation outperforms its RNN counterpart, which confirms the strength of ARN.", "Further analyses reveal that our approach benefits from a short-cut that bridges the source and target sequences with a shorter path.", "Among all the model variants, the implementation with the shortest path performs best, in which the recurrence encoder is a single layer and its output is fed only to the top decoder layer.", "It consistently outperforms its deeper counterparts, such as a multiple-layer recurrence encoder or feeding the output of the recurrence encoder to all the decoder layers.", "In addition, linguistic analyses on probing tasks (Conneau et al., 2018) reveal that our approach indeed generates more informative encoder representations, especially with respect to syntactic structure features.",
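To make the self-attention sub-layer concrete, the following is a minimal single-head PyTorch sketch of Equations 1-3; the paper uses multi-head attention, and all module and variable names here are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EncoderLayer(nn.Module):
    """Single-head sketch of Equations 1-3 (the paper uses multi-head
    attention; names here are illustrative)."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, d_model), i.e. H_e^{n-1}
        q, k, v = self.q(h), self.k(h), self.v(h)
        scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)  # Eq. 3
        c = self.norm1(F.softmax(scores, dim=-1) @ v + h)       # Eq. 1
        return self.norm2(self.ffn(c) + c)                      # Eq. 2
```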
"Figure 1 shows the model architecture of Transformer.", "The encoder is composed of a stack of N identical layers, each of which has two sub-layers.", "The first sub-layer is a self-attention network, and the second one is a position-wise fully connected feed-forward network.", "A residual connection (He et al., 2016) is employed around each of the two sub-layers, followed by layer normalization (Ba et al., 2016).", "The decoder is also composed of a stack of N identical layers.", "In addition to the two sub-layers in each decoder layer, the decoder inserts a third sub-layer $D_d^n$ to perform attention over the output of the encoder $H_e^N$: $C_d^n = \mathrm{LN}(\mathrm{SELF\text{-}ATT}(H_d^{n-1}) + H_d^{n-1})$ (4), $D_d^n = \mathrm{LN}(\mathrm{ATT}(C_d^n, H_e^N) + C_d^n)$ (5), and $H_d^n = \mathrm{LN}(\mathrm{FFN}(D_d^n) + D_d^n)$ (6), where $\mathrm{ATT}(C_d^n, H_e^N)$ denotes attending to the top encoder layer $H_e^N$ with $C_d^n$ as the query.", "The top layer of the decoder $H_d^N$ is used to generate the final output sequence.", "In this section, we first describe the architecture of the introduced recurrence encoder and elaborate on the two types of neural networks that are used as the recurrence encoder in this work.", "Then we introduce the integration of the recurrence encoder into the Transformer.", "Specifically, two strategies are presented to fuse the representations produced by the recurrence encoder and the conventional encoder.", "Finally, we present the short-cut connection between the recurrence encoder and the decoder, which we found very effective for using the learned representations to improve translation performance under the proposed architecture.", "Figure 3 illustrates the two implementations of recurrence modeling: (a) the standard RNN, and", "(b) the proposed ARN.", "Figure 2 shows the architecture of the introduced recurrence encoder, which reads the word embeddings of the source words and outputs a sequence of hidden states that embed recurrent information.", "Similar to the Transformer encoder, it has a stack of N identical layers, each of which has two sub-layers.", "The first one is a recurrence modeling network and the second is a fully connected feed-forward network: $C_r^n = \mathrm{LN}(\mathrm{REC}(H_r^{n-1}) + H_r^{n-1})$ (7) and $H_r^n = \mathrm{LN}(\mathrm{FFN}(C_r^n) + C_r^n)$ (8), where REC(·) is the function of recurrence modeling.", "Note that at the bottom layer of the recurrence encoder ($n = 1$), we do not employ a residual connection on the recurrence sub-layer (i.e., Equation 7), which releases the constraint that $C_r^1$ should share the same length as the input embedding sequence $E_{in}$.", "This offers a more flexible choice of the recurrence functions.", "There are many possible ways to implement the general idea of recurrence modeling REC(·).", "The aim of this paper is not to explore this whole space but simply to show that some fairly straightforward implementations work well.", "In this work, we investigate two representative implementations, namely RNN and its variant, the attentive recurrent network, which combines the advantages of both RNN and attention models, as shown in Figure 3. (Footnote 1: The input of the lowest layer in the recurrence encoder is the word embeddings of the input sequence $E_{in}$.)",
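A minimal PyTorch sketch of the generic recurrence encoder layer (Equations 7-8); the `rec` module stands for any REC(·) implementation (a BiRNN or the ARN below), and the `is_bottom` flag reproduces the omitted residual at the bottom layer. Names and signatures are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class RecurrenceEncoderLayer(nn.Module):
    """Sketch of Equations 7-8. Dropping the residual at the bottom
    layer (n = 1) lets the output length differ from the input length."""
    def __init__(self, d_model: int, d_ff: int, rec: nn.Module, is_bottom: bool):
        super().__init__()
        self.rec = rec
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.is_bottom = is_bottom

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        r = self.rec(h)
        c = self.norm1(r if self.is_bottom else r + h)  # Eq. 7
        return self.norm2(self.ffn(c) + c)              # Eq. 8
```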
"Recurrent Neural Network (RNN) An intuitive choice of recurrence modeling is RNN, which is a standard network for modeling sequence order.", "In this work, we use a bidirectional RNN (BiRNN), which is widely applied in RNN-based NMT models (Bahdanau et al., 2015; Chen et al., 2018).", "Each hidden state in the output representations $H_{\mathrm{RNN}}^n = \{h_1^n, \ldots, h_J^n\}$ is calculated as $h_j^n = [\overrightarrow{h}_j; \overleftarrow{h}_j]$ (9), $\overrightarrow{h}_j = \overrightarrow{f}(\overrightarrow{h}_{j-1}, h_j^{n-1})$ (10), and $\overleftarrow{h}_j = \overleftarrow{f}(\overleftarrow{h}_{j+1}, h_j^{n-1})$ (11), where $\overrightarrow{f}(\cdot)$ and $\overleftarrow{f}(\cdot)$ are the activation functions of the forward and backward RNN respectively, which can be implemented as LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Cho et al., 2014).", "$h_0^n$ is the initial state of the RNN, which is the mean of $H_{\mathrm{RNN}}^{n-1}$.", "$H_{\mathrm{RNN}}^0$ represents the word embeddings of the input sequence.", "Attentive Recurrent Network (ARN) We can also extend RNN by recurring over a set of feature vectors extracted with an attention model, which allows the model to learn compact, abstractive feature vectors over the input sequence.", "Specifically, the ARN performs $T$ recurrent steps on the attentive output of the input representation $H_r^{n-1}$: $h_t^n = f(h_{t-1}^n, c_t^n)$ (12), with $c_t^n = \mathrm{ATT}(h_{t-1}^n, H_r^{n-1})$ (13).", "The output representations $H_{\mathrm{ARN}}^n = \{h_1^n, \ldots, h_T^n\}$ are fed to the subsequent modules.", "Analogous to Equations 9-11, ARN can be extended to the bidirectional variant, i.e., BiARN, [Figure 4: The two strategies for integrating the output of the recurrence encoder into the decoder: gated sum and stack.]", "except that the input is the attentive context vector $c_t^n$ rather than the individual representation vectors of the input sequence.", "Note that the number of recurrent steps $T$ is allowed to be unequal to the length of the input sequence $J$.", "In contrast to RNN, which recurs over the individual symbols of the input sequences, ARN recurrently revises its representations of all symbols in the sequence with an attention model.",
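A minimal, unidirectional PyTorch sketch of the ARN recurrence (Equations 12-13), assuming a GRU cell for f(·) and a simple dot-product attention with a learned query projection for ATT(·); the paper's BiARN runs this in both directions, and the exact attention parameterization is not specified in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveRecurrentNetwork(nn.Module):
    """Sketch of Equations 12-13: a GRU cell recurs T times over
    attentive summaries of the layer input, so the output length T
    need not equal the source length J."""
    def __init__(self, d_model: int, n_steps: int):
        super().__init__()
        self.cell = nn.GRUCell(d_model, d_model)
        self.query = nn.Linear(d_model, d_model)
        self.n_steps = n_steps

    def forward(self, h_in: torch.Tensor) -> torch.Tensor:
        # h_in: (batch, J, d_model), i.e. H_r^{n-1}
        state = h_in.mean(dim=1)  # initial state: mean of the layer input
        outputs = []
        for _ in range(self.n_steps):
            # Eq. 13: c_t = ATT(h_{t-1}, H_r^{n-1})
            scores = torch.bmm(h_in, self.query(state).unsqueeze(-1))
            ctx = (F.softmax(scores, dim=1) * h_in).sum(dim=1)
            state = self.cell(ctx, state)  # Eq. 12
            outputs.append(state)
        return torch.stack(outputs, dim=1)  # (batch, T, d_model)
```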
"Since the output of the recurrence encoder does not necessarily share the same length as that of the Transformer encoder (e.g., when ARN is used as the recurrence function), a combination strategy on the encoder side, such as concatenating the outputs of both encoders (Chen et al., 2018), is not a universal solution in this scenario.", "Accordingly, we feed the information of the additional recurrence encoder into the decoder of Transformer.", "Specifically, we introduce an additional attention layer $R_d^n$ as the fourth sub-layer in each decoder block to perform attention over the output of the recurrence encoder $H_r^N$.", "As shown in Figure 4, we present two strategies to integrate $R_d^n$, namely gated sum and stack, which differ in how $R_d^n$ interacts with the output of the attention over the Transformer encoder, i.e., $D_d^n$ in Equation 5.", "Gated Sum The first strategy combines the outputs of the two attention sub-layers in a gating fusion (Figure 4a),", "in which the outputs of both encoders are attended simultaneously: $R_d^n = \mathrm{LN}(\mathrm{ATT}(C_d^n, H_r^N) + C_d^n)$ (14), $\widehat{D}_d^n = \lambda^n D_d^n + (1 - \lambda^n) R_d^n$ (15), and $H_d^n = \mathrm{LN}(\mathrm{FFN}(\widehat{D}_d^n) + \widehat{D}_d^n)$ (16), where $\lambda^n$ is an interpolation weight calculated by a logistic sigmoid function: $\lambda^n = \mathrm{sigmoid}(D_d^n, R_d^n)$ (17).", "As seen, the output of the self-attention layer $C_d^n$ serves as a query to attend the outputs of both encoders (Equations 5 and 14), and the outputs of both attention models $\{D_d^n, R_d^n\}$ are combined via a gated sum (Equation 15), which is subsequently fed to the feed-forward layer (Equation 16).", "Stack We can also arrange the sub-layers in a stack (Figure 4b),", "in which the outputs of both encoders are attended sequentially: $R_d^n = \mathrm{LN}(\mathrm{ATT}(D_d^n, H_r^N) + D_d^n)$ (18) and $H_d^n = \mathrm{LN}(\mathrm{FFN}(R_d^n) + R_d^n)$ (19).", "The decoder first attends to the output of the Transformer encoder, and the attention output $D_d^n$ serves as the query to attend to the output of the recurrence encoder (Equation 18).",
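A sketch of the gated sum combination (Equations 15 and 17); Equation 17 does not spell out how the sigmoid over the pair (D, R) is parameterized, so a linear layer over the concatenation of the two attention outputs is assumed here.

```python
import torch
import torch.nn as nn

class GatedSum(nn.Module):
    """Sketch of Equations 15 and 17: interpolate the attention outputs
    over the two encoders with a learned sigmoid gate."""
    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, 1)

    def forward(self, d: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
        # d: attention over the Transformer encoder output (D_d^n)
        # r: attention over the recurrence encoder output (R_d^n)
        lam = torch.sigmoid(self.gate(torch.cat([d, r], dim=-1)))  # Eq. 17
        return lam * d + (1.0 - lam) * r                           # Eq. 15
```

The stack strategy needs no extra parameters: it simply reuses the standard attention sub-layer twice, with D feeding the second attention as the query (Equation 18).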
"The introduced recurrence encoder provides an additional computation path ranging from the input sequence to the output sequence.", "Chung et al. (2017) and Shen et al. (2019) have shown that a shortcut for gradient back-propagation benefits language modeling.", "Inspired by them, we use a shorter path to transfer the learned recurrence.", "We call this the short-cut effect.", "Among all the model variants, we implement the shortest path as follows: the recurrence encoder is a single layer and its output is fed only to the top decoder layer, while the first $N-1$ decoder layers perform the same as in the standard Transformer (i.e., Equations 4-6).", "Accordingly, the computation path is $E_{in} \rightarrow H_r \rightarrow R_d^N \rightarrow H_d^N$; the decoder then uses $H_d^N$ to make a target word prediction.", "It is much simpler than that of the conventional Transformer, which transfers information learned from input sequences across multiple stacked encoder and decoder layers.", "We expect it to outperform its deeper counterparts, such as a multiple-layer recurrence encoder or feeding the output of the recurrence encoder to all the decoder layers.", "Improving Transformer Encoder From the perspective of representation learning, there has been an increasing amount of work on improving the representation power of the SAN encoder.", "Bawden et al. (2018) and Voita et al. (2018) exploit external context for the SAN encoder, while Yang et al. (2019) leverage the intermediate representations to contextualize the transformations in SAN.", "A number of recent efforts have explored ways to improve multi-head SAN by encouraging individual attention heads to extract distinct information (Strubell et al., 2018; Li et al., 2018).", "Concerning the multi-layer SAN encoder, Dou et al. (2018, 2019) and Wang et al. (2018) propose to aggregate the multi-layer representations, and Dehghani et al. (2019) recurrently refine these representations.", "Our approach is complementary to theirs, since they focus on improving the representation power of the SAN encoder, while we aim to complement the SAN encoder with an additional recurrence encoder.", "Along the direction of modeling recurrence for SAN, Vaswani et al. (2017) and Shaw et al. (2018) inject absolute and relative positional encodings, respectively, to incorporate position information.", "Shen et al. (2018) introduce a directional self-attention network (DiSAN), which allows each token to attend to previous (or following) tokens only.", "Both studies verify the necessity of modeling recurrence for SAN.", "We reimplemented these approaches on top of Transformer, and experimental results show that our approach outperforms them by explicitly augmenting Transformer with an additional recurrence encoder.", "It should be emphasized that our approach is complementary to theirs, and combining them together is expected to further improve performance, which we leave for future work.", "Closely related to our work, Chen et al. (2018) propose to combine the SAN encoder with an additional RNN encoder.", "The main differences between our work and theirs are: 1) we enhance the state-of-the-art Transformer with recurrence information, while Chen et al. (2018) augment RNN-based models with an SAN encoder.", "To this end, we propose a novel attentive recurrent network to implement the additional recurrence encoder in Transformer.", "We re-implemented the approach proposed by Chen et al. (2018) on top of Transformer.", "Experimental results indicate the superiority of our approach, which confirms our claim.", "In addition, we elaborately design the integration strategy to effectively feed the recurrence information to the decoder, and empirically show that the proposed model benefits from the short-cut effect.", "Comparison to Reviewer Network The attentive recurrent network is inspired by the reviewer network, which was proposed by Yang et al. (2016) for the image caption generation task.", "There are two key differences which reflect how we have generalized from the original model.", "First, we perform attention steps over the source embeddings instead of the encoder representations.", "The main reason is that the Transformer encoder is implemented as multiple layers, and higher layers generally encode global information, as indicated by Peters et al. (2018).", "Second, we feed the feature vectors together with the original encoder representations to the decoder.", "In image caption generation, the source side (i.e., the image) contains much more information than the target side (i.e., the caption) (Tu et al., 2017).",
"Therefore, they aim at learning a compact and abstractive representation from the source information, which serves as the only input to the decoder.", "In this work, we focus on leveraging the attention model to better learn the recurrence, which we expect to complement the Transformer model.", "In our preliminary experiments, attending over the encoder representations does not improve performance, while feeding only the feature vectors to the decoder seriously harms performance.", "We conducted experiments on the widely used WMT14 English-to-German (4.6M sentence pairs, En-De) and WMT17 Chinese-to-English (20.6M sentence pairs, Zh-En) translation tasks.", "All the data had been tokenized and segmented into subword symbols using byte-pair encoding (Sennrich et al., 2016) with 32K merge operations.", "We used the case-sensitive NIST BLEU score (Papineni et al., 2002) as the evaluation metric, and bootstrap resampling (Koehn et al., 2003) for the statistical significance test.", "We implemented the proposed approaches on top of the Transformer model (Vaswani et al., 2017).", "Both in our model and in the related model of Subsection 5.3, the RNN is implemented with GRU (Cho et al., 2014) for a fair comparison.", "We followed the configurations in Vaswani et al. (2017), and reproduced their reported results on the En-De task.", "We initialized the parameters of the proposed models with the pre-trained baseline model.", "We have tested both Base and Big models, which differ in hidden size (512 vs. 1024), filter size (2048 vs. 4096), and number of attention heads (8 vs. 16).", "In consideration of computation cost, we studied model variations with the Base model on the En-De task, and evaluated overall performance with both Base and Big models on the En-De and Zh-En translation tasks.", "In this subsection, we conducted ablation studies to evaluate the different implementations of the proposed model, e.g., the recurrence encoder and the integration strategy, under the proposed architecture.", "Effect of Recurrence Modeling We first investigated the effect of recurrence encoder implementations, as listed in Table 1. We observed that introducing an additional recurrence encoder improves translation performance in all cases.", "(Footnote 2: https://github.com/rsennrich/subword-nmt)", "Among all model variations, BIARN outperforms its BIRNN counterpart.", "Concerning the BIARN models, reducing the number of layers consistently improves performance.", "Specifically, the 1-Layer BIARN achieves the best performance in both translation quality and training speed.", "This confirms the claim that the proposed approach benefits from a short-cut on gradient back-propagation.", "Accordingly, we adopted the 1-Layer BIARN as the default setting in the following experiments.", "Effect of Integration Strategies We then tested the effect of different integration strategies, as shown in Table 2. We have two observations.",
"First, feeding only to the top decoder layer consistently outperforms feeding to all decoder layers, across the different integration strategies.", "This empirically reconfirms the short-cut effect.", "Second, the stack strategy marginally outperforms its gated sum counterpart.", "Therefore, in the following experiments, we adopted the Stack + Top model in Table 2 as the default setting.", "Performances across Languages Finally, we evaluated the proposed approach on the widely used WMT17 Zh-En and WMT14 En-De data, as listed in Table 3.", "To make the evaluation convincing, we reviewed the prior reported systems, and built strong", "baselines which outperform the reported results on the same data.", "As seen in Table 3, modeling recurrence consistently improves translation performance across model variations (BASE and BIG models) and language pairs (Zh-En and En-De), demonstrating the effectiveness and universality of our approach.", "Comparison with Previous Work In order to directly compare our approach with the previous work on modeling recurrence, we re-implemented their approaches on top of TRANSFORMER-BASE on the WMT14 En-De translation task.", "For relative position encoding, we used unique edge representations per layer and head with clipping distance $k = 16$.", "For the DiSAN strategy, we applied a mask to the TRANSFORMER encoder, which constrains the SAN to focus on forward or backward elements.", "For the multi-column encoder, we re-implemented the additional encoder with six RNN layers.", "Table 4 lists the results.", "As seen, all the recurrence-enhanced approaches achieve improvements over the baseline model TRANSFORMER-BASE, which demonstrates the necessity of modeling recurrence for TRANSFORMER.", "Among these approaches, our approach (i.e., the 1-Layer BIARN encoder) achieves the best performance.", "Effect of Recurrent Steps To verify the recurrence effect on the proposed model, we conducted experiments with different recurrent steps on the single-layer BIARN model.", "As shown in Figure 5, the BLEU score typically goes up with the increase of the recurrent steps, while the trend does not hold when $T > 8$.", "This finding is consistent with Yang et al. (2016), which indicates that conducting too many recurrent steps fails to generate a compact representation.",
"This is exactly one of the ARN's strengths.", "Linguistic Analyses In this section, we conducted 10 probing tasks to study what linguistic properties are captured by the encoders (Conneau et al., 2018).", "A probing task is a classification problem that focuses on simple linguistic properties of sentences.", "'SeLen' predicts the length of a sentence in terms of its number of words.", "'WC' tests whether it is possible to recover information about the original words of a sentence given its sentence embedding.", "'TrDep' checks whether an encoder infers the hierarchical structure of sentences.", "In the 'ToCo' task, sentences should be classified in terms of the sequence of top constituents immediately below the sentence node.", "'BShif' tests whether two consecutive tokens within the sentence have been inverted.", "'Tense' asks for the tense of the main-clause verb.", "'SubN' focuses on the number of the main clause's subject.", "'ObjN' tests for the number of the direct object of the main clause.", "In 'SoMo', some sentences are modified by replacing a random noun or verb with another one, and the classifier should tell whether a sentence has been modified.", "'CoIn' contains sentences made of two coordinate clauses.", "In half of the sentences the order of the clauses is inverted, and the task is to tell whether a sentence is intact or modified.", "We used the pre-trained encoders of the model variations in Table 1 to generate the sentence representations of the input, which are used to carry out the probing tasks.", "For the TRANSFORMER-BASE model, the mean of the encoder top-layer representations is used as the sentence representation.", "For the proposed models, which have two encoders, two sentence representations are generated in the same way as in the base model.", "To make full use of the learned representations, we combined these two sentence representations via a gate as the final sentence representation to conduct the experiments.", "Table 5 lists the results.", "Clearly, the proposed models significantly improve the classification accuracies, although there are still considerable differences among the variants.", "More specifically, concerning surface properties, among the ARN variants the multi-layer ARN decreases the accuracies, while the 1-layer ARN consistently improves them.", "Considering the related results presented in Table 1 (Rows 3-5), we believe that ARN benefits from the shallow structure.", "ARN tends to capture deeper linguistic properties, both syntactic and semantic.", "In particular, among these probing tasks, the 'TrDep' and 'ToCo' tasks are related to syntactic structure modeling.", "As expected, TRANSFORMER augmented with an additional encoder outperforms the baseline model, which demonstrates that the proposed models successfully model the syntactic structure.",
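The text does not fully specify the gate used to merge the two sentence representations before probing; the following is one plausible minimal sketch under that assumption, with a sigmoid gate over the concatenation of the two vectors.

```python
import torch
import torch.nn as nn

class ProbingGate(nn.Module):
    """Hypothetical sketch of the gate merging the self-attention and
    recurrence encoder sentence representations before probing."""
    def __init__(self, d: int):
        super().__init__()
        self.gate = nn.Linear(2 * d, d)

    def forward(self, u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([u, v], dim=-1)))
        return g * u + (1.0 - g) * v
```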
"Experimental results on two language pairs show that the proposed model achieves significant improvements over the baseline TRANSFORMER .", "Linguistic analyses on probing tasks further show that our model indeed generates more informative representations, especially representative on syntactic structure features.", "Ziyi Dou, Zhaopeng Tu, Xing Wang, Shuming Shi, and Tong Zhang.", "2018.", "Exploiting deep representations for neural machine translation.", "In EMNLP .", "Ziyi Dou, Zhaopeng Tu, Xing Wang, Longyue Wang, Shuming Shi, and Tong Zhang.", "2019.", "Dynamic layer aggregation for neural machine translation.", "In AAAI .", "Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Feder-mann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, et al. 2018.", "Achieving human parity on automatic chinese to english news translation.", "arXiv preprint arXiv:1803.05567 .", "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.", "2016.", "Deep residual learning for image recognition.", "In CVPR .", "Sepp Hochreiter and Jurgen Schmidhuber.", "1997.", "Long short-term memory.", "Neural computation , 9(8):17351780.", "Philipp Koehn, Franz Josef Och, and Daniel Marcu.", "2003.", "Statistical phrase-based translation.", "In ACL .", "Jian Li, Zhaopeng Tu, Baosong Yang, Michael R. Lyu, and Tong Zhang.", "2018.", "Multi-Head Attention with Disagreement Regularization.", "In EMNLP .", "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu.", "2002.", "Bleu: a method for automatic evaluation of machine translation.", "In ACL .", "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer.", "2018.", "Deep contextualized word representations.", "In NAACL .", "Mike Schuster and Kuldip K Paliwal.", "1997.", "Bidirectional recurrent neural networks.", "IEEE Transactions on Signal Processing , 45(11):26732681.", "Rico Sennrich, Barry Haddow, and Alexandra Birch.", "2016.", "Neural machine translation of rare words with subword units.", "In ACL .", "Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani.", "2018.", "Self-Attention with Relative Position Representations.", "In NAACL .", "Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang.", "2018.", "DiSAN: directional self-attention network for RNN/CNN-free language understanding.", "In AAAI .", "Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville.", "2019.", "Ordered neurons: Integrating tree structures into recurrent neural networks.", "In ICLR .", "Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum.", "2018.", "Linguistically-Informed Self-Attention for Semantic Role Labeling.", "In EMNLP .", "Future work includes validating the proposed model in other tasks, such as reading comprehension, language inference, and sentence classification.", "Another promising direction is to directly augment Transformer encoder on recurrence modeling without the additional encoder.", "J.Z. was supported by the National Institute of General Medical Sciences of the National Institute of Health under award number R01GM126558.", "We thank the anonymous reviewers for their insightful comments." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "objective", "other", "other", "other", "other", "abstain", "abstain", "result", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "other", "method", "other", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "other", "other", "other", "other", "objective", "other", "other", "other", "objective", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "other", "abstain", "objective", "other", "abstain", "other", "other", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "We introduce a novel transition system for discontinuous constituency parsing.", "Instead of storing subtrees in a stack i.e. a data structure with linear-time sequential access the proposed system uses a set of parsing items, with constant-time random access .", "This change makes it possible to construct any discontinuous constituency tree in exactly 4 n 2 transitions for a sentence of length n , whereas existing systems need a quadratic number of transitions to derive some structures.", "At each parsing step, the parser considers every item in the set to be combined with a focus item and to construct a new constituent in a bottom-up fashion.", "The parsing strategy is based on the assumption that most syntactic structures can be parsed incrementally and that the set the memory of the parser remains reasonably small on average.", "Moreover, we introduce a dynamic oracle for the new transition system, and present the first experiments in discontinuous constituency parsing using a dynamic oracle.", "Our parser obtains state-of-the-art results on three English and German discontinuous treebanks.", "Discontinuous constituency trees extend standard constituency trees by allowing crossing branches to represent long distance dependencies, such as the wh -extraction in Figure 1. Discontinuous constituency trees can be seen as derivations of Linear Context-Free Rewriting Systems (LCFRS, Vijay-Shanker et al., 1987), a class of formal grammars more expressive than context-free grammars, which makes them much harder to parse.", "In particular, exact CKY-style LCFRS parsing has an O ( n 3 f ) time complexity where f is the fan-out of the grammar (Kallmeyer, 2010).", "A natural alternative to grammar-based chart parsing is transition-based parsing, that usually relies on fast approximate decoding methods such as greedy search or beam search.", "Transition-based discontinuous parsers construct discontinuous constituents by reordering terminals with the SWAP action (Versley, 2014a,b; Maier, 2015; Maier and Lichte, 2016; Stanojevic and Garrido Alhama, 2017), or by using a split stack and the GAP action to combine two non-adjacent constituents (Coavoux and Crabbe, 2017a; Coavoux et al., 2019).", "These proposals represent the memory of the parser (i.e. the tree fragments being constructed) with data structures with linear-time sequential access (either a stack, or a stack coupled with a double-ended queue).", "As a result, these systems need to perform at least n actions to construct a new constituent from two subtrees separated by n intervening subtrees.", "Our proposal aims at avoiding this cost when constructing discontinuous constituents.", "We design a novel transition system in which a discontinuous constituent is constructed in a single step, without the use of reordering actions such as SWAP .", "The main innovation is that the memory of the parser is not represented by a stack, Initial configuration ( , null , 0 , ) : 0 Goal configuration ( , { 0 , 1 , . . . 
"as is usual in shift-reduce systems, but by an unordered random-access set.", "The parser considers every constituent in the current memory to construct a new constituent in a bottom-up fashion, and thus instantly models interactions between parsing items that are not adjacent.", "As such, we describe a left-to-right parsing model that deviates from the standard stack-buffer setting, a legacy from pushdown automata and classical parsing algorithms for context-free grammars.", "Our contributions are summarized as follows: We design a novel transition system for discontinuous constituency parsing, based on a memory represented by a set of items, and that derives any tree in exactly $4n-2$ steps for a sentence of length $n$; we introduce the first dynamic oracle for discontinuous constituency parsing; we present an empirical evaluation of the transition system and dynamic oracle on two German and one English discontinuous treebanks.", "The code of our parser is released as an open-source project at https://gitlab.com/mcoavoux/discoparset.", "System overview We propose to represent the memory of the parser by", "(i) a set of parsing items and", "(ii) a single focus item.", "Figure 2 (lower part) illustrates a configuration in our system.", "The parser constructs a tree with two main actions: shift the next token to make it the new focus item (SHIFT), or combine any item in the set with the focus item to make a new constituent bottom-up (COMBINE action).", "Since the memory is not an ordered data structure, the parser considers equally every pending parsing item, and thus constructs a discontinuous constituent in a single step, thereby making it able to construct any discontinuous tree in $O(n)$ transitions.", "The use of an unordered random-access data structure to represent the memory of the parser also leads to a major change for the scoring system (Figure 2).", "Stack-based systems use a local view of a parsing configuration to extract features and score actions: features only rely on the few topmost elements on the stack and buffer.", "The score of each transition depends on the totality of this local view.", "In contrast, we consider equally every item in the set, and therefore rely on a global", "view of the memory (Section 3). [Table 2: an example derivation for the sentence So what's a parent to do?, with columns for the even action, the set S, the focus item s_f, the buffer, and the odd action.]", "However, we score each possible combination independently: the score of a single combination only depends on the two constituents that are combined, regardless of the rest of the set.", "Definitions We first define an instantiated (discontinuous) constituent $(X, s)$ as a nonterminal label $X$ associated with a set of token indexes $s$.", "We call $\min(s)$ the left-index of the constituent and $\max(s)$ its right-index.", "For example in Figure 1, the two VPs are respectively represented by (VP, {1, 6}) and (VP, {1, 5, 6}), and they have the same right-index (6) and left-index (1).", "A parsing configuration is a quadruple $(S, s_f, i, C)$ where: $S$ is a set of sets of indexes and represents the memory of the parser; $s_f$ is a set of indexes called the focus item, and satisfies $\max(s_f) = i - 1$; $i$ is the index of the next token in the buffer; $C$ is a set of instantiated constituents.",
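A minimal Python sketch of the set-based transition system of Table 1; step parity, preconditions, and scoring are omitted for brevity, and the data-structure choices (a list of frozensets for the unordered memory) are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Configuration:
    """A parsing configuration (S, s_f, i, C) from Table 1."""
    memory: list = field(default_factory=list)       # S: items (frozensets)
    focus: Optional[frozenset] = None                # s_f
    next_index: int = 0                              # i
    constituents: set = field(default_factory=set)   # C

def shift(c: Configuration) -> None:
    # Move the focus item into the memory; the next token becomes the focus.
    if c.focus is not None:
        c.memory.append(c.focus)
    c.focus = frozenset({c.next_index})
    c.next_index += 1

def combine(c: Configuration, k: int) -> None:
    # Merge the focus item with the k-th item of the (unordered) memory.
    c.focus = c.focus | c.memory.pop(k)

def label(c: Configuration, nonterminal: str) -> None:
    # LABEL-X: instantiate a constituent over the focus item's yield.
    c.constituents.add((nonterminal, c.focus))
```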
"Each new constituent is constructed bottom-up from the focus item and another item in the set $S$.", "Transition set Our proposed transition system is based on the following types of actions: SHIFT constructs a singleton containing the next token in the buffer and assigns it as the new focus item.", "The former focus item is added to $S$.", "COMBINE-$s$ computes the union of the focus item $s_f$ and another item $s$ from the set $S$, to form the new focus item $s \cup s_f$.", "LABEL-X instantiates a new constituent $(X, s_f)$ whose yield is the set of indexes in the focus item $s_f$.", "NO-LABEL has no effect; its semantics is that the current focus set is not a constituent.", "Following Cross and Huang (2016b), transitions are divided into structural actions (SHIFT, COMBINE-$s$) and labelling actions (LABEL-X, NO-LABEL).", "The parser may only perform a structural action on an even step and a labelling action on an odd step.", "For our system, this distinction has the crucial advantage of keeping the number of possible actions low at each parsing step, compared to a system that would perform a COMBINE action and a labelling action in a single REDUCE-$s$-X action.", "Table 1 presents each action as a deduction rule associated with preconditions.", "In Table 2, we describe how to derive the tree from Figure 1. Oracles Training a transition-based parser requires an oracle, i.e., a function that determines what the best action is in a specific parsing configuration, to serve as a training signal.", "We first describe a static oracle that provides a canonical derivation for a given gold tree.", "We then introduce a dynamic oracle that determines what the best action is in any parsing configuration.", "(Footnote 1: In such a case, we would need to score $|S| \times |N| + 1$ actions, where $N$ is the set of nonterminals, instead of $|S| + 1$ actions for our system.)", "Our transition system exhibits a fair amount of spurious ambiguity, the ambiguity exhibited by the existence of many possible derivations for a single tree.", "Indeed, since we use an unordered memory, an $n$-ary constituent (and more generally a tree) can be constructed by many different transition sequences.", "For example, the set {0, 1, 2} might be constructed by combining {0} and {1} first, and the result with {2}; or {1} and {2} first, and the result with {0}; or {0} and {2} first, and the result with {1}.", "Following Cohen et al. (2012), we eliminate spurious ambiguity by selecting a canonical derivation for a gold tree.", "In particular, we design the static oracle", "(i) to apply COMBINE as soon as possible in order to minimize the size of the memory, and", "(ii) to combine preferably with the most recent set in the memory when several combinations are possible.", "The first choice is motivated by properties of our system: when the memory is smaller, there are fewer choices, therefore decisions are simpler and less expensive to score.", "Parsers are usually trained to predict the gold sequence of actions, using a static oracle.", "The limitation of this method is that the parser only sees a tiny portion of the search space at train time and only trains on gold input (i.e., configurations obtained after performing gold actions).",
"At test time, it is in a different situation due to error propagation: it must predict what the best actions are in configurations from which the gold tree is probably no longer reachable.", "To alleviate this limitation, Goldberg and Nivre (2012) proposed to train a parser with a dynamic oracle, an oracle that is defined for any parsing configuration and outputs the set of best actions to perform.", "In contrast, a static oracle is deterministic and is only defined for gold configurations.", "Dynamic oracles were proposed for a wide range of dependency parsing transition systems (Goldberg and Nivre, 2013; Gomez-Rodriguez et al., 2014; Gomez-Rodriguez and Fernandez-Gonzalez, 2015), and later adapted to constituency parsing (Coavoux and Crabbe, 2016; Cross and Huang, 2016b; Fernandez-Gonzalez and Gomez-Rodriguez, 2018b,a).", "In the remainder of this section, we introduce a dynamic oracle for our proposed transition system.", "It can be seen as an extension of the oracle of Cross and Huang (2016b) to the case of discontinuous parsing.", "Preliminary definitions For a parsing configuration $c$, the relation $c \vdash c'$ holds iff $c'$ can be derived from $c$ by a single transition.", "We note $\vdash^*$ the reflexive and transitive closure of $\vdash$.", "An instantiated constituent $(X, s)$ is reachable from a configuration $c = (S, s_f, i, C)$ iff there exists $c' = (S', s'_f, i', C')$ such that $(X, s) \in C'$ and $c \vdash^* c'$.", "Similarly, a set of constituents $t$ (possibly a full discontinuous constituency tree) is reachable iff there exists a configuration $c' = (S', s'_f, i', C')$ such that $t \subseteq C'$ and $c \vdash^* c'$.", "We note $reach(c, t)$ the set of constituents that are", "(i) in the gold set of constituents $t$ and", "(ii) reachable from $c$.", "We define a total order $\preceq$ on index sets: $s \preceq s'$ iff $\max(s) < \max(s')$, or $\max(s) = \max(s')$ and $s \subseteq s'$.", "This order naturally extends to the constituents of a tree: $(X, s) \preceq (X', s')$ iff $s \preceq s'$.", "If $(X, s)$ precedes $(X', s')$, then $(X, s)$ must be constructed before $(X', s')$.", "Indeed, since the right-index of the focus item is non-decreasing during a derivation (as per the transition definitions), constituents are constructed in the order of their right-index (first condition).", "Moreover, since the algorithm is bottom-up, a constituent must be constructed before its parent (second condition).", "From a configuration $c = (S, s_f, i, C)$ at an odd step, a constituent $(X, s_g) \notin C$ is reachable iff both the following properties hold: 1. $\max(s_f) \leq \max(s_g)$; 2. for all $s \in S \cup \{s_f\}$, $(s \subseteq s_g)$ or $(s \cap s_g = \emptyset)$.", "Condition 1 is necessary because the parser can only construct new constituents $(X, s)$ such that $s_f \preceq s$.", "Condition 2 makes sure that $s_g$ can be constructed from a union of elements from $S \cup \{s_f\}$, potentially augmented with terminals from the buffer: $\{i, i+1, \ldots, \max(s_g)\}$.",
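A direct Python transcription of the two reachability conditions above; `memory`, `focus`, and `gold_yield` are assumed to be Python sets of token indexes, and function names are ours.

```python
def is_reachable(memory, focus, gold_yield) -> bool:
    """Check whether a gold constituent with yield `gold_yield` is still
    reachable from a configuration with memory S and focus item s_f."""
    # Condition 1: the focus item cannot extend past the gold right-index.
    if max(focus) > max(gold_yield):
        return False
    # Condition 2: every pending item must be either entirely inside the
    # gold yield or completely disjoint from it.
    for s in list(memory) + [focus]:
        if not (s <= gold_yield or not (s & gold_yield)):
            return False
    return True
```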
"Following Cross and Huang (2016b), we define $next(c, t)$ as the smallest reachable gold constituent from a configuration $c$.", "Formally: $next(c, t) = \operatorname{argmin}_{\preceq} reach(c, t)$.", "Oracle algorithm We first define the oracle $o$ for the odd step of a configuration $c = (S, s_f, i, C)$: $o_{odd}(c, t) = \{\mathrm{LABEL\text{-}X}\}$ if $(X, s_f) \in t$, and $\{\mathrm{NO\text{-}LABEL}\}$ otherwise.", "For even steps, assuming $next(c, t) = (X, s_g)$, we define the oracle as follows: $o_{even}(c, t) = \{\mathrm{COMB\text{-}}s \mid (s_f \cup s) \subseteq s_g\}$ if $\max(s_g) = \max(s_f)$, and $o_{even}(c, t) = \{\mathrm{COMB\text{-}}s \mid (s_f \cup s) \subseteq s_g\} \cup \{\mathrm{SHIFT}\}$ if $\max(s_g) > \max(s_f)$.", "We provide a proof of the correctness of the oracle in Appendix A. A Neural Network based on Constituent Boundaries We first describe an encoder that computes context-aware representations of tokens (Section 3.1).", "We then discuss how to compute the representation of a set of tokens (Section 3.2).", "We describe the action scorer (Section 3.3), the POS tagging component (Section 3.4), and the objective function (Section 3.5).", "As in recent proposals in dependency and constituency parsing (Cross and Huang, 2016a; Kiperwasser and Goldberg, 2016), our scoring system is based on a sentence transducer that constructs a context-aware representation for each token.", "Given a sequence of tokens $x_1^n = (x_1, \ldots, x_n)$, we first run a single-layer character bi-LSTM encoder $c$ to obtain a character-aware embedding $c(x_i)$ for each token.", "We represent a token $x_i$ as the concatenation of a standard word embedding $e(x_i)$ and the character-aware embedding: $w_{x_i} = [c(x_i); e(x_i)]$.", "Then, we run a 2-layer bi-LSTM transducer over the sequence of token representations: $(h_1^{(1)}, \ldots, h_n^{(1)}) = \text{bi-LSTM}(w_{x_1}, \ldots, w_{x_n})$ and $(h_1^{(2)}, \ldots, h_n^{(2)}) = \text{bi-LSTM}(h_1^{(1)}, \ldots, h_n^{(1)})$.", "The parser uses the context-aware token representations $h_i^{(2)}$ to construct vector representations of sets or constituents.", "An open issue in neural discontinuous parsing is the representation of discontinuous constituents.", "In projective constituency parsing, it has become standard to use the boundaries of constituents (Hall et al., 2014; Crabbe, 2015; Durrett and Klein, 2015), an approach that proved very successful with bi-LSTM token representations (Cross and Huang, 2016b; Stern et al., 2017).", "Although constituent boundary features improve discontinuous parsing (Coavoux and Crabbe, 2017a), relying only on the left-index and the right-index of a constituent has the limitation of ignoring gaps inside a constituent.", "For example, since the two VPs in Figure 1 have the same right-index and left-index, they would have the same representations.", "It may also happen that constituents with identical right-index and left-index do not have the same labels.", "We represent a (possibly partial) constituent with yield $s$ by computing 4 indexes from $s$: $(\min(s), \max(s), \min(\bar{s}), \max(\bar{s}))$.", "The set $\bar{s}$ represents the gap in $s$, i.e., the tokens between $\min(s)$ and $\max(s)$ that are not in the yield of $s$: $\bar{s} = \{ i \mid \min(s) < i < \max(s) \text{ and } i \notin s \}$.", "For an index set that does not contain a gap, we have $\bar{s} = \emptyset$.", "To handle this case, we use a parameter vector $h_{nil}$, randomly initialized and learned jointly with the network, to embed $\max(\emptyset) = \min(\emptyset) = nil$.", "For example, the constituents (VP, {1, 6}) and (VP, {1, 5, 6}) will be respectively vectorized as: $r(\{1, 6\}) = [h_1^{(2)}; h_6^{(2)}; h_2^{(2)}; h_5^{(2)}]$ and $r(\{1, 5, 6\}) = [h_1^{(2)}; h_6^{(2)}; h_2^{(2)}; h_4^{(2)}]$.", "This representation method makes sure that two distinct index sets have distinct representations, as long as they have at most one gap each.", "This property no longer holds if an index set has more than one gap.",
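A minimal sketch of the boundary representation r(s), including the gap indexes; `h` stands for the matrix of h^(2) vectors and `h_nil` for the learned empty-gap embedding. For s = {1, 6} this returns [h_1; h_6; h_2; h_5], matching the example above. Function and argument names are ours.

```python
import torch

def span_representation(s: set, h: torch.Tensor,
                        h_nil: torch.Tensor) -> torch.Tensor:
    """Concatenate the bi-LSTM states at (min(s), max(s), min(gap),
    max(gap)); h has shape (seq_len, d)."""
    gap = {i for i in range(min(s), max(s)) if i not in s}
    gap_left = h[min(gap)] if gap else h_nil
    gap_right = h[max(gap)] if gap else h_nil
    return torch.cat([h[min(s)], h[max(s)], gap_left, gap_right], dim=-1)
```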
"For each type of action, structural or labelling, we use a feedforward network with two hidden layers.", "Structural actions At structural steps, for a configuration $c = (S, s_f, i, C)$, we need to compute the score of $|S|$ COMBINE actions and possibly a SHIFT action.", "In our approach, the score of a COMBINE-$s$ action only depends on $s$ and $s_f$, and is independent of the rest of the configuration (i.e., other items in the set).", "We first construct the input matrix $M$ as follows: $M = \begin{pmatrix} r(s_1) & \cdots & r(s_n) & r(\{i\}) \\ r(s_f) & \cdots & r(s_f) & r(s_f) \end{pmatrix}$.", "Each of the first $n$ columns of matrix $M$ represents the input for a COMBINE action, whereas the last column is the input for the SHIFT action.", "We then compute the score of each structural action: $P(\cdot \mid c) = \mathrm{Softmax}(\mathrm{FF}_s(M))$, where $\mathrm{FF}_s$ is a feedforward network with two hidden layers, a tanh activation, and a single output unit.", "In other words, it outputs a single scalar for each column vector of matrix $M$.", "This part of the network can be seen as an attention mechanism, where the focus item is the query, and the context is formed by the items in the set and the first element in the buffer.", "Labelling actions We compute the probabilities of labelling actions as follows: $P(\cdot \mid s_f) = \mathrm{Softmax}(\mathrm{FF}_l(r(s_f)))$, where $\mathrm{FF}_l$ is a feedforward network with two hidden layers activated with the tanh function, and $|N| + 1$ output units, where $N$ is the set of nonterminals.",
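A PyTorch sketch of the structural action scorer: each column of M pairs one candidate item with the focus item, FF_s maps each pair to a scalar, and a softmax normalizes over the |S|+1 candidates. Layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class StructuralScorer(nn.Module):
    """Sketch of P(.|c) = Softmax(FF_s(M)) over |S|+1 structural actions."""
    def __init__(self, d_repr: int, d_hidden: int):
        super().__init__()
        self.ff = nn.Sequential(
            nn.Linear(2 * d_repr, d_hidden), nn.Tanh(),
            nn.Linear(d_hidden, d_hidden), nn.Tanh(),
            nn.Linear(d_hidden, 1))

    def forward(self, candidates: torch.Tensor,
                focus: torch.Tensor) -> torch.Tensor:
        # candidates: (|S|+1, d_repr), i.e. r(s_1)..r(s_n) and r({i})
        # focus: (d_repr,), i.e. r(s_f), tiled to pair with each candidate
        m = torch.cat(
            [candidates, focus.unsqueeze(0).expand(candidates.size(0), -1)],
            dim=-1)
        return torch.softmax(self.ff(m).squeeze(-1), dim=-1)
```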
"Following Coavoux and Crabbe (2017b), we use the first layer of the bi-LSTM transducer as input to a Part-of-Speech (POS) tagger that is learned jointly with the parser.", "For a sentence $x_1^n$, we compute the probability of a sequence of POS tags $t_1^n = (t_1, \ldots, t_n)$ as follows: $P(t_1^n \mid x_1^n) = \prod_{i=1}^{n} \mathrm{Softmax}(W^{(t)} h_i^{(1)} + b^{(t)})_{t_i}$, where $W^{(t)}$ and $b^{(t)}$ are parameters.", "In the static oracle setting, for a single sentence $x_1^n$, we optimize the sum of the log-likelihood of the gold POS tags $t_1^n$ and the log-likelihood of the gold parsing actions $a_1^{4n-2}$:", "$\mathcal{L} = \mathcal{L}_t + \mathcal{L}_p$, with $\mathcal{L}_t = \sum_{i=1}^{n} \log P(t_i \mid x_1^n)$ and $\mathcal{L}_p = \sum_{i=1}^{4n-2} \log P(a_i \mid a_1^{i-1}, x_1^n)$.", "We optimize this objective by alternating a stochastic step for the tagging objective and a stochastic step for the parsing objective, as is standard in multitask learning (Caruana, 1997).", "In the dynamic oracle setting, instead of optimizing the likelihood of the gold actions (assuming all previous actions were gold), we optimize the likelihood of the best actions, as computed by the dynamic oracle, from a configuration sampled from the space of all possible configurations.", "In practice, before each epoch, we sample each sentence from the training corpus with probability $p$ and we use the current (non-averaged) parameters to parse the sentence and generate a sequence of configurations.", "Instead of selecting the highest-scoring action at each parsing step, as in a normal inference step, we sample an action using the softmax distribution computed by the parser, as done by Ballesteros et al. (2016).", "Then, we use the dynamic oracle to calculate the best action from each of these configurations.", "In case there are several best actions, we deterministically choose a single action by favoring a COMBINE over a SHIFT (to bias the model towards a small memory), and by combining with the item with the highest right-index (to avoid spurious discontinuity in partial constituents).", "We train the parser on these sequences of potentially non-gold configuration-action pairs.",
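A sketch of the joint objective for one sentence, written as a summed negative log-likelihood as is standard when minimizing; in the dynamic oracle setting, `gold_actions` would instead be the oracle-chosen actions for the sampled configurations. Names and the per-step loop are our assumptions.

```python
import torch
import torch.nn.functional as F

def joint_loss(tag_logits, gold_tags, action_scores, gold_actions):
    """L = L_t + L_p for one sentence. tag_logits: (n, n_tags);
    action_scores: a list of 4n-2 score vectors whose sizes vary with
    the memory size at each step; gold_actions: list of action indexes."""
    l_t = F.cross_entropy(tag_logits, gold_tags, reduction="sum")
    l_p = sum(F.cross_entropy(scores.unsqueeze(0), torch.tensor([a]))
              for scores, a in zip(action_scores, gold_actions))
    return l_t + l_p
```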
"We report two evaluation metrics: a standard Fscore (F) and an Fscore computed only on discontinuous constituents (Disc. F), which provides a more qualitative evaluation of the ability of the parser to recover long distance dependencies.", "2 https://github.com/andreasvc/ disco-dop 4.3 Results Effect of Dynamic Oracle We present parsing results on the development sets of each corpus in Table 3. The effect of the oracle is in line with other published results in projective constituency parsing (Coavoux and Crabbe, 2016; Cross and Huang, 2016b) and dependency parsing (Goldberg and Nivre, 2012; Gomez-Rodrguez et al., 2014): the dynamic oracle improves the generalization capability of the parser.", "External comparisons In Table 4, we compare our parser to other transition-based parsers (Maier, 2015; Coavoux and Crabbe, 2017a; Stanojevic and Garrido Alhama, 2017; Coavoux et al., 2019), the pseudo-projective parser of Versley (2016), grammar-based chart parsers (Evang and Kallmeyer, 2011; van Cranenburgh et al., 2016; Gebhardt, 2018) and parsers based on dependency parsing (Fern andez-Gonz alez and Martins, 2015; Corro et al., 2017).", "Note that some of them only report results in a gold POS tag setting (the parser has access to gold POS tags and use them as fea-tures), a setting that is much easier than ours.", "Our parser matches the state of the art of Coavoux et al. (2019).", "This promising result shows that it is feasible to design accurate transition systems without an ordered memory.", "Our transition system derives a tree for a sentence of n words in exactly 4 n 2 transitions.", "Indeed, there must be n SHIFT actions, and n 1 COMBINE actions.", "Each of these 2 n 1 transitions must be followed by a single labelling action.", "The statistical model responsible for choosing which action to perform at each parsing step needs to score | S | + 1 actions for a structural step and | N | +1 actions for a labelling step (where N is the set of possible nonterminals).", "Since in the worst case, | S | contains n 1 singletons, the parser has an O ( n ( | N | + n )) time complexity.", "In practice, the memory of the parser S remains relatively small on average.", "We report in Figure 3 the distribution of the size of S across configurations when parsing the development sets of three corpora.", "For the German treebanks, the memory contains 7 or fewer elements for more than 99 percents of configurations.", "For the Penn treebank, the memory is slighlty larger, with 98 percents of configuration with 11 or fewer items.", "We report empirical runtimes in Table 6 of Appendix C. 
"Existing transition systems for discontinuous constituency parsing rely on three main strategies for constructing discontinuous constituents: a", "swap-based strategy, a split-stack strategy, and the use of non-local transitions.", "Swap-based systems Swap-based transition systems are based on the idea that any discontinuous constituency tree can be transformed into a projective tree by reordering terminals.", "They reorder terminals by swapping them with a dedicated action (SWAP), commonly used in dependency parsing (Nivre, 2009).", "The first proposals in transition-based discontinuous constituency parsing used the SWAP action on top of an easy-first parser (Versley, 2014a,b).", "Subsequent proposals relied on a shift-reduce system (Maier, 2015; Maier and Lichte, 2016) or a shift-promote-adjoin system (Stanojevic and Garrido Alhama, 2017).", "The main limitation of swap-based systems is that they tend to require a large number of transitions to derive certain trees.", "The choice of an oracle that minimizes derivation lengths has a substantially positive effect on parsing (Maier and Lichte, 2016; Stanojevic and Garrido Alhama, 2017).", "Split-stack systems The second parsing strategy constructs discontinuous constituents by allowing the parser to reduce pairs of items that are not adjacent in the stack.", "In practice, Coavoux and Crabbe (2017a) split the usual stack of shift-reduce parsers into two data structures (a stack and a double-ended queue), in order to give the parser access to two focus items: the respective tops of the stack and the deque, which may or may not be adjacent.", "A dedicated action, GAP, pushes the top of the stack onto the bottom of the queue to make the next item in the stack available for a reduction.", "The split stack associated with the GAP action can be interpreted as a linear-access memory: it is possible to access the $i$th element in the stack, but it requires $i$ operations.", "Non-local transitions Non-local transitions generalize standard parsing actions to non-adjacent elements in the parsing configuration.", "Maier and Lichte (2016) introduced a non-local transition SKIPSHIFT-$i$, which applies SHIFT to the $i$th element in the buffer.", "Non-local transitions are also widely used in non-projective dependency parsing (Attardi, 2006; Qi and Manning, 2017; Fernandez-Gonzalez and Gomez-Rodriguez, 2018).", "The key difference between these systems and ours is that we use an unordered memory.", "As a result, the semantics of the COMBINE-$s$ action we introduce in Section 2 is independent of any specific position in the stack or the buffer.", "A system with an action such as SKIPSHIFT-$i$ needs to learn parameters for every possible $i$, and will only learn parameters for the SKIPSHIFT-$i$ actions that are required to derive the training set.", "In contrast, we use the same parameters to score each possible COMBINE-$s$ action.", "We have presented a novel transition system that dispenses with the use of a stack, i.e., a memory with linear sequential access.",
a memory with linear sequential access.", "Instead, the memory of the parser is represented by an unordered data structure with random-access: a set.", "We have designed a dynamic oracle for the resulting system and shown their empirical potential with state-of-the-art results on discontinuous constituency parsing of one English and two German treebanks.", "Finally, we plan to adapt our system to non-projective dependency parsing and semantic graph parsing.", "We thank Caio Corro, Giorgio Satta, Marco Da-monte, as well as NAACL anonymous reviewers for feedback and suggestions.", "We gratefully acknowledge the support of Huawei Technologies." ]
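To make the 4n - 2 transition count from the parser discussion above concrete, here is a minimal, self-contained simulation of a stack-free derivation loop with an unordered, set-valued memory. It is a toy sketch, not the authors' implementation: the scoring callbacks, the tuple representation of constituents, and the trivial oracle are hypothetical stand-ins for the learned model.

```python
def parse(words, score_structural, score_label):
    """Derive a single constituent covering `words`.

    `score_structural(memory, buffer, focus)` returns "SHIFT" or an
    element of `memory` to COMBINE with; `score_label(item)` returns a
    nonterminal or None. Both stand in for the learned scoring model.
    """
    memory = set()                       # unordered random-access memory S
    buffer = list(words)
    focus = None                         # item produced by the last step
    trace = []
    for _ in range(2 * len(words) - 1):  # exactly 2n - 1 structural steps
        action = score_structural(memory, buffer, focus)
        if action == "SHIFT":
            if focus is not None:
                memory.add(focus)        # park the current item in S
            item = (buffer.pop(0),)
        else:                            # COMBINE with an element of S
            memory.discard(action)
            item = focus + action        # merge the two constituents
        trace.append(("SHIFT" if action == "SHIFT" else "COMBINE", item))
        label = score_label(item)        # each structural step is followed
        trace.append((f"LABEL-{label}" if label else "NO-LABEL", item))
        focus = item                     # by exactly one labelling step
    assert len(trace) == 4 * len(words) - 2   # 4n - 2 transitions in total
    return focus, trace

# A trivial oracle that always combines when the memory is non-empty:
oracle = lambda memory, buffer, focus: next(iter(memory)) if memory else "SHIFT"
tree, trace = parse(["a", "b", "c", "d"], oracle, lambda item: "X")
print(len(trace))  # 14, i.e. 4 * 4 - 2
```

With this trivial oracle the derived constituent happens to be contiguous; discontinuity arises precisely because COMBINE may pick any element of the memory, not just the most recently added one.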
[ "objective", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "other", "abstain", "abstain", "abstain", "other", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "other", "objective", "objective", "abstain", "result", "abstain", "other", "other" ]
[ "The current state-of-the-art generative models for open-domain question answering (ODQA) have focused on generating direct answers from unstructured textual information.", "However, a large amount of world's knowledge is stored in structured databases, and need to be accessed using query languages such as SQL.", "Furthermore, query languages can answer questions that require complex reasoning, as well as offering full explainability.", "In this paper, we propose a hybrid framework that takes both textual and tabular evidence as input and generates either direct answers or SQL queries depending on which form could better answer the question.", "The generated SQL queries can then be executed on the associated databases to obtain the final answers.", "To the best of our knowledge, this is the first paper that applies Text2SQL to ODQA tasks.", "Empirically, we demonstrate that on several ODQA datasets, the hybrid methods consistently outperforms the baseline models that only take homogeneous input by a large margin.", "Specifically we achieve state-of-the-art performance on OpenSQuAD dataset using a T5base model.", "In a detailed analysis, we demonstrate that the being able to generate structural SQL queries can always bring gains, especially for those questions that requires complex reasoning.", "Open-domain question answering (ODQA) is a task to answer factoid questions without a pre-specified domain.", "Recently, generative models (Roberts et al., 2020; Lewis et al., 2020; Min et al., 2020; Izacard and Grave, 2020) have achieved the state-of-the-art performance on many ODQA tasks.", "These approaches all share the common pipeline where the first stage is retrieving evidence from the free-form text in Wikipedia.", "However, a large amount of world's knowledge is not stored as plain text but in structured databases, and need to be accessed using query languages such as SQL.", "Furthermore, query languages can answer questions that require complex reasoning, as well as offering full explainability.", "In practice, an ideal ODQA model should be able to retrieve evidence from both unstructured textual and structured tabular information sources, as some questions are better answered by tabular evidence from databases.", "For example, the current state-of-the-art ODQA models struggle on questions that involve aggregation operations such as counting or averaging.", "One line of research on accessing databases, although not open domain, is translating natural language questions into SQL queries (Zhong et al., 2017; Xu et al., 2017; Yu et al., 2018c; Guo et al., 2019; Wang et al., 2018a, 2020; Yu et al., 2018a; Guo and Gao, 2019; Choi et al., 2020).", "These methods all rely on knowing the associated table for each question in advance, and hence are not trivially applicable to the open-domain setting, where the relevant evidence might come from millions of tables.", "In this paper, we provide a solution to the aforementioned problem by empowering the current generative ODQA models with the Text2SQL ability.", "More specifically, we propose a dual reader-parser (DUREPA ) framework that can take both textual and tabular data as input, and generate either direct answers or SQL queries based on the context 1 .", "If the model chooses to generate a SQL query, we can then execute the query on the corresponding database to get the final answer.", "Overall, our framework consists of three stages: retrieval, joint ranking and dual reading-parsing.", "First we retrieve supporting candidates of both textual 
and tabular types, followed by a joint reranker that predicts how relevant each supporting candidate is to 1 Our code is available at https://github.com/ AlexanderYogurt/Hybrid-Open-QA the question, and finally we use a fusion-in-decoder model (Izacard and Grave, 2020) for our reader-parser, which takes all the reranked candidates in addition to the question to generate direct answers or SQL queries.", "To evaluate the effectiveness of our DUREPA , we construct a hybrid dataset that combines SQuAD (Rajpurkar et al., 2016) and WikiSQL (Zhong et al., 2017) questions.", "We also conduct experiments on NaturalQuestions (NQ) (Kwiatkowski et al., 2019) and OTT-QA (Chen et al., 2020a) to evaluate DuRePa performance.", "As textual and tabular open-domain knowledge, we used textual and tabular data from Wikipedia via Wikidumps (from Dec. 21, 2016) and Wikitables (Bhagavatula et al., 2015).", "We study the model performance on different kinds of questions, where some of them only need one supporting evidence type while others need both textual and tabular evidence.", "On all question types, DUREPA performs significantly better than baseline models that were trained on a single evidence type.", "We also demonstrate that DUREPA can generate human-interpretable SQLs that answer questions requiring complex reasoning, such as calculations and superlatives.", "Our highlighted contributions are as follows: We propose a multi-modal framework that incorporates hybrid knowledge sources with the Text2SQL ability for ODQA tasks.", "To the best of our knowledge, this is the first work that investigates Text2SQL in the ODQA setting.", "We propose a simple but effective generative approach that takes both textual and tabular evidence and generates either direct answers or SQL queries, automatically determined by the context.", "With that, we achieve the state-of-the-art performance on OpenSQuAD using a T5base model.", "We conduct comprehensive experiments to demonstrate the benefits of Text2SQL for ODQA tasks.", "We show that interpretable SQL generation can effectively answer questions that require complex reasoning in the ODQA setting.", "Open Domain Question Answering ODQA has been extensively studied recently including extractive models (Chen et al., 2017; Clark and Gardner, 2018; Wang et al., 2019; Min et al., 2019; Yang et al., 2019) that predict spans from evidence passages, and generative models (Raffel et al., 2020;", "Roberts et al., 2020; Min et al., 2020; Lewis et al., 2020; Izacard and Grave, 2020) that directly generate the answers.", "Wang et al. (2018b,c); Nogueira and Cho (2019) proposed to rerank the retrieved passages to get higher top-n recall.", "Table Parsing Text2SQL is a task to translate natural questions to executable SQL queries.", "Brad et al. 
(2017) proposed the SENLIDB dataset, which contains only 29 tables and lacks annotations in its training set.", "Recently, with datasets like WikiSQL (Zhong et al., 2017), Spider (Yu et al., 2018c) and CoSQL (Yu et al., 2019) being introduced, many works have shown promising progress on these datasets (Yu et al., 2018b; He et al., 2019; Hwang et al., 2019; Min et al., 2019; Wang et al., 2020; Choi et al., 2020; Guo et al., 2019; Lyu et al., 2020; Zhang et al., 2019; Zhong et al., 2020; Shi et al., 2020).", "Another line of work proposes to reason over tables without generating logical forms (Neelakantan et al., 2015; Lu et al., 2016; Herzig et al., 2020; Yin et al., 2020).", "However, they are all closed-domain and each question is given the associated table.", "Hybrid QA: Chen et al. (2020a) also proposed an open-domain QA problem with textual and tabular evidence.", "Unlike our problem, they generate an answer directly from the tabular evidence instead of generating a SQL query.", "In addition, they assume some contextual information about the table is available during the retrieval stage (e.g., their fusion retriever is pretrained using hyperlinks between tables and paragraphs), whereas we do not use any link information between tables and passages.", "Moreover, Chen et al. (2020b) proposed a closed-domain hybrid QA dataset where each table is linked to on average 44 passages.", "Different from ours, their purpose is to study multi-hop reasoning over both forms of information, and each question is still given the associated table.", "In this section, we describe our method for hybrid open-domain question answering.", "It mainly consists of three components: (1) a retrieval system; (2) a joint reranker; and (3) a dual Seq2Seq model that uses fusion-in-decoder (Izacard and Grave, 2020) to generate direct answers or SQL queries.", "For the hybrid open-domain setting, we build two separate search indices: one for textual input and one for tabular input.", "For paragraphs, we split them into passages of at most 100 words.", "For tables, we flatten each table into passages by concatenating cell values along each row.", "If the flattened table exceeds 100 words, we split it into separate passages, respecting row boundaries.", "The column headers are concatenated to each tabular passage (a code sketch of this flattening follows this paper's text).", "Some examples of flattened tables are given in Appendix A.1.", "Given a natural language question, the retrieval system retrieves 100 textual and 100 tabular passages as the support candidates from the textual and tabular indices, respectively, using the BM25 (Robertson et al., 1995) ranking function.", "The purpose of our reranking model is to produce a score s_i indicating how relevant a candidate (either an unstructured passage or a table) is to a question.", "Specifically, the reranker input is the concatenation of the question, a retrieved candidate's content, and its corresponding title if available, separated by the special tokens shown in Figure 1.", "The candidate content can be either unstructured text or a flattened table. (Wikipedia passages have page titles, and tables have table titles.)", "We use the BERT-base model in this paper.", "Following Nogueira and Cho (2019), we finetune the BERT (Devlin et al., 2019) model using the following loss: $L = -\sum_{i \in I_{\text{pos}}} \log(s_i) - \sum_{i \in I_{\text{neg}}} \log(1 - s_i)$ (a minimal implementation sketch follows below).", "The set I_pos is sampled from all relevant BM25 candidates, and the set I_neg is sampled from all non-relevant BM25 candidates.", "Different from Nogueira and Cho (2019), during training, for each question, we
sample 64 candidates, including one positive candidate and 63 negative candidates; that is, |I_pos| = 1 and |I_neg| = 63.", "If none of the 200 candidates is relevant, we skip the question.", "During inference, we use the hybrid reranker to assign a score to each of the 200 candidates, and choose the top 50 candidates as the input to the next module, the reader-parser model.", "The top 50 candidates are chosen from the joint pool of all candidates, according to the scores assigned by the reranker.", "Our dual reader-parser model is based on the fusion-in-decoder (FiD) proposed in Izacard and Grave (2020), and is initialized using the pretrained T5 (Raffel et al., 2020) model.", "The overall pipeline of the reader-parser is shown in Figure 1.", "Each retrieved candidate is represented by its title and content, in the following formats: Textual Candidate: We represent each textual candidate as the concatenation of the passage title and content, marked by the special tokens [text title] and [text content], respectively.", "Tabular Candidate: In order to represent a structured table as a passage, we first flatten each table into the following format: each flattened table starts with the complete header names and is then followed by its rows.", "Figure 1 presents an example of this conversion.", "Finally, a tabular candidate is the concatenation of the table title and content flattened as a passage, marked by the special tokens [table title] and [table content], respectively.", "We use the table ID as the title so that it can be copied into the generated SQL queries by the model.", "Prefix of the Targets: During training, we prepend the special prefix answer: or sql: to a target sentence depending on whether it is plain text or a SQL query.", "For those questions that have both textual answer and SQL query annotations (for example, WikiSQL questions), we create two training examples for each question.", "During inference, the generated outputs will also contain these two special prefixes, indicating which output type the model has generated.", "Dual Reader-Parser: Our generative Seq2Seq model has reader-parser duality.", "During inference, the model reads the question and all the candidates, and produces k outputs using beam search.", "Each output can be either a final answer or an intermediate SQL query.", "Depending on the context, the types and order of the outputs are automatically determined by the model itself.", "All the generated SQL queries will then be executed to produce the final answers.", "In this paper, we fix k = 3 and always generate three outputs for each question.", "In this section, we report the performance of the proposed method on several hybrid open-domain QA datasets.", "In this section, we describe all the datasets we use in our experiments.", "First, we summarize the statistics of the open-domain QA datasets we use in Table 1.", "OpenSQuAD is an open-domain QA dataset constructed from the original SQuAD-v1.1 (Rajpurkar et al., 2016), which was designed for the reading comprehension task, consisting of 100,000+ questions posed by annotators on a set of Wikipedia articles, where the answer to each question is a span from the corresponding paragraph.", "OpenNQ is an open-domain QA dataset constructed from NaturalQuestions (Kwiatkowski et al., 2019), which was designed for the end-to-end question answering task.", "The questions were from real Google search queries and the answers were from Wikipedia articles annotated by humans.", "OTT-QA (Chen et al., 2020a) is a
large-scale open table-and-text question answering dataset for evaluating open QA over both tabular and textual data.", "The questions were constructed through de-contextualization from HybridQA (Chen et al., 2020b), with an additional 2,200 new questions mainly used in the dev/test sets.", "OTT-QA also provides its own corpus, which contains over 5 million passages and around 400k tables.", "OpenWikiSQL is an open-domain Text2SQL QA dataset constructed from the original WikiSQL (Zhong et al., 2017).", "WikiSQL is a dataset of 80,654 annotated questions and SQL queries distributed across 24,241 tables from Wikipedia.", "Mix-SQuWiki is the union of the OpenSQuAD and OpenWikiSQL datasets.", "WikiSQL-both is a subset of the OpenWikiSQL evaluation data that contains the questions that can be answered by both textual and tabular evidence.", "The purpose of this dataset is to study whether, when both types of evidence can answer a question, the hybrid model can still choose the better one.", "We select these questions in a weakly supervised way by only keeping a question if the ground-truth answer is contained in both textual and tabular BM25 candidates.", "Table 2 (comparison to the state of the art on open-domain QA datasets; EM on OpenSQuAD / OpenNQ / OTT-QA / OpenWikiSQL): FiD (T5-base), text-only: 53.4 / 48.2 / - / -; FiD (T5-large), text-only: 56.7 / 51.4 / - / -; IR+CR, text+table w/o SQL: - / - / 14.4 / -; FR+CR, text+table w/o SQL: - / - / 28.1 / -; Unified Model, text+NQ-table w/o SQL: - / 54.6 / - / -; ours: FiD+, text-only: 56.4 / 45.2 / 14.5 / 13.9; FiD+, table-only w/o SQL: 2.5 / 14.3 / 4.1 / 30.3; DUREPA, table-only with SQL: 2.7 / 14.8 / 4.7 / 40.2; FiD+, text+table w/o SQL: 56.4 / 46.7 / 15.0 / 30.9; DUREPA, text+table with SQL: 57.0 / 48.0 / 15.8 / 42.6.", "For example, in Figure 1, the answer Richard Marquand can be found in both types of passages.", "We filter out some trivial cases where the answer shows up in more than half of the candidates.", "Wikipedia Passages and Tables: For the textual evidence, we process the Wikipedia 2016 dump and split the articles into overlapping passages of 100 words, following Wang et al. (2019).", "To create the tabular evidence, we combine 1.6M Wikipedia tables (Bhagavatula et al., 2015) and all the 24,241 WikiSQL tables, and flatten and split each table into passages not exceeding 100 words, in the same format mentioned in the previous section.", "We use these two collections as the evidence sources for all the QA datasets except for OTT-QA, where we use its own textual and tabular collections.", "Retriever and Reranker.", "We conduct BM25 retrieval using Elasticsearch 7.7 with the default settings.", "We use a BERT reranker initialized with the pretrained BERT-base-uncased model.", "Dual Reader and Parser with fusion-in-decoder.", "Similar to Izacard and Grave (2020), we initialize the fusion-in-decoder with the pretrained T5 model (Raffel et al., 2020).", "We only explore the T5-base model in this paper, which has 220M parameters.", "For both the reranker and FiD models, we use the Adam optimizer (Kingma and Ba, 2014) with a maximum learning rate of 1e-4 and a dropout rate of 10%.", "The learning rate linearly warms up to 1e-4 and then linearly anneals to zero.", "We train models for 10k gradient steps with a batch size of 32, and save a checkpoint every 1k steps.", "For the FiD model, when there are multiple answers for one question, we randomly sample one answer from the list.", "For the FiD model, during inference, we generate 3 answers for each question using beam search with beam size 3.", "We present the end-to-end results
on the open-domain QA task compared with the baseline methods, as shown in Table 2.", "We build models with five different settings based on the source evidence modality as well as the format of the model prediction.", "Specifically, we consider single-modality settings with only textual or only tabular evidence, and the hybrid setting with both textual and tabular evidence available.", "For tabular evidence, the models either predict direct answer text or generate structured SQL queries.", "Note that we also consider a baseline model, FiD+, a FiD model that only generates direct answer text but can make use of both textual and tabular evidence.", "Chen et al. (2020a) use a fusion retriever to retrieve table-passage blocks as evidence.", "To construct the fusion blocks, they train a GPT-2 model using extra hyperlink information to link table cells to passages.", "In contrast, we do not use any hyperlink information.", "Oguz et al. (2020) use tables provided in the NQ training data (fewer than 500k in total), whereas we use all the tables extracted from Wikipedia dumps (around 1.6M in total).", "First, in the single-modality setting, we observe that for the OpenSQuAD, OpenNQ and OTT-QA datasets, the textual QA model performs significantly better than the tabular QA models, while for OpenWikiSQL it is the opposite.", "This is expected due to the nature of the construction process of those datasets.", "In the hybrid setting, the hybrid models outperform single-modality models consistently across all these datasets.", "This indicates that hybrid models are more robust and flexible when dealing with questions of various types in practice.", "Comparing DUREPA with FiD+, we observe that the ability to generate structured queries is always beneficial, even for extractive questions like SQuAD and NQ.", "For WikiSQL-type questions, the gain from SQL generation is significant.", "On the OpenSQuAD dataset, our DUREPA model using hybrid evidence achieves a new state-of-the-art EM score of 57.0.", "It is worth noting that the previous best score was attained by FiD using the T5-large model, while our model uses T5-base, which has far fewer parameters.", "On the NQ dataset, FiD+ with text-only evidence has a lower EM score than FiD-base, despite having the same underlying model and inputs.", "We suspect that this is because (1) we truncate all passages to at most 150 word pieces, while the FiD paper keeps 250 word pieces, so the actual input (top-100 passages) to our FiD model is much smaller than in the FiD paper; and (2) we use BM25 to retrieve the initial pool of candidates instead of a trained embedding-based neural retrieval model (Karpukhin et al., 2020; Izacard and Grave, 2020).", "Nevertheless, the DUREPA model with hybrid evidence still improves the EM by 2.8 points compared to FiD+ using only text inputs.", "On OTT-QA questions, our full model also outperforms the IR+CR baseline by 1.4 points.", "The FR+CR model uses a different setting, with hyperlinks between tables and passages used to train the fusion retriever (FR), so the result is not directly comparable to ours.", "We provide more analysis on OTT-QA in the Appendix.", "On the OpenWikiSQL dataset, enabling SQL generation brings an improvement of more than 10 EM points.", "This is because many questions therein require complex reasoning like COUNT, AVERAGE or SUM on the table evidence.", "We provide more in-depth analysis in Section 5.2, including some complex reasoning examples in Table 7.", "In this section, we investigate
the performance of the BM25 retriever and the BERT reranker using top-k recalls as our evaluation metric.", "During both training and inference, for each question, the textual and tabular passages are reranked jointly using a single reranker.", "On the Mix-SQuWiki dataset, we report the reranking results on SQuAD questions in Table 3.", "The results on WikiSQL questions are in Table 9 in the Appendix.", "To provide better insight into the reranker's performance, we show the top-k recalls on textual, tabular and hybrid evidence separately.", "From Table 3, on both textual and tabular candidates, recall@25 of the reranker is even higher than recall@100 of the BM25 retriever.", "This suggests that during inference, instead of providing 100 BM25 candidates to the fusion-in-decoder (FiD), only 25 reranked candidates would suffice.", "In Tables 9 and 10 in the Appendix, we observe a similar trend, with top-25 recalls comparable to top-100 recalls on both WikiSQL and NQ questions.", "Finally, across all datasets, the recalls on hybrid inputs are almost the same as or even better than the best recalls on individual textual or tabular inputs, meaning that the reranker is able to jointly rank both types of candidates and provide better evidence to the next component, the dual reader-parser.", "SQL prediction helps with complex reasoning.", "In Table 4, we compare the top-1 EM execution accuracy of DUREPA and FiD+ on OpenWikiSQL.", "If DUREPA generates a SQL query, we execute it to obtain the answer prediction.", "If the ground-truth answer is a list (e.g., What are the names of Simpsons episodes aired in 2008?), we use set-equivalence to evaluate accuracy.", "DUREPA outperforms FiD+ on the test set in most of the settings.", "We also compare their performance under a breakdown of different categories based on the ground-truth SQL query.", "DUREPA achieved close to 3x and 5x improvements on WikiSQL questions that have superlative (MAX/MIN) and calculation (SUM/AVG) operations, respectively.", "For COUNT queries, FiD+ often predicted either 0 or 1.", "Thus, these results support our hypothesis that SQL generation helps with complex reasoning and explainability for tabular question answering.", "Using hybrid evidence types leads to better performance.", "Shown in Table 5 is the model performance on the Mix-SQuWiki questions.", "Among the baseline models that use only a single evidence type, the best top-1 EM is 34.0, achieved by FiD+ using only textual candidates.", "However, if we use both evidence types, the hybrid model DUREPA attains a significantly better top-1 EM of 47.9, which implies that including both textual and tabular evidence leads to better model performance on Mix-SQuWiki.", "Furthermore, we observe that DUREPA has a better top-1 EM compared to FiD+, suggesting that the answers to some of these questions need to be obtained by executing SQL queries rather than generated directly.", "In Table 7, we sample some questions on which DUREPA predicts the correct answers but FiD+ fails.", "What if the questions can be answered by both textual and tabular evidence?", "Table 6 shows the model performance on the WikiSQL-both dataset.", "Recall that all the questions in this dataset can be answered by both types of evidence.", "First of all, the DUREPA model using tabular evidence performs better than the FiD+ model using textual evidence.", "This implies that on WikiSQL questions, using tabular information leads to better answers.", "Next, when using only one type of
evidence, both the DUREPA and FiD+ models perform significantly worse than their hybrid counterparts.", "This indicates that the hybrid model can again figure out which evidence type should be used to provide the correct final answer.", "Our experiments consistently show that the proposed framework DUREPA brings significant improvements in answering questions using hybrid types of evidence.", "Especially on the questions that can be answered by both supporting evidence types, our multi-modal method still shows a clear advantage over models using single-type knowledge, implying that our approach can identify the most relevant evidence to answer a question.", "We also demonstrate that the dual reader-parser is essential to the good performance of DUREPA; the ability to generate both direct answers and structured SQL queries helps DUREPA perform much better than FiD+ and other baselines on questions that require complex reasoning like counting or averaging.", "We believe that our methods can be improved in two aspects.", "First, our general framework (Fig. 1) can be improved by a better retrieval system.", "For example, instead of using BM25, we can use more powerful neural retrieval models (Karpukhin et al., 2020).", "On the hybrid evidence, one can also use an entity linking module to link the entities between the tables and passages (Chen et al., 2020a) and utilize the structure information for better multi-hop reasoning.", "Second, the ability to generate SQL queries is a very powerful and necessary feature for answering questions that require complex reasoning.", "Table 5 (detailed results on the Mix-SQuWiki dataset under various settings; % of SQL answers, accuracy of SQL answers, % of direct answers, accuracy of direct answers, overall EM): FiD+, text-only: 0.0 / - / 100.0 / 34.0 / 34.0; FiD+, table-only w/o SQL: 0.0 / - / 100.0 / 19.3 / 19.3; DUREPA, table-only with SQL: 53.9 / 42.5 / 46.1 / 8.4 / 26.8; FiD+, text+table w/o SQL: 0.0 / - / 100.0 / 40.0 / 40.0; DUREPA, text+table with SQL: 33.5 / 44.1 / 66.5 / 49.8 / 47.9.", "Given the limited Text2SQL data and the difficulty of obtaining such SQL supervision, two interesting directions for future work are (1) obtaining SQL annotations more efficiently and (2) adapting weakly supervised approaches like discrete EM (Min et al., 2019) for model training." ]
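The table flattening used to build tabular candidates in the paper above can be sketched as follows. This is an illustrative reconstruction, not the released implementation: the function name, the separators, and the simplification of counting only cell words (not header words) toward the 100-word limit are assumptions.

```python
def flatten_table(table_id, headers, rows, max_words=100):
    """Linearize a table into passages of at most ~max_words words,
    splitting only at row boundaries and prepending the headers (and the
    table ID, used as the title) to every passage."""
    header_text = " | ".join(headers)

    def word_count(chunk_rows):
        return sum(len(cell.split()) for row in chunk_rows for cell in row)

    passages, current = [], []
    for row in rows:
        # Start a new passage when adding this row would exceed the budget.
        if current and word_count(current + [row]) > max_words:
            passages.append(current)
            current = []
        current.append(row)
    if current:
        passages.append(current)

    # Emit candidates in the special-token layout used for reader input.
    return [
        "[table title] {} [table content] {} . {}".format(
            table_id,
            header_text,
            " . ".join(" | ".join(row) for row in chunk),
        )
        for chunk in passages
    ]

print(flatten_table(
    "films_1",
    ["Film", "Director", "Year"],
    [["Return of the Jedi", "Richard Marquand", "1983"]],
)[0])
```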
[ "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "objective", "method", "abstain", "other", "method", "method", "method", "method", "abstain", "objective", "objective", "objective", "objective", "result", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "other", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "method", "other", "abstain", "abstain", "abstain" ]
[ "It has been shown that machine translation models usually generate poor translations for named entities that are infrequent in the training corpus.", "Earlier named entity translation methods mainly focus on phonetic transliteration, which ignores the sentence context for translation and is limited in domain and language coverage.", "To address this limitation, we propose DEEP, a DE noising E ntity P retraining method that leverages large amounts of monolingual data and a knowledge base to improve named entity translation accuracy within sentences.", "Besides, we investigate a multi-task learning strategy that finetunes a pre-trained neural machine translation model on both entity-augmented monolingual data and parallel data to further improve entity translation.", "Experimental results on three language pairs demonstrate that DEEP results in significant improvements over strong denoising auto-encoding baselines, with a gain of up to 1.3 BLEU and up to 9.2 entity accuracy points for English-Russian translation.", "1 1 Introduction Proper translation of named entities is critically important for accurately conveying the content of text in a number of domains, such as news or encyclopedic text (Knight and Graehl, 1998; Al-Onaizan and Knight, 2002a,b).", "In addition, a growing number of new named entities (e.g., person name, location) appear every day, therefore many of these entities may not exist in the parallel data traditionally used to train MT systems.", "As a result, even state-of-the-art MT systems struggle with entity translation.", "For example, Laubli et al. (2020) note that a Chinese-English news translation system that had allegedly reached human parity still lagged far behind human translators on entity translations, and this prob-1 Code/data/models are released at https://github.", "Because of this, there have been a number of methods proposed specifically to address the problem of translating entities.", "As noted by Liu (2015), earlier studies on named entity translation largely focused on rule-based methods (Wan and Verspoor, 1998), statistical alignment methods (Huang et al., 2003, 2004) and Web mining methods (Huang et al., 2005; Wu and Chang, 2007; Yang et al., 2009).", "However, these methods have two main issues.", "First, as they generally translate a single named entity without any context in a sentence, it makes it di-cult to resolve ambiguity in entities using context.", "In addition, the translation of entities is often performed in a two-step process of entity recognition then translation, which complicates the translation pipeline and can result in cascading errors (Huang et al., 2003, 2004; Chen et al., 2013).", "In this paper, we focus on a simple yet eective method that improves named entity translation within context.", "Specifically, we do so by devising a data augmentation method that leverages two data sources: monolingual data from the target language and entity information from a knowledge base (KB).", "Our method also adopts a procedure of pre-training and finetuning neural machine translation (NMT) models that is used by many recent works (Luong and Manning, 2015; Neubig and Hu, 2018; Song et al., 2019; Liu et al., 2020).", "In particular, pre-training methods that use monolingual data to improve translation for low-resource and medium-resource languages mainly rely on a denoising auto-encoding objective that attempt to reconstruct parts of text (Song et al., 2019) or the whole sentences (Liu et al., 2020) from noised input sentences without 
particularly distinguishing named entities and other functional words in the sentences.", "In contrast, our method exploits an entity linker to identify entity spans in the monolingual sentences and link them to a KB that contains mul-1753 Krasnodar (Q3646) Language Label Description English Krasnodar capitalofKrasnodarregion(Krai)inSouthernRussia Russian , : : Language Label ...", "tilingual translations of these entities (such as Wikidata (Vrandei and Krtzsch, 2014)).", "We then generate noised sentences by replacing the extracted entity spans with their translations in the knowledge base and pre-train our NMT models to reconstruct the original sentences from the noised sentences.", "To further improve the entity translation accuracy and avoid forgetting the knowledge learned from pre-training, we also examine a multi-task learning strategy that finetunes the NMT model using both the denoising task on the monolingual data and the translation task on the parallel data.", "In the experiments on English-Russian, English-Ukrainian, and English-Nepali translations, DEEP outperforms the strong denoising auto-encoding baseline with respect to entity translation accuracy, and obtains comparable or slightly better overall translation accuracy as measured by BLEU.", "A fine-grained analysis shows that our multi-task finetuning strategy improves the translation accuracy of the entities that do not exist in the finetuning data.", "Given a set of monolingual text segments for pretraining, i.e., D , a sequence-to-sequence denoising auto-encoder is pre-trained to reconstruct a text segment from its noised version corrupted by a noise function () .", "Formally, the DAE objective is defined as follows: LDAE (D , ) = (cid:213) D log ( | ( ) ; ) , (1) where denotes the model's learnable parameters.", "For notation simplicity, we drop in the rest of the sections.", "This formulation encompasses several dierent previous works in data augmentation for MT, such as monolingual data copying (Currey et al., 2017), where () is the identity function, back translation (Sennrich et al., 2016), where () is a backwards translation model, as well as heuristic noise functions (Song et al., 2019; Lewis et al., 2020; Liu et al., 2020) that randomly sample noise according to manually devised heuristics.", "In particular, as our baseline we focus on the mBART method (Liu et al., 2020), a popular method with two types of heuristic noise functions being used sequentially on each text segment.", "The first noise function randomly masks spans of text in each sentence.", "Specifically, a span length is first randomly sampled from a Poisson distribution ( = 0 . 
35) and the beginning location for a span in is also randomly sampled.", "The selected span of text is replaced by a mask token.", "This process repeats until 35% of words in the sentence are masked.", "The second noise function is to permute the sentence order in each text segment with a probability.", "Our method adopts a procedure of pre-training and finetuning for neural machine translation.", "First, we apply an entity linker to identify entities in a monolingual corpus and link them to a knowledge base (3.1).", "We then utilize entity translations in the knowledge base to create noisy code-switched data for pre-training (3.2).", "Finally, we examine a multi-task learning strategy to further improve the translation of low-frequency entities (3.3).", "The goal of this part is to identify entities in each monolingual segment and obtain their translations.", "To this end, we use Wikidata (Vrandei and Krtzsch, 2014) a public multilingual knowledge base that covers 94M entities.", "2 Each entity is represented in surface forms from dierent languages in which a Wikipedia article exists.", "Therefore, linking an entity mention in a target-language segment to an entity in Wikidata allows us to obtain the multilingual translations of the entity, that is, , KB : = surface ( , KB ) , where denotes a set of multilingual surface forms of .", "We can define the translate operation as: = lookup ( , ) which simply looks for the surface form of in the source language .", "Note that this strategy relies on the fact that translations in higher-resource languages are included in , which we adopt by using English in our experiments.", "In general, however, does not universally cover all the languages of interest.", "For entity recognition and linking, we use SLING (Ringgaard et al., 2017), 3 which builds an entity linker for arbitrary languages available in Wikipedia.", "After obtaining entity translations from the KB, we attempt to explicitly incorporate these translations into the monolingual sentences for pre-training.", "To do so, we design an entity-based noise function that takes in a sentence and the KB, i.e., ( , KB ) .", "First, we replace all detected entity spans in the sentence by their translations from the KB: replace ( , KB ) = swap ( , , ) , (2) 2 Dump June 14, 2021.", "where the swap() function swaps occurrences of one entity span in with its translation in the source language.", "For example, in the second box of Figure 1, the named entities , and in Russian are replaced by Krasnodar, Saratov, and Ulyanovsk in English.", "After the replacement, we create a noised code-switched segment which explicitly includes the translations of named entities in the context of the target language.", "For some segments that contain fewer entities, their code-switched segments may be similar to them, which potentially results in a easier denoising task.", "Therefore, we further add noise to these code-switched segments.", "To do so, if the word count of the replaced entity spans is less than a fraction (35%) of the word count in the segment, we randomly mask the other non-entity words to ensure that about 35% of the words are either replaced or masked in the noised segment.", "Finally, we follow Liu et al. 
(2020) to randomly permute the sentence order in .", "We then train a sequence-to-sequence model to reconstruct the original sentence from its noised code-switched sentence as follows: LDEEP (D , KB ) = (cid:213) D log ( | ( , KB )) 3.3 Multi-task Finetuning After pre-training, we continue finetuning the pre-trained model on a parallel corpus ( , ) D for machine translation.", "To avoid forgetting the entity information learned from the pre-training stage, we examine a multitask learning strategy to train the model by both the pre-training objective on the monolingual data and the translation objective on the parallel data.", "Since monolingual segments are longer text sequences than sentences in D and the size of D is usually larger than that of D , simply concatenating both data for multi-task finetuning leads to bias toward denoising longer sequences rather than actually translating sentences.", "To balance the two tasks, in each epoch we randomly sample a subset of monolingual segments D (cid:48) from D , where the total subword count of D (cid:48) equals to that of D , i.e., (cid:205) D (cid:48) | | = (cid:205) ( , )D max (| | , | |) .", "We 1755 Lang.", "where the pre-training objective L Pre-train is either DAE or DEEP with DEEP having an additional input of a knowledge base.", "Notice that with the sampling strategy for the monolingual data, we double the batch size in the multi-task finetuning setting with respect to that in the single-task finetuning setting.", "Therefore, we make sure that the models are finetuned on the same amount of parallel data in both the single-task and multi-task settings, and the gains from the multi-task setting sorely come from the additional task on the monolingual data.", "To distinguish the tasks during finetuning, we replace the start token ( [BOS] ) in a source sentence or a noised segment by the corresponding task tokens for the translation or the denoising task ( [MT] , [DAE] or [DEEP] ).", "We initialize these task embeddings by the start token embedding and append them to the word embedding matrix of the encoder.", "Pre-training Data: We conduct our experiments on three language pairs: English-Russian, English-Ukrainian and English-Nepali.", "We use Wikipedia articles as the monolingual data for pre-training and report the data statistics in Table 1.", "We tokenize the text using the same sentencepiece model as Liu et al. (2020), and train on sequences of 512 subwords.", "Finetuning & Test Data: We use the news commentary data from the English-Russian translation task in WMT18 (Specia et al., 2018) for finetuning and evaluate the performance on the WMT18 test data from the news domain.", "For English-Ukrainian, we use the TED Talk transcripts from July 2020 in the OPUS repository (Tiedemann, 2012) for finetuning and testing.", "For English-Nepali translation, Lang.", "we use the FLORES dataset in Guzmn et al. (2019) and follow the paper's setting to finetune on parallel data in the OPUS repository.", "Table 2 shows the data statistics of the parallel data for finetuning.", "Notice that from the last four columns of Table 2, the entities in the pre-training data cover at least 87% of the entity types and 91% of the entity counts in both finetuning and test data except the En-Ne pair.", "Architecture: We use a standard sequence-to-sequence Transformer model (Vaswani et al., 2017) with 12 layers each for the encoder and decoder.", "We use a hidden unit size of 512 and 12 attention heads.", "Following Liu et al. 
(2020), we add an additional layer-normalization layer on top of both the encoder and decoder to stabilize training at FP16 precision.", "We use the same sentencepiece model and the vocabulary from Liu et al. (2020).", "Random MT : We include a comparison with a randomly initialized model without pre-training and finetune the model for each translation task.", "DAE MT : We pre-train a model by DAE using the two noise functions in Liu et al. (2020) and finetune the model for each translation task.", "DEEP MT : We pre-train a model using our proposed DEEP objective and finetune the model on the translation task.", "DAE DAE+MT : We pre-train a model by the DAE objective and finetune the model for both the DAE task and translation task.", "DEEP DEEP+MT : We pre-train a model by the DEEP objective and finetune the model for both the DEEP task and translation task.", "batch of 64 text segments, each of which has 512 subwords.", "We use the Adam optimizer ( =1e-6, 2 =0.98) and a polynomial learning rate decay scheduling with a maximum step at 500K.", "All models are pre-trained on one TPUv3 (128GB) for about 12 hours for 50K steps.", "4 We apply the noise function on the monolingual data on the fly for each epoch, and this takes only a few minutes by multiprocessing in Fairseq (Ott et al., 2019).", "We then reset the learning rate scheduler and continue finetuning our pre-trained models on the MT parallel data for 40K steps.", "Single-task (multi-task) finetuning takes about 16 (32) hours on 2 RTX 3090 GPUs.", "We set the maximum number of tokens in each batch to 65,536 in the single task setting and double the batch size in the multi-task setting to ensure that models in both settings are trained on an equal amount of parallel data, and thus any performance gain can only be attributed to monolingual data during finetuning.", "We use 2,500 warm-up steps to reach a maximum learning rate of 3e-5, and use 0.3 dropout and 0.2 label smoothing.", "After training, we use beam search with a beam size of 5 and report the results in sacreBLEU (Post, 2018) following the same evaluation in Liu et al. 
(2020).", "In Table 3, we compare all methods in terms of BLEU (Papineni et al., 2002) and chrF (Popovi, 2015) on the test data for three language pairs.", "First, we find that all pre-training methods significantly outperform the random baseline.", "In particular, our DEEP method obtains a gain of 3.5 BLEU points in the single task setting for the low-resource En-Ne translation.", "Second, we compute statistical significance of the BLEU and chrF scores with bootstrap resampling (Koehn, 2004), and we observe significant improvements with the multi-task finetuning strategy over the single-task finetuning for En-Ru and En-Ne.", "Our DEEP method outperforms the DAE method for En-Ru translation by 1.3 BLEU points in the multi-task setting.", "It is also worth noting that DEEP obtains higher BLEU points than DAE at the beginning of the multi-task finetuning process, however the gap between both methods decreases as the finetuning proceeds for longer steps (See Appendix A).", "One possible reason is that models trained by DEEP benefit from the entity trans-4 As we show in Figure 4, models pre-trained for 50K steps provide a reasonably good initialization.", "lations in the pre-training data and obtain a good initialization for translation at the beginning of the finetuning stage.", "As the multi-task finetuning proceeds, the models trained by both DAE and DEEP rely more on the translation task than the denoising task for translating a whole sentence.", "Thus the nuance of the entity translations might not be clearly evaluated according to BLEU or chrF.", "Since corpus-level metrics like BLEU or chrF might not necessarily reveal the subtlety of named entity translations, in the section we perform a fine-grained evaluation by the entity translation accuracy which counts the proportion of entities correctly translated in the hypotheses.", "Specifically, we first use SLING to extract entities for each pair of a reference and a hypothesis.", "We then count the translation accuracy of an entity as the proportion of correctly mentioning the right entity in the hypotheses, followed by macro-averaging to obtain the average entity translation accuracy.", "We also show the accuracy scores in Table 3.", "First, our method in both singleand multi-task settings significantly outperformed the other baselines.", "In particular, the gains from DEEP are much clear for the En-Uk and En-Ru translations.", "One possible reason is that Russian or Ukrainian entities extracted from the pre-training data have a relatively higher coverage of the entities in both the finetuning and test data as reported in Table 2.", "However, SLING might not detect as many entities in Nepali as in the other languages.", "We believe that future advances on entity linking in low-resource languages could potentially improve the performance of DEEP further.", "We leave this as our future work.", "In this section, we further analyze the eect on dierent categories of entities using our method.", "Performance of Entity Groups over Finetuning: The model is exposed to some entities more often than others at dierent stages: pre-training, finetuning and testing, which raises a question: how is the entity translation aected by the exposure during each stage?", "To answer this question, we divide the entities appearing in the test data into three groups: PFT : entities appearing in the pre-training, finetuning, and test data.", "FT", "We show the English-to-Russian entity translation accuracy scores for each group over finetuning steps in Figure 2.", 
"Overall, accuracies are higher for the entities that appear in the finetuning data ( PFT , FT ), which is due to the exposure to the finetuning data.", "Our proposed method consistently outperformed baseline counterparts in both singleand multi-task settings.", "The dierences in accuracy are particularly large at earlier finetuning steps, which indicates the utility of our method in lower-resource settings with little finetuning data.", "The eect of multi-task finetuning is most notable for entities in PT .", "Multi-task finetuning continuously exposes the model to the pre-training data, which as a result prevents the model from forgetting the learned entity translations from PT .", "Performance according to Entity Frequency: We further analyze the entity translation accuracy scores using entity frequencies in each group introduced above.", "This provides a more fine-grained perspective on how frequent or rare entities are translated .", "To do so, we take Russian hypotheses from a checkpoint with 40K steps of finetuning, bin the set of entities in three data ( i.e. PFT , PT , FT ) according to frequencies in each of the data.", "We then calculate the entity translation accuracy within each bin by comparing them against reference entities in the respective sentences.", "Figure 3 shows the accuracy gain of each pre-training methodologies from Random MT ( i.e. no pre-training) on test data, grouped by the entity frequency bins in pre-training and finetuning data.", "Note that leftmost column and the bottom row represent PT , FT , respectively.", "As observed earlier, the proposed method improves more over most frequency bins, with greater dierences on entities that are less frequent in finetuning data.", "This tendency is observed more significantly for the multi-task variant ( DEEP DEEP + MT ), where the gains are mostly from entities that never appeared in finetuning data ( i.e. leftmost column).", "Multi-task learning with DEEP therefore prevents the model from forgetting the entity translations learned at pre-training time.", "Analytical results on Ukrainian and Nepali are in Appendix B. 
1758 0 ( 1 , 2 ] ( 2 , 6 ] ( 6 , 23 ] ( 23 , 1878 ] Freq.", "Finetuning Data Size vs Entity Translation: While DEEP primarily focuses on a low-resource setting, the evaluation with more resources can highlight potential use in broader scenarios.", "To this end, we expand the finetuning data for English-Russian translation with an additional 4 million sentence pairs from ParaCrawl (Ban et al., 2020), a parallel data collected from web pages.", "Although web pages might contain news text, ParaCrawl data covers more general domains.", "We finetune models on the combined data and evaluate with BLEU and entity translation accuracy.", "Table 4 shows the comparisons across dierent finetuning data sizes.", "When the model is initialized with pre-training methods, we observed decreased BLEU points and increased entity translation accuracy scores.", "This is partly due to the discrepancy of domains between our finetuning data (news) and ParaCrawl.", "Regardless, DEEP is consistently equal to or better than DAE in all tested settings.", "Pre-training Steps vs Entity Translation: Since DEEP leverages entity-augmented monolingual data, the model trained by DEEP revisits more entities in dierent context as the pre-training 0 25 50 100 150 200 Pre-training Steps (x 1000) 15 16 17 18 19 BLEUBLEU Entity Translation Accuracy 30 32 34 36 38 40 E n t i t y T r a n s l a t i o n A cc u r a c y Figure 4: English-to-Russian BLEU and Entity translation accuracy scores after finetuning from variable pretraining steps.", "proceeds.", "To analyze the eciency of learning entity translation during pre-training, we focus on the question: how many pre-training steps are needed for named entity translation?", "To examine this question, we take the saved checkpoints trained by DEEP from various pre-training steps, and apply the single-task finetuning strategy on the checkpoints for another 40K steps.", "We plot the entity translation accuracy and BLEU on the test data in Figure 4.", "We find that the checkpoint at 25K steps has already achieved a comparable entity translation accuracy with respect to the checkpoint at 150K steps.", "This shows that DEEP is ecient to learn the entity translations as early as in 25K steps.", "Besides, both the BLEU and entity translation accuracy keep improving as the pre-training steps increase to 200K steps.", "In this section, we select two examples that contain entities appearing only in the pre-training and testing data.", "The first example contains three location names.", "We find that the model trained by the single-task DAE predicts the wrong places which provide the wrong information in the translated sentence.", "In addition, the model trained by the multi-task DAE just copies the English named entities (i.e., Krasnodar, Saratov and Ulyanovsk) to the target sentence without actual translation.", "In contrast, our method predicts the correct translation for Krasnodar in both single-task and multi-task setting, while the multi-task DEEP translates all entities correctly.", "In the second example, although our method in the single-task setting predicts wrong for all the entities, the model generates partially correct translations such as for and @-@ for -.", "Notice that DEEP in the multi-task setting translates the correct entities asphalt and Kras-noarmeyskiy which convey the key information in this sentence.", "In contrast, the translation produced by the multi-task DAE method literally means (Barnaul), (new) (myth) (at) Krasnoarmey Prospekt, (grow) Krasnoarmeski., which is 
incomprehensible due to the entity translation errors.", "Named Entity Translation has been extensively studied for decades (Arbabi et al., 1994; Knight and Graehl, 1998).", "Earlier studies focus on rule-based methods using phoneme or grapheme (Wan and Verspoor, 1998; Al-Onaizan and Knight, 2002b), statistical methods that align entities in parallel corpus (Huang et al., 2003, 2004; Zhang et al., 2005) and Web mining methods built on top of a search engine (Huang et al., 2005; Wu and Chang, 2007; Yang et al., 2009).", "Recently, Finch et al. (2016); Hadj Ameur et al. (2017); Grundkiewicz and Heafield (2018) used NMT to transliterate named entities without any sentence context .", "Another line of research (Ugawa et al., 2018; Li et al., 2018; Torregrosa et al., 2020; Modrzejewski et al., 2020; Zhou et al., 2020) only performs entity recognition and uses entity tags (e.g., person) which are not directly informative to the translation task, in contrast to the entity translations obtained by entity linking in our work.", "Besides, these methods modify model architecture to integrate entity tag embeddings or knowledge graph entity embeddings (Moussallem et al., 2019), which also require extracting entity information for both training and test data.", "In contrast, we focus on data augmentation methods to improve name entity translation within context , so our method is easily applicable to any architectures and test data without preprocessing.", "Pre-training of Neural Machine Translation has been shown eective by many recent works (Con-neau and Lample, 2019; Song et al., 2019; Liu et al., 2020; Lin et al., 2020), where dierent pre-training objectives are proposed to leverage monolingual data for translation.", "These methods adopt a denoising auto-encoding framework, which encompasses several dierent works in data augmentation on monolingual data for MT (Lambert et al., 2011; Currey et al., 2017; Sennrich et al., 2016; Hu et al., 2019).", "However, named entity translations during pre-training is under-explored.", "We fill this gap by integrating named entity recognition and linking to the pre-training of NMT.", "Moreover, while recent 1760 work shows that continue finetuning a pre-trained encoder with the pre-training objective improves language understanding tasks (Gururangan et al., 2020), this finetuning paradigm has not been explored for pre-training of a sequence-to-sequence model.", "Besides, previous works on multi-task learning for MT focus on language modeling (Gulcehre et al., 2015; Zhang and Zong, 2016; Domhan and Hieber, 2017; Zhou et al., 2019), while we examine a multi-task finetuning strategy with an entity-based denoising task in this work and demonstrate substantial improvements for named entity translations.", "In this paper, we propose an entity-based pretraining method for neural machine translation.", "Our method improves named entity translation accuracy as well as BLEU score over strong denoising auto-encoding baselines in both single-task and multi-task setting.", "Despite the eectiveness, several challenging questions remain open.", "First, recent works on integrating knowledge graphs (Zhao et al., 2020a,b) in NMT have shown promising results for translation.", "Our method links entities to a multilingual knowledge base which contains rich information of the entities such as entity description, relation links, and alias.", "How to leverage these richer data sources to resolve entity ambiguity deserves further investigation.", "Second, finetuning pre-trained models 
on in-domain text data is a potential way to improve entity translations across domains.", "This work was supported in part by a grant from the Singapore Defence Science and Technology Agency." ]
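The entity-based denoising pre-training summarized in the record above lends itself to a compact illustration. The sketch below is a reconstruction of the described idea, not the DEEP authors' code: the lexicon format, span format, mask token, and noise ratio are all assumptions.

```python
import random

def entity_denoise_example(tokens, entity_spans, entity_lexicon,
                           mask_token="<mask>", noise_ratio=0.35):
    """Build one (noised_input, target) pair for entity-based denoising.

    tokens         -- a monolingual sentence as a token list (the target)
    entity_spans   -- [(start, end, entity_id)] from NER + entity linking
    entity_lexicon -- hypothetical dict: entity_id -> translated tokens
                      (e.g., looked up in a multilingual knowledge base)
    The noised input swaps each linked entity mention for its translation
    and masks a fraction of the remaining tokens, DAE-style; the model is
    then trained to reconstruct the original sentence.
    """
    noised, last = [], 0
    for start, end, ent_id in sorted(entity_spans):
        # Mask non-entity tokens with probability noise_ratio.
        noised += [mask_token if random.random() < noise_ratio else t
                   for t in tokens[last:start]]
        # Swap the entity mention for its translated surface form.
        noised += entity_lexicon.get(ent_id, tokens[start:end])
        last = end
    noised += [mask_token if random.random() < noise_ratio else t
               for t in tokens[last:]]
    return noised, list(tokens)

# Example: a Russian sentence whose entity is linked to Wikidata id Q3646.
noised, target = entity_denoise_example(
    ["Краснодар", "красивый", "город", "."],
    [(0, 1, "Q3646")], {"Q3646": ["Krasnodar"]})
```

Because the entity surface forms in the noised input come from the knowledge base, the model sees each entity's translation in many different contexts as pre-training proceeds, which is the effect the pre-training-steps analysis above measures.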
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "objective", "abstain", "method", "result", "abstain", "result", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "result", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "result", "method", "abstain", "other", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "abstain", "other", "objective", "other", "other", "other", "abstain", "other", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
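The multi-task finetuning compared against DAE in the preceding record (translation plus an entity-based denoising task) can be pictured as alternating batches between the two objectives. This is a speculative sketch: the mixing schedule, the batch iterators, and the model.loss API are assumptions, not the paper's implementation.

```python
def multitask_finetune(model, mt_batches, denoise_batches, optimizer,
                       steps, denoise_every=2):
    """Alternate parallel-MT batches with entity-denoising batches.

    mt_batches, denoise_batches -- iterators yielding training batches
    denoise_every -- take one denoising batch every N steps (a guess;
                     the actual mixing ratio is not specified here)
    """
    for step in range(steps):
        batch = (next(denoise_batches) if step % denoise_every == 0
                 else next(mt_batches))
        loss = model.loss(batch)  # hypothetical seq2seq loss API
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```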
[ "Marcin Junczys-Dowmunt Microsoft [email protected]", "Roman Grundkiewicz University of Edinburgh [email protected]", "Shubha Guha University of Edinburgh [email protected]", "Kenneth Heafield University of Edinburgh [email protected]", "Previously, neural methods in grammatical error correction (GEC) did not reach state-of-the-art results compared to phrase-based statistical machine translation (SMT) baselines.", "We demonstrate parallels between neural GEC and low-resource neural MT and successfully adapt several methods from low-resource MT to neural GEC.", "We further establish guidelines for trustable results in neural GEC and propose a set of model-independent methods for neural GEC that can be easily applied in most GEC settings.", "Proposed methods include adding source-side noise, domain-adaptation techniques, a GEC-specific training-objective, transfer learning with monolingual data, and ensembling of independently trained GEC models and language models.", "The combined effects of these methods result in better than state-of-the-art neural GEC models that outperform previously best neural GEC systems by more than 10% M 2 on the CoNLL-2014 benchmark and 5.9% on the JFLEG test set.", "Non-neural state-of-the-art systems are outperformed by more than 2% on the CoNLL-2014 benchmark and by 4% on JFLEG.", "Most successful approaches to automated grammatical error correction (GEC) are based on methods from statistical machine translation (SMT), especially the phrase-based variant.", "For the CoNLL 2014 benchmark on grammatical error correction (Ng et al., 2014), Junczys-Dowmunt and Grundkiewicz (2016) established a set of methods for GEC by SMT that remain state-of-the-art.", "Systems (Chollampatt and Ng, 2017; Yannakoudakis et al., 2017) that improve on results by Junczys-Dowmunt and Grundkiewicz (2016) use their set-up as a backbone for more complex systems.", "The view that GEC can be approached as a machine translation problem by translating from erroneous to correct text originates from Brockett et al. (2006) and resulted in many systems (e.g. Felice et al., 2014; Susanto et al., 2014) that represented the current state-of-the-art at the time.", "In the field of machine translation proper, the emergence of neural sequence-to-sequence methods and their impressive results have lead to a paradigm shift away from phrase-based SMT towards neural machine translation (NMT).", "During WMT 2017 (Bojar et al., 2017) authors of pure phrase-based systems offered unconditional sur-render 1 to NMT-based methods.", "Based on these developments, one would expect to see a rise of state-of-the-art neural methods for GEC, but as Junczys-Dowmunt and Grundkiewicz (2016) already noted, this is not the case.", "Interestingly, even today, the top systems on established GEC benchmarks are still mostly phrase-based or hybrid systems (Chollampatt and Ng, 2017; Yannakoudakis et al., 2017; Napoles and Callison-Burch, 2017).", "The best pure neural systems (Ji et al., 2017; Sakaguchi et al., 2017; Schmaltz et al., 2017) are several percent behind.", "2 If we look at recent MT work with this in mind, we find one area where phrased-based SMT dominates over NMT: low-resource machine translation.", "Koehn and Knowles (2017) analyze the behavior of NMT versus SMT for English-Spanish systems trained on 0.4 million to 385.7 million words of parallel data, illustrated in Figure 1.", "Quality for NMT 1 Ding et al. 
(2017) on their news translation shared task poster http://www.cs.jhu.edu/huda/papers/ jhu-wmt-2017.pdf 2 After submission of this work, Chollampatt and Ng (2018) published impressive new results for neural GEC with some overlap with our methods.", "However, our results stay ahead on all benchmarks while using simpler models.", "starts low for small corpora, outperforms SMT at a corpus size of about 15 million words, and with increasing size beats SMT with a large in-domain language model.", "Table 1 lists existing training resources for the English as-a-second-language (ESL) grammatical error correction task.", "Publicly available resources, NUS Corpus of Learner English (NUCLE) by Dahlmeier et al. (2013), Lang-8 NAIST (Mizumoto et al., 2012) and CLC FCE (Yannakoudakis et al., 2011) amount to about 27M tokens.", "Among these the Lang-8 corpus is quite noisy and of low quality.", "The Cambridge Learner Corpus (CLC) by Nicholls (2003) probably the best resource in this list is non-public and we would strongly discourage reporting results that include it as training data as this makes comparisons difficult.", "Contrasting this with Fig. 1, we see that for about 20M tokens NMT systems start outperforming SMT models without additional large language models.", "Current state-of-the-art GEC systems based on SMT, however, all include large-scale in-domain language models either following the steps outlined in Junczys-Dowmunt and Grundkiewicz (2016) or directly re-using their domain-adapted Common-Crawl language model.", "It seems that the current state of neural methods in GEC reflects the behavior for NMT systems trained on smaller data sets.", "Based on this, we conclude that we can think of GEC as a low-resource, or at most mid-resource, machine translation problem.", "This means that techniques proposed for low-resource (neural) MT should be applicable to improving neural GEC results.", "In this work we show that adapting techniques from low-resource (neural) MT and SMT-based GEC methods allows neural GEC systems to catch up to and outperform SMT-based systems.", "We improve over the previously best-reported neural GEC system (Ji et al., 2017) on the CoNLL 2014 test set by more than 10% M 2 , over a comparable pure SMT system by Junczys-Dowmunt and Grundkiewicz (2016) by 6%, and outperform the state-of-the-art result of Chollampatt and Ng (2017) by 2%.", "On the JFLEG data set, we report the currently best results, outperforming the previously best pure neural system (Sakaguchi et al., 2017) by 5.9% GLEU and the best reported results (Chol-lampatt and Ng, 2017) by 3% GLEU.", "In Section 2, we describe our NMT-based baseline for GEC, and follow recommendations from the MT community for a trustable neural GEC system.", "In Section 3, we adapt neural models to make better use of sparse error-annotated data, transferring low-resource MT and GEC-specific SMT methods to neural GEC.", "This includes a novel training objective for GEC.", "We investigate how to leverage monolingual data for neural GEC by transfer learning in Section 4 and experiment with language model ensembling in Section 5.", "Section 6 explores deep NMT architectures.", "In Section 7, we provide an overview of the experiments and how results relate to the JFLEG benchmark.", "We also recommend a model-independent toolbox for neural GEC.", "In this section, we combine insights from Junczys-Dowmunt and Grundkiewicz (2016) for grammatical error correction by phrase-based statistical machine translation and from Denkowski and Neubig 
(2017) for trustable results in neural machine translation to propose a trustable baseline for neural grammatical error correction.", "To make our results comparable to state-of-the-art results in the field of GEC, we limit our training data strictly to public resources.", "In the case of error-annotated data, as marked in Table 1, these are the NUCLE (Dahlmeier et al., 2013) and Lang-8 NAIST (Mizumoto et al., 2012) data sets.", "We do not include the FCE corpus (Yannakoudakis et al., 2011) to maintain comparability to Junczys-Dowmunt and Grundkiewicz (2016) and Chollampatt and Ng (2017).", "We strongly urge the community to not use the non-public CLC corpus for training, unless contrastive results without this corpus are provided as well.", "We choose the CoNLL-2014 shared task test set (Ng et al., 2014) as our main benchmark and the test set from the 2013 edition of the shared task (Ng et al., 2013) as a development set.", "For these benchmarks we report MaxMatch (M 2 ) scores (Dahlmeier and Ng, 2012).", "Where appropriate, we will provide results on the JFLEG dev and test sets (Napoles et al., 2017) using the GLEU metric (Sakaguchi et al., 2016) to demonstrate the generality of our methods.", "Table 2 summarizes test/dev set statistics for both tasks.", "For most our experiments, we report M 2 on CoNLL-2013 test (Dev) and precision (Prec.), recall (Rec.), M 2 (Test) on the CoNLL-2014 test set.", "As both benchmarks, CoNLL and JFLEG, are provided in NLTK-style tokenization (Bird et al., 2009), we use the same tokenization scheme for our training data.", "We truecase line beginnings and escape special characters using scripts included with Moses (Koehn et al., 2007).", "Following Sakaguchi et al. (2017), we apply the Enchant 3 spell-checker to the JFLEG data before evaluation.", "No spell-checking is used for the CoNLL test sets.", "3 https://github.com/AbiWord/enchant", "large-vocabulary problem of NMT.", "This is a well established procedure in neural machine translation and has been demonstrated to be generally superior to UNK-replacement methods.", "It has been largely ignored in the field of grammatical error correction even when word segmentation issues have been explored (Ji et al., 2017; Schmaltz et al., 2017).", "To our knowledge, this is the first work to use BPE sub-words for GEC, however, an analysis on advantages of word versus sub-word or character level segmentation is beyond the scope of this paper.", "A set of 50,000 monolingual BPE units is trained on the error-annotated data and we segment training and test/dev data accordingly.", "Segmentation is reversed before evaluation.", "Implementations of all models explored in this work 4 are available in the Marian 5 toolkit (Junczys-Dowmunt et al., 2018).", "The attentional encoder-decoder model in Marian is a re-implementation of the NMT model in Nematus (Sennrich et al., 2017b).", "The model differs from the model introduced by Bahdanau et al. (2014) by several aspects, the most important being the conditional GRU with attention for which Sennrich et al. (2017b) provide a concise description.", "All embedding vectors consist of 512 units; the RNN states of 1024 units.", "The number of BPE segments determines the size of the vocabulary of our models, i.e. 
50,000 entries.", "Source and target side use the same vocabulary.", "To avoid overfitting, we use variational dropout (Gal and Ghahramani, 2016) over GRU steps and input embeddings with probability 0.2.", "We optimize with Adam (Kingma and Ba, 2014) with an average mini-batch size of ca.", "200.", "All models are trained until convergence (early-stopping with a patience of 10 based on development set cross-entropy cost), saving model checkpoints every 10,000 mini-batches.", "The best eight model checkpoints w.r.t. the development set M 2 score of each training run are averaged element-wise (Junczys-Dowmunt et al., 2016) resulting in a final single model.", "During decoding we use a beam-size of 24 and normalize model scores by length.", "6 4 Models, system configurations and outputs are available from https://github.com/grammatical/ neural-naacl2018 5 https://github.com/marian-nmt/marian 6 We used a larger beam-size than usual due to experiments with re-ranking of n-best lists not included in the paper.", "Junczys-Dowmunt and Grundkiewicz (2016) noticed that discriminative parameter tuning for GEC by phrase-based SMT leads to unstable M 2 results between tuning runs.", "This is a well-known effect for SMT parameter tuning and Clark et al. (2011) recommend reporting results for multiple tuning runs.", "Junczys-Dowmunt and Grundkiewicz (2016) perform four tuning runs and calculate parameter centroids following Cettolo et al. (2011).", "Neural sequence-to-sequence training is discriminative optimization and as such prone to instability.", "We already try to alleviate this by averaging over eight best checkpoints, but as seen in Table 3, results for M 2 remain unstable for runs with differently initialized weights.", "An amplitude of 3 points M 2 on the CoNLL-2014 test set is larger than most improvements reported in recent papers.", "None of the recent works on neural GEC account for instability, hence it is unclear if observed outcomes are actual improvements or lucky picks among byproducts of instability.", "We therefore strongly suggest to provide results for multiple independently trained models.", "Otherwise improvements of less than 2 or 3 points of M 2 remain doubtful.", "Interestingly, GLEU on the JFLEG data seems to be more stable than M 2 on CoNLL data.", "Running multiple experiments to provide averaged results seems prohibitively expensive, but Denkowski and Neubig (2017) and others (e.g. Sutskever et al., 2014; Sennrich et al., 2017a) show that ensembling of independently trained models leads to consistent rewards for MT. 
For our baseline in Table 3 the opposite seems to be true for M 2 .", "This is likely the reason why no other work on neural GEC mentions results for ensembles.", "On closer inspection, however, we see that the drop in M 2 for ensembles is due to a precision bias.", "M 2 being an F-score penalizes increasing distance between precision and recall.", "The increase in precision for ensembles is to be expected and we see it later consistently for all experiments.", "Ensembles choose corrections for which all independent models are fairly confident.", "This leads to fewer but better corrections, hence an increase in precision and a drop in recall.", "If the models are weak as our baseline, this can result in a lower score.", "It would, however, be unwise to dismiss ensembles, as we can use their bias towards precision to our advantage whenever they are combined with methods that aim to increase recall.", "This is true for nearly all remaining experiments.", "The methods described in this section turn our baseline into a more GEC-specific system.", "Most have been inspired by techniques from low-resource MT or closely related domain-adaptation techniques for NMT.", "All modifications are applied incrementally, later models include enhancements from the previous ones.", "GEC can be treated as a denoising task where grammatical errors are corruptions that have to be reduced.", "By introducing more corruption on the source side during training we can teach the model to reduce trust into the source input and to apply corrections more freely.", "Dropout is one way to introduce noise, but for now we only drop out single units in the embedding or GRU layers, something the model can easily recover from.", "To make the task harder, we add dropout over source words, setting the full embedding vector for a source word to 1 /p src with a probability of p src .", "During our experiments, we found p src = 0 .", "2 to work best.", "Table 4 show impressive gains for this simple method (+Dropout-Src.).", "Results for the ensemble match the previously best results on the CoNLL-2014 test set for pure neural systems (without the use of an additional monolingual language model) by Ji et al. (2017) and Schmaltz et al. (2017).", "The NUCLE corpus matches the domain of the CoNLL benchmarks perfectly.", "It is however much smaller than the Lang-8 corpus.", "A setting like this seems to be a good fit for domain-adaptation techniques.", "Sennrich et al. (2016a) oversample in-domain news data in a larger non-news training corpus.", "We do the same by adding the NUCLE corpus ten times to the training corpus.", "This can also be seen as similar to Junczys-Dowmunt and Grundkiewicz (2016) who tune phrase-based SMT parameters on the entire NUCLE corpus.", "Respectable improvements on both CoNLL test sets (+Domain-Adapt. 
in Table 4) are achieved.", "Junczys-Dowmunt and Grundkiewicz (2016) noticed that when tuning on the entire NUCLE corpus, even better results can be achieved if the error rate of NUCLE is adapted to the error rate of the original dev set.", "In NUCLE only 6% of tokens contain errors, while the CoNLL-2013 test set has an error rate of about 15%.", "Following Junczys-Dowmunt and Grundkiewicz (2016), we remove correct sentences from the ten-fold oversampled NUCLE data greedily until an error rate of 15% is achieved.", "This can be interpreted as a type of GEC-specific domain adaptation.", "We mark this method as +Error-Adapt.", "in Table 4 and report for the ensemble the so far strongest results for any neural GEC system on the CoNLL benchmark.", "Press and Wolf (2016) showed that parameter tying between input and output embeddings for language models leads to improved perplexity.", "Similarly, three-way weight-tying between source, target and output embeddings for neural machine translation seems to improve translation quality in terms of BLEU while also significantly decreasing the number of parameters in the model.", "In monolingual cases like GEC, where source and target vocabularies are (mostly) equal, embedding-tying seems to arise naturally.", "Output layer, decoder and encoder embeddings all share information, which may further enhance the signal from corrective edits.", "The M 2 scores for +Tied-Emb.", "in Table 4 are inconclusive, but we see improvements in conjunction with later modifications.", "Previously, we applied error-rate adaptation to strengthen the signal from corrective edits in the training data.", "In this section, we investigate the effects of directly modifying the training loss to incorporate weights for corrective edits.", "Assuming that each target token $y_j$ has been generated by a source token $x_i$, we scale the loss for each target token $y_j$ by a factor $\Lambda$ if $y_j$ differs from $x_i$, i.e. if $y_j$ is part of an edit.", "Hence, the log-likelihood loss takes the following form: $L(x, y, a) = \sum_{t=1}^{T_y} \lambda(x_{a_t}, y_t) \log P(y_t \mid x, y_{<t})$ with $\lambda(x_{a_t}, y_t) = \Lambda$ if $x_{a_t} \neq y_t$ and $1$ otherwise, where $(x, y)$ is a training sentence pair and $a$ is a word alignment with $a_t \in \{0, 1, \ldots, T_x\}$ such that source token $x_{a_t}$ generates target token $y_t$.", "Alignments are computed for each sentence pair with fast-align (Dyer et al., 2013).", "(Footnote 7: Output embeddings are encoded in the last output layer of a neural language or translation model.)", "This is comparable to reinforcement learning towards GLEU as introduced by Sakaguchi et al. (2017) or training against diffs by Schmaltz et al.
(2017).", "In combination with previous modifications, edit-weighted Maximum Likelihood Estimation (MLE) seems to outperform both methods.", "The parameter $\Lambda$ introduces an additional hyper-parameter that requires tuning for specific tasks and affects the precision/recall trade-off.", "Table 5 shows that $\Lambda = 3$ works best among the tested values when chosen to maximize M 2 on the CoNLL-2013 dev set.", "For this setting, we achieve our strongest results yet, 50.95 M 2 on the CoNLL benchmark (system +Edit-MLE).", "This outperforms the results of a phrase-based SMT system with a large domain-adapted language model from Junczys-Dowmunt and Grundkiewicz (2016) by 1% M 2 and is the first neural system to beat this strong SMT baseline.", "Many ideas in low-resource neural MT are rooted in transfer learning.", "In general, one first trains a neural model on high-resource data and then uses the resulting parameters to initialize a new model meant to be trained on low-resource data only.", "Various settings are possible, e.g. initializing from models trained on large out-of-domain data and continuing on in-domain data (Miceli Barone et al., 2017) or using related language pairs (Zoph et al., 2016).", "Models can also be partially initialized by pre-training monolingual language models (Ramachandran et al., 2017) or only word-embeddings (Gangi and Federico, 2017).", "In GEC, Yannakoudakis et al. (2017) apply pretrained monolingual word-embeddings as initializations for error-detection models to re-rank SMT n-best lists.", "Approaches based on pre-training with monolingual data appear to be particularly well-suited to the GEC task.", "Junczys-Dowmunt and Grundkiewicz (2016) published 300GB of compressed monolingual data used in their work to create a large domain-adapted Common-Crawl language model.", "We use the first 100M lines.", "Preprocessing follows Section 2.2, including BPE segmentation.", "Similarly to Gangi and Federico (2017) or Yannakoudakis et al. (2017), we use Word2vec (Mikolov et al., 2013) with standard settings to create word vectors.", "Since weights between source, target and output embeddings are tied, these embeddings are inserted once into the model but affect computations three-fold; see the blue elements in Figure 2.", "The remaining parameters of the model are initialized randomly.", "We refer to this adaptation as +Pretrain-Emb.", "(Footnote 8: https://github.com/grammatical/baselines-emnlp2016)", "Following Ramachandran et al.
(2017), we first train a GRU-based language model on the monolingual data.", "The architecture of the language model corresponds as much as possible to the structure of the decoder of the sequence-to-sequence model.", "All pieces that rely on the attention mechanism or the encoder have been removed.", "After training for two epochs, all red parameters (including embedding layers) in Figure 2 are copied from the language model to the decoder.", "Remaining parameters are initialized randomly.", "This configuration is called +Pretrain-Dec.", "We pretrain each model separately to make sure that all weights have been initialized randomly.", "Table 6 summarizes the results for our transfer learning experiments.", "We compare the effects of pre-training with and without the edit-weighted MLE objective and can see that pre-training has significantly positive effects in both settings.", "The last result of 53.3% M 2 on the CoNLL-2014 benchmark matches the currently highest reported numbers (53.14% M 2 ) by Chollampatt and Ng", "(2017) for a much more complex system and outperforms the best-reported neural GEC system (Ji et al., 2017) by 8% M 2 .", "Phrase-based SMT systems benefit naturally from large monolingual language models, also in the case of GEC, as shown by Junczys-Dowmunt and Grundkiewicz (2016).", "Previous work (Xie et al., 2016; Ji et al., 2017) on neural GEC used n-gram language models to incorporate monolingual data.", "Xie et al. (2016) built a large 5-gram model and integrated it directly into their beam search algorithm, while Ji et al. (2017) re-use the language model provided by Junczys-Dowmunt and Grundkiewicz (2016) for n-best list re-ranking.", "We already combined monolingual data with our GEC models via pre-training, but exploiting separate language models is attractive as no additional training is required.", "Here, we reuse the neural language model created for pre-training.", "Similarly to Xie et al. (2016), the score $s(y|x)$ for a correction $y$ of sentence $x$ is calculated as $s(y|x) = \frac{1}{|y|}\left[\sum_{i=1}^{4} \log P_i(y|x) + \lambda \log P_{\mathrm{LM}}(y)\right]$, where $P_i(y|x)$ is the translation probability for the $i$-th model in an ensemble of 4.", "$P_{\mathrm{LM}}(y)$ is the language model probability for $y$, weighted by $\lambda$.", "We normalize by sentence length $|y|$.", "Using the dev set, we choose the $\lambda$ that maximizes this score via linear search in the range $[0, 2]$ with step $0.1$.", "Table 7 summarizes results for language model ensembling with three of our intermediate configurations.", "All configurations benefit from the language model in the ensemble, although gains for the pre-trained model are rather small.", "So far we analyzed model-independent methods: only training data, hyper-parameters, parameter initialization, and the objective function were modified.", "In this section we investigate if these techniques can be generalized to deeper or different architectures.", "Deep RNN: a deep RNN-based model (Miceli Barone et al., 2017) used by Sennrich et al. (2017a) for their WMT 2017 submissions.", "This model is based on the shallow model we used until now.", "It has single-layer RNNs in the encoder and decoder, but increases depth by stacking multiple GRU-style blocks inside one RNN cell.", "A single RNN step passes through all blocks before recursion.", "The encoder RNN contains 4 stacked GRU blocks, the decoder 8 (1 + 7 due to the conditional GRU).", "Following Sennrich et al.
(2017a), we enable layer-normalization in the RNN layers.", "State and embedding dimensions used throughout this work and in Sennrich et al. (2017a) are the same.", "Transformer: the self-attention-based model by Vaswani et al. (2017).", "We base our model on their default architecture of 6 complex attention/self-attention blocks in the encoder and decoder and use the same model dimensions: the embedding vector size is 512 (as before), the filter size is 2048.", "As the deep models are less reliably trained with asynchronous SGD, we change the training algorithm to synchronous SGD and for both models follow the recipe proposed in Vaswani et al. (2017), with an effective base learning rate of 0.0003, learning rate warm-up during the first 16,000 iterations, and an inverse square-root decay after the warm-up.", "As before, we average the best 8 checkpoints.", "We increase dropout probability over RNN layers to 0.3 for Deep-RNN and similarly set dropout between transformer layers to 0.3.", "Source-word dropout as a noising technique remains unchanged.", "(Footnote 9: The pre-training procedure, however, needs to be adapted to the model architecture if we want to take advantage of every shared parameter; otherwise matching parameter subsets could probably be used successfully.)", "We reuse all methods included up to +Pretrain-Dec.", "The pre-training procedure as described in Section 4.1 needs to be modified in order to maximize the number of pre-trained parameters for the larger model architectures.", "Again, we train decoder-only models as typical language models by removing all elements that depend on the encoder, including attention mechanisms over the source context.", "We can keep the decoder self-attention layers in the transformer model.", "We train for two epochs on our monolingual data, reusing the hyper-parameters for the parallel case above.", "Table 8 summarizes the results for deeper models on the CoNLL dev and test set.", "Both deep models improve significantly over the shallow model, with the transformer model reaching our best reported result on the CoNLL-2014 test set.", "For that test set it seems that ensembling with language models that were used for pre-training is ineffective when measured with M 2 , while on the JFLEG data measured with GLEU we see strong improvements (Fig. 3b).", "We summarize the results for our experiments in Figure 3 and provide results on the JFLEG test set.", "Weights for the independent language model in the full ensemble were chosen on the respective dev sets for both tasks.", "Comparing results according to both benchmarks and evaluation metrics (M 2 for CoNLL, GLEU for JFLEG), it seems we can isolate the following set of reliable methods for state-of-the-art neural grammatical error correction: Ensembling neural GEC models with monolingual language models; Dropping out entire source embeddings;", "Weighting edits in the training objective during optimization (+Edit-MLE); Pre-training on monolingual data; Ensembling of independently trained models; Domain and error adaptation (+Domain-Adapt., +Error-Adapt.)
towards a specific benchmark; Increasing model depth.", "Combinations of these generally model-independent methods helped raise the performance of pure neural GEC systems by more than 10% M 2 on the CoNLL-2014 benchmark, also outperforming the previous state-of-the-art (Chollampatt and Ng, 2017), a hybrid phrase-based system with a complex spell-checking component, by 2%.", "(Footnote 10: Increasing depth or changing the architecture to the Transformer model is clearly not model-independent.) We also showed that a pure neural system can easily", "outperform a strong pure phrase-based SMT system (Junczys-Dowmunt and Grundkiewicz, 2016) when similarly adapted to the GEC task.", "On the JFLEG benchmark we outperform the previously best pure neural system (Sakaguchi et al., 2017) by 5.9% GLEU (4.5% if no monolingual data is used).", "Improvements over SMT-based systems like Napoles and Callison-Burch (2017) and Chollampatt and Ng (2017) are significant and constitute the new state-of-the-art on the JFLEG test set.", "This work was partially funded by Facebook.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Facebook." ]
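The edit-weighted MLE objective reconstructed in the record above maps directly onto a few lines of code. The following is a hedged PyTorch sketch, assuming the shared source/target sub-word vocabulary described in the record and token alignments precomputed with fast-align; tensor names and shapes are illustrative, not the authors' Marian implementation.

```python
import torch
import torch.nn.functional as F

def edit_weighted_mle(logits, target, aligned_src, pad_id, edit_weight=3.0):
    """Token-level cross-entropy where edited tokens are scaled by Lambda.

    logits      -- (batch, tgt_len, vocab) decoder output scores
    target      -- (batch, tgt_len) gold target token ids
    aligned_src -- (batch, tgt_len) id of the source token aligned to each
                   target token (e.g., from fast-align); pad_id if unaligned
    Comparing ids across source and target assumes the shared (tied)
    sub-word vocabulary used here for GEC.
    """
    nll = F.cross_entropy(logits.transpose(1, 2), target,
                          ignore_index=pad_id, reduction="none")
    # lambda(x_{a_t}, y_t) = Lambda if the target token differs from its
    # aligned source token (i.e., it is part of an edit), else 1.
    is_edit = (target != aligned_src) & (target != pad_id)
    weights = torch.where(is_edit,
                          torch.full_like(nll, edit_weight),
                          torch.ones_like(nll))
    denom = (target != pad_id).sum().clamp(min=1)
    return (weights * nll).sum() / denom
```

With edit_weight=1.0 this reduces to plain MLE, so the hyper-parameter directly trades recall (higher Lambda) against precision, matching the trade-off the record describes.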
[ "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "method", "abstain", "abstain", "objective", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "result", "result", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "other" ]
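The length-normalized ensemble-plus-LM score s(y|x) and the linear search over the LM weight from the same record can be sketched as follows; the hypothesis-dictionary keys and the metric callable are assumptions made for illustration.

```python
def ensemble_lm_score(model_logprobs, lm_logprob, length, lm_weight):
    """s(y|x) = (1/|y|) * (sum_i log P_i(y|x) + lambda * log P_LM(y))."""
    return (sum(model_logprobs) + lm_weight * lm_logprob) / max(length, 1)

def tune_lm_weight(dev_nbest, metric):
    """Linear search for lambda in [0, 2] with step 0.1, as in the text.

    dev_nbest -- list of n-best lists; each hypothesis is a dict with keys
                 'model_logprobs', 'lm_logprob', 'length', 'text' (assumed)
    metric    -- callable scoring the list of selected hypothesis texts
                 (e.g., M2 or GLEU against the dev references)
    """
    best_weight, best_score = 0.0, float("-inf")
    for step in range(21):
        weight = round(0.1 * step, 1)
        picks = [max(hyps, key=lambda h: ensemble_lm_score(
                     h["model_logprobs"], h["lm_logprob"],
                     h["length"], weight))["text"]
                 for hyps in dev_nbest]
        score = metric(picks)
        if score > best_score:
            best_weight, best_score = weight, score
    return best_weight
```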
[ "Biomedical Information Extraction from scientific literature presents two unique and nontrivial challenges.", "First, compared with general natural language texts, sentences from scientific papers usually possess wider contexts between knowledge elements.", "Moreover, comprehending the fine-grained scientific entities and events urgently requires domain-specific background knowledge.", "In this paper, we propose a novel biomedical Information Extraction (IE) model to tackle these two challenges and extract scientific entities and events from English research papers.", "We perform Abstract Meaning Representation (AMR) parsing to compress the wide context and uncover a clear semantic structure for each complex sentence.", "Besides, we construct the sentence-level knowledge graph from an external knowledge base and use it to enrich the AMR graph to improve the model's understanding of complex scientific concepts.", "We use an edge-conditioned graph attention network to encode the knowledge-enriched AMR graph for biomedical IE tasks.", "Experiments on the GENIA 2011 dataset show that the AMR and external knowledge have contributed 1.8% and 3.0% absolute F-score gains respectively.", "In order to evaluate the impact of our approach on real-world problems that involve topic-specific fine-grained knowledge elements, we have also created a new ontology and annotated corpus for entity and event extraction for the COVID-19 scientific literature, which can serve as a new benchmark for the biomedical IE community.", "1 Introduction: The task of Biomedical Information Extraction (IE) aims to extract structured knowledge from biomedical literature, which is usually represented by an information network composed of scientific named (Footnote 1: Data and source code are publicly available at https://github.com/zhangzx-uiuc/Knowledge-AMR .)", "entities, relations, and key events.", "It is an essential task for accelerating practical applications of the results and achievements from scientific research.", "For example, practical progress on combating COVID-19 depends highly on efficient transmission, assessment and extension of cutting-edge scientific research discovery (Wang et al., 2020a; Lybarger et al., 2020; Müller et al., 2020).", "In this scenario, a powerful biomedical IE system will be able to create a dynamic knowledge base from the surging number of relevant papers, making it more efficient to get access to the latest knowledge and use it for scientific discovery, as well as diagnosis and treatment of patients.", "IE from biomedical scientific papers presents two unique and non-trivial challenges.", "First, the authors of scientific papers tend to compose long sentences, where the event triggers and entity mentions are usually located far away from each other within the sentence.", "As shown in Table 1, compared to the ACE05 dataset in the news domain, the average distance between triggers and entities is much longer in biomedical scientific papers.", "Therefore, it is more difficult for IE models to capture the global context with only flat sequential sentence encoders such as BioBERT (Lee et al., 2020) and SciBERT (Beltagy et al., 2019).", "[Figure 1 example sentence: We identified a cell-type-specific differential response: CREB, CTF, OTF-1, OFT-2, and NF-kappa B genes were strongly induced 1 to 4 hours after influenza A virus infection in the monocytic cell line Mono Mac 6, while in
freshly prepared human monocytes no significant changes were detected.]", "Moreover, comprehending sentences from scientific papers urgently requires external knowledge, because there are a number of domain-specific unexplained common expressions, acronyms, and abbreviations that are difficult for the model to understand.", "For instance, as shown in Figure 1, it is nearly impossible for a typical end-to-end model, which only takes in the sentence as input, to get a clear understanding of CTF, OTF-1, and OTF-2 without background knowledge.", "Moreover, the complex biomedical and chemical interactions between multifarious chemicals, genes, and proteins are even harder to understand, in addition to the entities themselves.", "To tackle these two challenges, we propose a novel framework for biomedical IE that integrates Abstract Meaning Representation (AMR) (Banarescu et al., 2013) and external knowledge graphs.", "AMR is a semantic representation language that converts the meaning of each input sentence into a rooted, directed, labeled, acyclic graph structure.", "AMR semantic representation includes PropBank (Palmer et al., 2005) frames, non-core semantic roles, coreference, entity typing and linking, modality, and negation.", "The nodes in AMR are concepts instead of words, and the edge types are much more fine-grained than in traditional semantic representations like dependency parsing and semantic role labeling.", "We train a transformer-based AMR semantic parser (Fernandez Astudillo et al., 2020) on biomedical scientific texts and use it in our biomedical IE model.", "To better handle long sentences with distant trigger and entity pairs, we use AMR parsing to compress each sentence and to better capture global interactions between tokens.", "For example, as shown in Figure 1, the Positive Regulation event trigger changes is located far away from its arguments CTF, OTF-1, OTF-2 in the original sentence.", "However, in the AMR graph, such trigger-entity pairs are linked within two hops.", "Therefore, it will be much easier for the model to identify such kinds of events with the guidance of AMR parsing.", "In addition, to make better use of the external knowledge, we extract a global knowledge graph from the Comparative Toxicogenomics Database (CTDB) that covers all biomedical entities in the corpus.", "For each sentence, we select a minimal connected subgraph as the sentence-level KG.", "We use this sentence KG to enrich AMR nodes and edges to give the model additional prior domain knowledge, especially the biomedical and chemical interactions between different genes and proteins.", "These fine-grained relations are important for biomedical event extraction.", "For example, as in Figure 1, the incorporation of the external KG can indicate that Mono Mac 6 can result in leukemia, which will affect the expression of CTF, OTF-1, and OFT-2 proteins.", "With this external knowledge, it will be much easier for the model to identify such proteins as the arguments of a Positive Regulation event.", "We encode the knowledge-enriched AMR graph using an edge-conditioned graph attention network (GAT) that is able to incorporate fine-grained edge features before conducting IE tasks.", "We evaluate our model on the existing benchmark GENIA-2011 dataset, where our model greatly outperforms our baseline model by 4.8%.", "In addition to the existing GENIA-2011 benchmark, we also aim to evaluate the effectiveness of our framework on topic-specific literature.", "We develop a new ontology for entities and events with a large corpus from COVID-19 research papers, which is specifically annotated by medical professionals and can serve as a new
benchmark for the biomedical IE community.", "The major contributions of this paper are summarized as follows.", "We are the first to enrich the AMR graph with the external knowledge and use a graph neural network to incorporate the fine-grained edge features.", "We evaluate our model and create a new state-of-the-art for biomedical event extraction on the GENIA-2011 corpus.", "We develop a new dataset from COVID-19 related research papers based on a new ontology that contains 25 fine-grained entity types and 14 event types.", "As shown in Figure 2, our proposed biomedical information extraction framework mainly consists of four steps.", "First, we extract a global knowledge graph (KG) that contains all the entities from the corpus, and select out a sentence-level knowledge subgraph for the input sentence.", "Then, we perform AMR parsing and construct the sentence-level AMR graph, and use the sentence knowledge subgraph to enrich the AMR graph by adding additional nodes and edges.", "After that, given the contextualized word embeddings, we first identify entity and trigger spans, and then conduct message passing on the knowledge enriched AMR graph based on an edge-conditioned GAT.", "Finally, we use feed-forward neural networks based classifiers for trigger and argument labeling.", "Global Knowledge Graph We use the Comparative Toxicogenomics Database (CTDB) 2 which contains fine-grained biomedical and chemical interactions between chemicals, genes, and diseases.", "We construct a global knowledge graph that involves all entities from the corpus with their pairwise chemical interactions.", "We extract these entity pairs with their biomedical interactions as triples, e.g., in Figure 1, ( Mono Mac 6, results, leuku-mia ) indicates that Mono Mac 6 cell can result in the disease of leukemia .", "We merge all the extracted triples and form a global knowledge graph G g = ( V g , E g ) .", "Our extracted global KG consists of 39,436 nodes and 590,235 edges.", "Sentence-level Knowledge Graph Given an input sentence, we aim to generate a sentence-level KG by selecting out a subgraph from the global KG, which contains the external knowledge between all entities within the sentence.", "Given an input sentence S , we use SciSpacy 3 to obtain all the related biomedical entities, including genes, 2 http://ctdbase.org/ 3 https://allenai.github.io/scispacy/ chemicals, cells, and proteins.", "We then link each entity mention from the sentence to the nodes in global KG G g = ( V g , E g ) .", "To select the sentence subgraph from the global KG, given the set of entity mentions E = { 1 , , |E| } (where each i is a word span), we select the connected subgraph that covers all entity mentions in E with the minimal number of nodes as the sentence KG.", "Note that such a sentence KG construction procedure can be accomplished in linear time complexity in terms of the number of nodes | V g | .", "This can be done by first traversing all the nodes in the global KG using depth-first search and obtaining all connected subgraphs of G g in linear time.", "After that, we select the set of subgraphs that can cover E and then choose the one G s = ( V s , E s ) with the minimal number of nodes as the sentence KG.", "AMR Parsing After obtaining the sentence KG, we fuse it with the AMR graph as an external knowledge enrichment procedure.", "Given an input sentence S = { w 1 , w 2 , , w N } , we first perform AMR parsing and obtain a sentence-level AMR graph GA = ( VA , EA ) with an alignment between AMR nodes and the spans in 
the original sentence.", "We employ the transformer-based AMR parser 4 (Fernandez Astudillo et al., 2020) pretrained on the Biomedical AMR corpus 5 released from the AMR official website.", "Each node v Ai = ( m Ai , n Ai ) V a represents an AMR concept or predicate, and we use ( m Ai , n Ai ) to denote the corresponding span for such an AMR node.", "For AMR edges, we use e Ai,j to denote the specific relation type between nodes v Ai and v Aj in AMR annotations (e.g., ARG-x , :time , :location , etc.).", "We randomly initialize the edge embeddings as a lookup embedding matrix EAMR , which is optimized in end-to-end training.", "Enrich AMR with sentence KG Given a pair of AMR graph GA and sentence KGGS , we fuse them into an enriched AMR graph G = ( V, E ) as the external reference for the subsequent information extraction tasks.", "In general, there are three cases for fusing each sentence's KG nodes v si V s into the AMR graph.", "First , if v si represents an entity within the sentence, and there is also an AMR 4 https://github.com/IBM/ transition-amr-parser 5 https://amr.isi.edu/download/2018-01-25/ amr-release-bio-v3.0.txt were strongly induced no significant changes , CTF OTF-1 OFT-2 BERT Embeddings Node Identification Global KG Sentence KG AMR Graph EnrichedAMR Graph IE Results Message Passing Corpus EntityLinking ExternalKB Sentence AMR Parsing Look up Fuse Figure 2: Overview of our proposed framework for biomedical information extraction.", "node v Aj with the same span, we then match v si to v Aj and add all KG edges linked to v si into the AMR graph.", "Second , if v si represents an entity within the sentence, but there is not any AMR node v A j with a matched span, we then add a new node (as well as all related edges) into the AMR graph.", "Third , if v si is an additional KG node that does not represent any entity in the sentence, we directly add this node into the AMR graph with all related KG edges.", "After we match and link all the sentence KG nodes towards the AMR graph, we obtain the fused graph G = ( V, E ) .", "Note that such a graph fusion procedure could result in multiple edges between a pair of nodes.", "We keep all these edges with their embeddings for the subsequent message passing procedure.", "The illustration for the graph fusion procedure is shown in Figure", "2. 
2.4 Node Identification and Message Passing Contextualized Encoder Given an input sentence S , we use the BERT model pretrained on biomedical scientific texts (Lee et al., 2020) to obtain the contextualized word representations { x 1 , x 2 , , x N } .", "If one word is split into multiple pieces by the BERT tokenizer, we take the average of the representation vectors for all pieces as the final word representation.", "Node Identification After encoding the input sentence using BERT, we first identify the entity and trigger spans as the candidate nodes.", "Similar to (Wadden et al., 2019), given the contextualized word representations, we first enumerate all possible spans up to a fixed length K , and calculate each span representation according to the concatenation of the left and right endpoints and a trainable feature vector characterizing the span length 6 .", "Specifically, given each span s i = [ start ( i ) , end ( i )] , the span representation vector is: s i = (cid:2) x start ( i ) , x end ( i ) , z ( s i ) (cid:3) , (1) where z ( s i ) denotes a trainable feature vector that is only determined by the span length.", "We use separate binary classifiers for each specific entity and trigger type to handle the spans with multiple labels.", "Each binary classifier is a feed-forward neural network with ReLU activation in the hidden layer, which is trained with binary cross-entropy loss jointly with the whole model.", "In the diagnostic setting of using gold-standard entity mentions, we only employ span enumeration for event trigger identification, and use the gold-standard entity set for the following event extraction steps.", "Edge-conditioned GAT To fully exploit the information of external knowledge and AMR semantic structure, similar to (Zhang and Ji, 2021), we use an L -layer graph attention network to let the model aggregate neighbor information from the fused graph G = ( V, E ) .", "We use h li to denote the node feature for v i V in layer l , and e i,j to represent the edge feature vector for e i,j E .", "To update the node feature from l to l + 1 , we first calculate the attention score for each neighbor j N i based on the concatenation of node features h li , h lj and edge features e i,j .", "where W , W e are trainable parameters, and f l and ( ) are a single layer feed-forward neural network and LeakyReLU activation function respectively.", "Then we obtain the neighborhood information h i by the weighted sum of all neighbor features: h i = (cid:88) k N i li,j W h lk , where W is a trainable parameter.", "The updated node feature is calculated by a combination of the original node feature and its neighborhood information, where controls the level of message passing between neighbors.", "Note that our edge-conditioned GAT structure is similar to (Huang et al., 2020).", "The main difference is that (Huang et al., 2020) only uses edge features for calculating the attention score l i,j , while we use the concatenation of the feature vectors of each edge and its involved pair of nodes.", "Such a method can better characterize differing importance levels for neighbor nodes, and thus yield better model performance.", "We select the last layer h Li as the final representation for each entity or trigger.", "Message Passing Given the knowledge enriched AMR graph G = ( V, E ) and representation vectors of extracted trigger and entity spans, we initialize the feature vectors for nodes and edges as follows.", "For each KG node v si which does not belong to any AMR node, we initialize its 
feature vectors v si using KG embeddings pre-trained on the global KG using TransE (Bordes et al., 2013).", "For each original AMR node v Ai = ( m Ai , n Ai ) , we first calculate its span representation v Ai according to Eq.", "(1), and then use a linear transformation WA v Ai + b A to initialize the node feature vector h 0 i .", "For edge features, we use pre-trained TransE embeddings for KG edges, and use the trainable embedding matrix EAMR for AMR relations.", "We use our proposed edge-conditioned GAT to conduct message passing and get the feature vectors from the final layer as the updated node representations.", "We obtain the final representation vectors for the trigger and entity nodes and denote them as { 1 , , |T | } and { 1 , , |E| } respectively.", "entity set E with the representations i , we use LI to denote the loss for binary classifiers for event trigger and entity extraction in the node identification step.", "For event argument role labeling, we concatenate candidate trigger-entity pairs or trigger-trigger pairs (for nested events) and feed them into two separate FFNs (with softmax activation function in the output layer) for role type classifica-tion, where we have y tti,j = FFN tt ([ i : j ]) or y tei,j = FFN te ([ i : j ]) .", "The overall training objective is defined in a multi-task setting, which includes the cross-entropy loss for trigger and argument classification, as well as the binary classifi-cation loss LI .", "Data Similarly to the recent work (Li et al., 2019; Huang et al., 2020; Ramponi et al., 2020), we also conduct experiments on the BioNLP GENIA 2011 (Kim et al., 2011) dataset consisting of both abstracts and main body texts from biomedical scientific papers.", "Similarly to previous work (Li et al., 2019; Huang et al., 2020; Ramponi et al., 2020), we only focus on extracting the core events, which involves Protein entities, 9 fine-grained event types, and 2 event argument types.", "We do not incorporate event ontology or training data from the newer versions of the BioNLP GENIA shared tasks (e.g., GENIA 2013) to ensure fair comparisons with previous models.", "The statistics of this dataset are shown in Table", "2. 
[Table 2, GENIA 2011 dataset statistics (train/dev/test): documents 908/259/231; sentences 8,620/2,846/3,348; proteins 11,625/4,690/5,301; events 10,310/3,250/4,487.] The original GENIA dataset", "is annotated in paragraphs.", "Following Li et al. (2019), we focus on sentence-level event extraction and only keep events and argument roles within each sentence (around 94% of the events).", "For KG embeddings, we use 600-dim embedding vectors pre-trained on the global knowledge graph using", "TransE.", "We use a two-layer edge-conditioned GAT, and the feature dimensions are 2048 for nodes and 256 for edges.", "Specifically, the FFNs consist of two layers with a dropout rate of 0.4, where the numbers of hidden units are 150 for entity extraction and 600 for event extraction.", "We train our model with Adam (Kingma and Ba, 2015) on NVIDIA Tesla V100 GPUs for 80 epochs (one training epoch takes approximately 4 minutes) with learning rate 1e-5 for BERT parameters and 5e-3 for other parameters.", "We select the model checkpoint with the best F1-score on the development set for evaluation on the test set from the official website.", "We consider the most recent models on biomedical event extraction, KB-Tree-LSTM (Li et al., 2019), GEANet (Huang et al., 2020), BEESL (Ramponi et al., 2020), and DeepEventMine (Trieu et al., 2020), for comparison in our experiments, and we report the precision, recall, and F1 score from the GENIA 2011 online test set evaluation service (Footnote 7: http://bionlp-st.dbcls.jp/GE/2011/eval-test/).", "In addition to the previous models, we also conduct ablation studies to evaluate the contributions of different parts in our model.", "We adopt the model variants BERT-Flat and BERT-AMR, where BERT-Flat only uses the BERT representations without any help from AMR and KG, and BERT-AMR denotes the model with an edge-conditioned GAT to encode the AMR graph without incorporating external knowledge.", "We report the performance of our model and compare it with the most recent biomedical IE models KB-Tree-LSTM (Li et al., 2019), GEANet (Huang et al., 2020), BEESL (Ramponi et al., 2020), and DeepEventMine (Trieu et al., 2020) in Table", "3. In general, our KG-enriched AMR model can achieve slightly higher performance compared with the state-of-the-art model DeepEventMine.", "Besides, our model greatly outperforms all other previous models for biomedical event extraction.", "To further measure the impact of each individual part in our model, we also introduce two model variants for the ablation study.", "We can see that compared with simply finetuning a flat BERT model, the AMR parsing contributes a 1.84% absolute gain on F1-score, while the incorporation of the external knowledge graph contributes 2.95%.", "We also report the overall development set F1 scores without using gold-standard entities, and compare the performance with BEESL in Table", "4.
We can discover that our model performs significantly better than the BEESL model, which proves that our model can better handle practical scenarios without gold-standard entities.", "COVID-19 Dataset In order to evaluate the impact of our approach on real-world problems, besides the GENIA dataset, we also develop a new dataset specifically labeled by medical professionals from research papers related to COVID-19.", "We select out 186 full-text articles with 12,916 sentences from PubMed and PMC.", "Three experienced annotators who are biomedical domain experts have participated in the annotation, and the Cohen's Kappa scores for pairwise agreement between the annotators are 0.79, 0.84, and 0.74 respectively.", "The pre-defined entity and event type distributions in this dataset are shown in Table 6.", "Results We evaluate our proposed model by removing the event argument labeling procedure to accommodate a scenario limited to entity and event trigger labeling, that is, we remove the argument role classifiers FFN tt and FFN te while the overall training loss in Eq.", "(3) only contains the first two terms for span identification and event trigger clas-sification.", "As shown in Table 5, our model achieves 78.05% overall F1 score with 83.60% F1 on entity extraction task and 72.37% F1 on event extraction.", "The entity extraction performance on the COVID dataset is lower than typical coarse-grained entity extraction model performance for BERT-like models on other datasets (e.g., our model can get around 86% F1 score for entity extraction on GENIA-2011 development set).", "This is probably because our proposed COVID-19 dataset is challenging with more find-grained biomedical entity and event types.", "In the first example, we can see that the flat model fails to identify CAII as an entity of the bind event, which is probably due to the long distance between the trigger bind and the argument CAII (the model successfully detects the other two arguments V-erbA and C-erbA because they are much nearer).", "With the help of AMR parsing, the model successfully links CAII to the bind event since in the AMR graph, the three entities C-erbA , V-erbA , and CAII are located within the same number of hops from the bind trigger.", "But the model still cannot recognize CAII as the theme of transcription .", "This is probably because the model is not clear what whose refers to in the sentence.", "However, with the help of external knowledge, the model knows in advance that V-erbA could inhibit the transcription of CAII , thus it is able to identify CAII as the theme of the transcription event.", "In the second example, the flat model is confused about which entity belongs to which event between two binding events in the same sentence.", "Here, the AMR parsing provides a clear tree structure and guides the model to correctly link the event-entity pairs (i.e., heterodimers with RAR beta , binding with VDR ).", "However, the BERT-AMR model still fails to identify heterodimers as the theme of stimulated .", "With the further help of the external KG, the model knows in advance that RA can stimulate the generation of RAR beta heterodimers, and thus it is able to correctly identify a positive regulation between these two triggers.", "We compare the predictions from our model with the gold-standard annotations on the development set and discover the following typical remaining error cases.", "Non-verb Event Triggers Most of the biomedical events are triggered by verbs ( bind , express , etc.) 
or their noun forms (binding, expression, etc.).", "However, there are also events triggered by adjectives (e.g., subsequent), proper nouns (e.g., mRNA, SiRNA), and even prepositions (e.g., from) and conjunctions (e.g., rather than).", "Our model misses a lot of these non-verb event triggers due to insufficient training examples.", "Misleading Verb Prefix We also find that the prefix of a verb can sometimes be misleading for event trigger classification, especially for Negative Regulation events.", "Many Negative Regulation events are triggered by words with certain prefixes (in- or de-), e.g., inactivation, inactivated, decrease, degradation, etc., representing negative interactions.", "As a result, the model mistakenly labels many other words with the same prefixes as Negative Regulation event triggers.", "For example, in the sentence: Dephosphorylation of 4E-BP1 was also observed ... , the word dephosphorylation should not be classified as a Negative Regulation event although it has a de- prefix.", "This is because dephosphorylation denotes the inverse chemical process of phosphorylation, rather than negative regulation between events or proteins.", "[Example sentence: Here, we show that V-erbA and C-erbA bind directly to sequences within the promoter of the erythrocyte-specific carbonic anhydrase II (CAII), a gene whose transcription is efficiently suppressed by V-erbA.]", "Biomedical Information Extraction A number of previous studies contribute to biomedical event extraction with various techniques, such as dependency parsing (McClosky et al., 2011; Li et al., 2019), external knowledge bases (Li et al., 2019; Huang et al., 2020), joint inference of triggers and arguments (Poon and Vanderwende, 2010; Ramponi et al., 2020), Abstract Meaning Representation (Rao et al., 2017), search-based neural models (Espinosa et al., 2019), and multi-turn question answering (Wang et al., 2020b).", "Recently, to handle nested biomedical events, BEESL (Ramponi et al., 2020) models biomedical event extraction as a unified sequence labeling problem for end-to-end training.", "DeepEventMine (Trieu et al., 2020) proposes to use a neural-network-based classifier to decide the structure of complex nested events.", "Our model is also an end-to-end training pipeline, but additionally utilizes fine-grained AMR semantic parsing and external knowledge to improve the performance.", "Utilization of External Knowledge In terms of utilization of external knowledge, (Li et al., 2019) propose a knowledge-driven Tree-LSTM framework to capture dependency structures and entity properties from an external knowledge base.", "More recently, GEANet (Huang et al., 2020) introduces a Graph Edge-conditioned Attention Network (GEANet) that incorporates domain knowledge from the Unified Medical Language System (UMLS) into the IE framework.", "The main difference in our model is that we use fine-grained AMR parsing to compress the wide context, and use an external KG to enrich the AMR to better incorporate domain knowledge.", "Incorporating external knowledge is also widely used in other tasks such as relation extraction (Chan and Roth, 2010; Cheng and Roth, 2013), and QA for domain-specific (science) questions (Pan et al., 2019).", "Biomedical Benchmarks for COVID-19 (Lo et al., 2020) release a dataset containing open-access biomedical papers related to COVID-19.", "Much research has been based on this dataset, including Information Retrieval (Wise et al., 2020), Entity Recognition
(Wang et al., 2020b), distant supervision on fine-grained biomedical named entity recognition to support automatic information retrieval indexing or evidence mining (Wang et al., 2020c), and an end-to-end Question Answering (QA) system for COVID-19 with domain-adaptive synthetic QA training (Reddy et al., 2020).", "Our COVID-19 dataset will further advance the field in developing effective IE techniques specifically for the COVID-19 domain.", "In this paper, we propose a novel biomedical Information Extraction framework to effectively tackle two unique challenges for scientific domain IE:", "complex sentence structure and unexplained concepts.", "We utilize AMR parsing to compress wide contexts, and incorporate external knowledge into the AMR.", "Our proposed model produces significant performance gains compared with most state-of-the-art methods.", "In the future, we intend to exploit tables and figures in the scientific literature for multimedia representation.", "We also plan to incorporate coreference graphs among sentences to further enrich contexts.", "We will also continue exploring the use of richer information from an external knowledge base to further improve the model's performance.", "This research is based upon work supported by the Molecule Maker Lab Institute: An AI Research Institutes program supported by NSF under Award No. 2019897, NSF No. 2034562, U.S. DARPA KAIROS Program No.", "FA8750-19-2-1004, the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract No.", "FA8650-17-C-9116, and Air Force No.", "FA8650-17-C-7715.", "Any opinions, findings and conclusions or recommendations expressed in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "result", "method", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "result", "abstain", "objective", "objective", "objective", "abstain", "objective", "method", "method", "method", "method", "method", "method", "method", "result", "method", "method", "objective", "other", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "other", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "other", "other", "other", "abstain", "other", "other", "method", "other", "other", "other", "objective", "objective", "abstain", "method", "objective", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other" ]
[ "Large pre-trained language models achieve state-of-the-art results when fine-tuned on downstream NLP tasks.", "However, they almost exclusively focus on text-only representation, while neglecting cell-level layout information that is important for form image understanding.", "In this paper, we propose a new pre-training approach, StructuralLM, to jointly leverage cell and layout information from scanned documents.", "Specifically, we pre-train StructuralLM with two new designs to make the most of the interactions of cell and layout information: 1) each cell as a semantic unit; 2) classification of cell positions.", "The pre-trained StructuralLM achieves new state-of-the-art results in different types of downstream tasks, including form understanding (from 78.95 to 85.14), document visual question answering (from 72.59 to 83.94) and document image classification (from 94.43 to 96.08).", "Document understanding is an essential problem in NLP, which aims to read and analyze textual documents.", "In addition to plain text, many real-world applications require to understand scanned documents with rich text.", "As shown in Figure 1, such scanned documents contain various structured information, like tables, digital forms, receipts, and invoices.", "The information of a document image is usually presented in natural language, but the format can be organized in many ways from multi-column layout to various tables/forms.", "Inspired by the recent development of pre-trained language models (Devlin et al., 2019; Liu et al., 2019; Wang et al., 2019) in various NLP tasks, recent studies on document image pretraining (Zhang et al., 2020; Xu et al., 2019) have pushed the limits of a variety of document image understanding tasks, which learn the interaction between text and layout information across scanned document images.", "Xu et al. 
(2019) propose LayoutLM, which is a pre-training method of text and layout for document image understanding tasks.", "It uses 2D-position embeddings to model the word-level layout information.", "However, modeling word-level layout information alone is not enough; the model should consider the cell as a semantic unit.", "It is important to know which words are from the same cell and to model the cell-level layout information.", "For example, as shown in Figure 1", "(a), which is from the form understanding task (Jaume et al., 2019), determining that the LORILLARD and the ENTITIES are from the same cell is critical for semantic entity labeling.", "The LORILLARD ENTITIES should be predicted as an Answer entity, but LayoutLM predicts LORILLARD and ENTITIES as two separate entities.", "The input to traditional natural language tasks is usually presented as plain text, and text-only models need to obtain the semantic representation of the input sentences and the semantic relationship between sentences.", "In contrast, document images like forms and tables are composed of cells that are recognized as bounding boxes by OCR.", "As shown in Figure 1, the words from the same cell generally express a meaning together and should be modeled as a semantic unit.", "This requires a text-layout model to capture not only the semantic representation of individual cells but also the spatial relationship between cells.", "In this paper, we propose StructuralLM to jointly exploit cell and layout information from scanned documents.", "Different from previous text-based pre-trained models (Devlin et al., 2019; Wang et al., 2019) and LayoutLM (Xu et al., 2019), StructuralLM uses cell-level 2D-position embeddings with tokens in a cell sharing the same 2D-position.", "This makes StructuralLM aware of which words are", "from the same cell, and thus enables the model to derive representations for the cells.", "In addition, we keep classic 1D-position embeddings to preserve the positional relationship of the tokens within every cell.", "We propose a new pre-training objective called cell position classification, in addition to the masked visual-language model.", "Specifically, we first divide an image into N areas of the same size, and then mask the 2D-positions of some cells.", "StructuralLM is asked to predict which area the masked cells are located in.", "In this way, StructuralLM is capable of learning the interactions between cells and layout.", "We conduct experiments on three publicly available benchmark datasets, all of which contain table or form images.", "Empirical results show that our StructuralLM outperforms strong baselines and achieves new state-of-the-art results in the downstream tasks.", "In addition, StructuralLM does not rely on image features, and thus is readily applicable to real-world document understanding tasks.", "We propose a structural pre-trained model for table and form understanding.", "It jointly leverages cells and layout information in two ways: cell-level positional embeddings and a new pre-training objective called cell position classification.", "StructuralLM significantly outperforms all state-of-the-art models in several downstream tasks including form understanding (from 78.95 to 85.14), document visual question answering (from 72.59 to 83.94) and document image classification (from 94.43 to 96.08).", "We present StructuralLM, a self-supervised pretraining method designed to better model the interactions of cells and layout information in scanned document images.", "The overall
framework of StructuralLM is shown in Figure 2.", "Our approach is inspired by LayoutLM (Xu et al., 2019), but differs from it in three ways.", "First, we use cell-level 2D-position embeddings to model the layout information of cells rather than word-level 2D-position embeddings.", "We also introduce a novel training objective, the cell position classification, which predicts the position of a cell depending only on the positions of surrounding cells and the semantic relationships between them.", "Finally, StructuralLM retains the 1D-position embeddings to model the positional relationship between tokens from the same cell, and removes the image embeddings in LayoutLM that are only used in the downstream tasks.", "The architecture overview of StructuralLM is shown in Figure 2.", "To take advantage of existing pre-trained models and adapt to document image understanding tasks, we use the BERT (Devlin et al., 2019) architecture as the backbone.", "The BERT model is an attention-based bidirectional language modeling approach.", "It has been verified that the BERT model shows effective knowledge transfer from self-supervised NLP tasks with a large-scale pre-training corpus.", "Based on this architecture, we propose to utilize the cell-level layout information from document images and incorporate it into the Transformer encoder.", "First, given a set of tokens from different cells and the layout information of the cells, the cell-level input embeddings are computed by summing the corresponding word embeddings, cell-level 2D-position embeddings, and original 1D-position embeddings. [Figure 2: The overall framework of StructuralLM.]", "Then, these input embeddings are passed through a bidirectional Transformer encoder that can generate contextualized representations with an attention mechanism.", "Given document images, we use an OCR tool to recognize text and serialize the cells (bounding boxes) from top-left to bottom-right.", "Each document image is represented as a sequence of cells $\{c_1, \dots, c_n\}$, and each cell is composed of a sequence of words $c_i = \{w_i^1, \dots, w_i^m\}$.", "Given the sequences of cells and words, we first introduce the method of cell-level input embedding.", "Cell-level Layout Embedding.", "Unlike the position embedding that models the word position in a sequence, the 2D-position embedding aims to model the relative spatial position in a document image.", "To represent the spatial position of cells in scanned document images, we consider a document page as a coordinate system with the top-left origin.", "In this setting, the cell (bounding box) can be precisely defined by (x0, y0, x1, y1), where (x0, y0) corresponds to the top-left position, and (x1, y1) represents the bottom-right position.", "Therefore, we add two cell-level position embedding layers to embed x-axis features and y-axis features separately.", "The words $\{w_i^1, \dots, w_i^m\}$ in the i-th cell $c_i$ share the same 2D-position embeddings, which is different from the word-level 2D-position embedding in LayoutLM.", "As shown in Figure 2, the input tokens with the same color background are from the same cell, and the corresponding 2D-positions are also the same.", "In this way, StructuralLM can not only learn the layout information of cells but also know which words are from the same cell, which helps obtain the contextual representation of cells.", "In addition, we keep the classic 1D-position embeddings to preserve the positional relationship of the tokens within the same cell.", "Finally,
the cell-level layout embeddings are computed by summing the four 2D-position embeddings and the classic 1D-position embeddings.", "Input Embedding.", "Given a sequence of cells $\{c_1, \dots, c_n\}$, we use WordPiece (Wu et al., 2016) to tokenize the words in the cells.", "The length of the text sequence is limited to ensure that the length of the final sequence is not greater than the maximum sequence length $L$.", "The final cell-level input embedding is the sum of the three embeddings.", "The word embedding represents the word itself, the 1D-position embedding represents the token index, and the cell-level 2D-position embedding models the relative spatial position of cells in a document image.", "We adopt two self-supervised tasks during the", "pre-training stage, which are described as follows.", "Masked Visual-Language Modeling.", "We use Masked Visual-Language Modeling (MVLM) (Xu et al., 2019) to make the model learn cell representations from the clues of cell-level 2D-position embeddings and text embeddings.", "We randomly mask some of the input tokens but keep the corresponding cell-level position embeddings, and then the model is pre-trained to predict the masked tokens.", "With the cell-level layout information, StructuralLM can know which words surrounding the masked token are in the same cell and which are in adjacent cells.", "In this way, StructuralLM not only utilizes the corresponding cell-level position information but also understands the cell-level contextual representation.", "Therefore, compared with the MVLM in LayoutLM, StructuralLM makes use of the cell-level layout information and predicts the masked tokens more accurately.", "We will compare the performance of the MVLM with cell-level layout embeddings and word-level layout embeddings respectively in Section 3.5.", "Cell Position Classification.", "In addition to the MVLM, we propose a new Cell Position Classification (CPC) task to model the relative spatial position of cells in a document.", "Previous models represent the layout information at the bottom of the Transformer, but this layout signal may be weakened at the top of the Transformer.", "Therefore, we introduce the cell position classification task so that StructuralLM can model the cell-level layout information from the bottom up.", "Given a set of scanned documents, this task aims to predict where the cells are in the documents.", "First, we split them into $N$ areas of the same size.", "Then we determine the area to which each cell belongs through the center 2D-position of the cell.", "Meanwhile, some cells are randomly selected, and the 2D-positions of tokens in the selected cells are replaced with (0, 0, 0, 0).", "In this way, StructuralLM is capable of learning the interactions between cells and layout.", "During the pre-training, a classification layer is built above the encoder outputs.", "This layer predicts a label in $[1, N]$ indicating the area where the selected cell is located, and computes the cross-entropy loss.", "Since the MVLM and CPC are performed simultaneously, cells with masked tokens are not selected for the CPC task.", "This ensures that the model can still utilize cell-level layout information when performing the MVLM task.", "We will compare the performance of different $N$ in Section 3.1.", "Pre-training.", "StructuralLM is pre-trained with the two pre-training tasks and we add the two task losses with equal weights.", "We will compare the performance of MVLM and MVLM+CPC in Section 3.5.",
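As a concrete illustration of the embedding summation just described, the following PyTorch sketch embeds the four bounding-box coordinates and the 1D token position and sums them. Sharing one table for x0/x1 and one for y0/y1, and the sizes shown, are assumptions for illustration, not the released implementation.

```python
# Hedged sketch of the cell-level layout embedding: four coordinate
# embeddings plus the classic 1D-position embedding, summed per token.
import torch
import torch.nn as nn

class CellLayoutEmbedding(nn.Module):
    def __init__(self, hidden_size: int = 1024, coord_buckets: int = 1001,
                 max_position: int = 512):
        super().__init__()
        self.x_emb = nn.Embedding(coord_buckets, hidden_size)   # assumed shared by x0, x1
        self.y_emb = nn.Embedding(coord_buckets, hidden_size)   # assumed shared by y0, y1
        self.pos_emb = nn.Embedding(max_position, hidden_size)  # classic 1D positions

    def forward(self, boxes: torch.Tensor, positions: torch.Tensor) -> torch.Tensor:
        # boxes: (batch, seq_len, 4) integer (x0, y0, x1, y1) per token; every
        # token of a cell shares the same box. positions: (batch, seq_len).
        x0, y0, x1, y1 = boxes.unbind(dim=-1)
        return (self.x_emb(x0) + self.y_emb(y0)
                + self.x_emb(x1) + self.y_emb(y1)
                + self.pos_emb(positions))
```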
"The pre-trained StructuralLM model is fine-tuned on three document image understanding tasks, each of which contains form images.", "These three tasks are form understanding, document visual question answering, and document image classification.", "For the form understanding task, StructuralLM predicts B, I, E, S, O tags for each token, and then uses sequential labeling to find the four types of entities: question, answer, header, or other.", "For the document visual question answering task, we treat it as an extractive QA task and build a token-level classifier on top of token representations, as is usually done in Machine Reading Comprehension (MRC) (Rajpurkar et al., 2016; Wang et al., 2018).", "For the document image classification task, StructuralLM predicts the class labels using the representation of the [CLS] token.", "Pre-training Dataset.", "Following LayoutLM, we pre-train StructuralLM on the IIT-CDIP Test Collection 1.0 (Lewis et al., 2006).", "It is a large-scale scanned document image dataset, which contains more than 6 million documents, with more than 11 million scanned document images.", "The pretraining dataset (IIT-CDIP Test Collection) contains only plain text, missing the corresponding bounding boxes.", "Therefore, we need to reprocess the scanned document images to obtain the layout information of cells.", "Like the pre-processing method of LayoutLM, we similarly process the dataset using Tesseract (https://github.com/tesseract-ocr/tesseract), an open-source OCR engine.", "We normalize the actual coordinates to integers in the range from 0 to 1,000, and an empty bounding box (0, 0, 0, 0) is attached to the special tokens [CLS], [SEP] and [PAD], which is similar to (Devlin et al., 2019).", "Implementation Details.", "StructuralLM is based on the Transformer, which consists of a 24-layer encoder with 1024 embedding/hidden size, 4096 feed-forward filter size, and 16 attention heads.", "To take advantage of existing pre-trained models and adapt to document image understanding tasks, we initialize the weights of the StructuralLM model with the pre-trained RoBERTa (Liu et al., 2019) large model, except for the 2D-position embedding layers.", "[Table 1: Model accuracy (Precision / Recall / F1; Parameters) on the FUNSD test set. BERT-BASE (Devlin et al., 2019): 0.5469 / 0.6710 / 0.6026 (110M); RoBERTa-BASE (Liu et al., 2019): 0.6349 / 0.6975 / 0.6648 (125M); BERT-LARGE: 0.6113 / 0.7085 / 0.6563 (349M); RoBERTa-LARGE: 0.6780 / 0.7391 / 0.7072 (355M); BROS (Hong et al., 2021): 0.8056 / 0.8188 / 0.8121; LayoutLM-BASE (Xu et al., 2019): 0.7597 / 0.8155 / 0.7866 (113M); LayoutLM-LARGE: 0.7596 / 0.8219 / 0.7895 (343M); StructuralLM-LARGE: 0.8352 / 0.8681 / 0.8514 (355M).]", "Following Devlin et al.
(2019), for the masked visual-language model task, we select 15% of the input tokens for prediction.", "We replace these masked tokens with the [MASK] token 80% of the time, a random token 10% of the time, and an unchanged token 10% of the time.", "Then, the model predicts the corresponding token with the cross-entropy loss.", "For the bounding-box position classification task, we split the document image into $N$ areas of the same size, and then select 15% of the cells for prediction.", "We replace the 2D-positions of words in the masked cells with (0, 0, 0, 0) 90% of the time, and an unchanged position 10% of the time.", "StructuralLM is pre-trained on 16 NVIDIA Tesla V100 32GB GPUs for 480K steps, with each mini-batch containing 128 sequences of maximum length 512 tokens.", "The Adam optimizer is used with an initial learning rate of 1e-5 and a linear decay learning rate schedule.", "For the downstream tasks, we use a single Tesla V100 16GB GPU.", "Hyperparameter N.", "For the cell position classification task, we test the performance of StructuralLM using different values of the hyperparameter $N$ during pre-training.", "Considering that the complete pretraining takes too long, we pre-train StructuralLM for 100k steps with a single GPU card to compare the performance of different $N$.", "As shown in Figure 3, when $N$ is set to 16, StructuralLM obtains the highest F1 score on the FUNSD dataset.", "Therefore, we set $N$ to 16 during pre-training.", "We experiment with fine-tuning StructuralLM on several downstream document image understanding tasks, especially form understanding tasks.", "FUNSD (Jaume et al., 2019) is a dataset for form understanding.", "It includes 199 real, fully annotated, scanned forms with 9,707 semantic entities and 31,485 words.", "[Figure 3: F1 score of StructuralLM pre-training w.r.t. different hyperparameter N when fine-tuning on the FUNSD dataset.] The 199 scanned forms are", "split into 149 for training and 50 for testing.", "The FUNSD dataset is suitable for a variety of tasks; we fine-tune StructuralLM only on semantic entity labeling.", "Specifically, each word in the dataset is assigned a semantic entity label from a set of four predefined categories: question, answer, header, or other.", "Following the previous works, we also use the word-level F1 score as the evaluation metric.", "We fine-tune the pre-trained StructuralLM on the FUNSD training set for 25 epochs.", "We set the batch size to 4 and the learning rate to 1e-5.", "The other hyperparameters are kept the same as in pre-training.", "Table 1 presents the experimental results on the FUNSD test set.", "StructuralLM achieves better performance than all other pre-trained models.", "First, we compare the StructuralLM model with two SOTA text-only pre-trained models: BERT and RoBERTa (Liu et al., 2019).", "RoBERTa outperforms the BERT model by a large margin in both the BASE and LARGE settings.", "Compared with the text-only models, the text+layout model LayoutLM brings a significant performance improvement.", "The best performance is achieved by StructuralLM, with an improvement of 6 F1 points compared with [Table 2: Average Normalized Levenshtein Similarity (ANLS) on the DocVQA test set / Form&Table subset. BERT-BASE: 0.6372; RoBERTa-BASE: 0.6642; BERT-LARGE: 0.6745; RoBERTa-LARGE: 0.6952; LayoutLM-BASE: 0.6979 / 0.7012; LayoutLM-LARGE: 0.7259 / 0.7203; StructuralLM-LARGE: 0.8394 / 0.8610.]", "LayoutLM under the same model size.",
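The masking procedure described above (15% of tokens with the 80/10/10 split, and 15% of cells with their 2D-positions zeroed 90% of the time) can be sketched as follows. Tensor shapes, the -100 ignore-index convention, and the per-cell loop are illustrative assumptions.

```python
# Hedged sketch of the two masking schemes described above.
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, mask_prob=0.15):
    """MVLM-style masking: 15% of tokens; 80% [MASK] / 10% random / 10% kept."""
    labels = input_ids.clone()
    selected = torch.rand(input_ids.shape) < mask_prob
    labels[~selected] = -100                       # positions ignored by the loss
    masked = input_ids.clone()
    roll = torch.rand(input_ids.shape)
    masked[selected & (roll < 0.8)] = mask_token_id            # 80% -> [MASK]
    rand_ids = torch.randint(vocab_size, input_ids.shape)
    swap = selected & (roll >= 0.8) & (roll < 0.9)             # 10% -> random token
    masked[swap] = rand_ids[swap]                              # remaining 10% unchanged
    return masked, labels

def mask_cell_positions(boxes, cell_ids, cell_prob=0.15, zero_prob=0.9):
    """CPC-style masking: select 15% of cells; zero their 2D-positions 90% of the time."""
    boxes = boxes.clone()
    for cell in cell_ids.unique():
        if torch.rand(()) < cell_prob and torch.rand(()) < zero_prob:
            boxes[cell_ids == cell] = 0            # replace box with (0, 0, 0, 0)
    return boxes
```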
"All the LayoutLM models compared in this paper are initialized by RoBERTa.", "By consistently outperforming the pre-training methods, StructuralLM confirms its effectiveness in leveraging cell-level layout information for form understanding.", "DocVQA (Mathew et al., 2020) is a VQA dataset in the scanned document understanding field.", "The objective of this task is to answer questions asked on a document image.", "The images provided are sourced from the documents hosted at the Industry Documents Library, maintained by the UCSF.", "It consists of 12,000 pages from a variety of documents including forms, tables, etc.", "These pages are manually labeled with 50,000 question-answer pairs, which are split into the training set, validation set and test set with a ratio of about 8:1:1.", "The dataset is organized as a set of triples (page image, questions, answers).", "The organizers provide the OCR results of the page images, and using other OCR tools is also permitted.", "Our experiment is based on the official OCR results.", "The task is evaluated using an edit-distance-based metric, ANLS (average normalized Levenshtein similarity).", "Results on the test set are provided by the official evaluation site.", "We fine-tune the pre-trained StructuralLM on the DocVQA train set and validation set for 5 epochs.", "We set the batch size to 8 and the learning rate to 1e-5.", "Table 2 shows the Average Normalized Levenshtein Similarity (ANLS) scores on the DocVQA test set.", "We again compare the StructuralLM model with the text-only models and the text-layout model.", "Compared with LayoutLM, StructuralLM achieves an improvement of over 11 ANLS points under the same model size.", "In addition, we also compare the scores on the Form&Table subset of the test set. [Table 3: Classification accuracy on the RVL-CDIP test set. BERT-BASE: 89.81% (110M); RoBERTa-BASE: 90.06% (125M); BERT-LARGE: 89.92% (349M); RoBERTa-LARGE: 90.11% (355M); VGG-16: 90.97%; Stacked CNN Single: 91.11%; Stacked CNN Ensemble: 92.21%; InceptionResNetV2: 92.63%; LadderNet: 92.77%; Multimodal Single: 93.03%; Multimodal Ensemble: 93.07%; LayoutLM-BASE: 94.42% (113M); LayoutLM-LARGE: 94.43% (390M); StructuralLM-LARGE: 96.08% (355M).]", "StructuralLM achieves an improvement of over 14 ANLS points, which shows that StructuralLM learns form and table understanding better.", "Finally, we evaluate the document image classification task using the RVL-CDIP dataset (Harley et al., 2015).", "It consists of 400,000 grayscale images in 16 classes, with 25,000 images per class.", "There are 320,000 images for the training set, 40,000 images for the validation set, and 40,000 images for the test set.", "A multi-class single-label classification task is defined on RVL-CDIP, with classes including letter, form, invoice, etc.", "The evaluation metric is the overall classification accuracy.", "Text and layout information is extracted by Tesseract OCR.", "Unlike natural images, document images consist of text in a variety of layouts.", "As shown in Table 3, image-based classification models (Afzal et al., 2017; Das et al., 2018; Szegedy et al., 2017) with pre-training perform much better than the text-based models, which illustrates that text information alone is not sufficient for this task; layout information is still needed.", "The experimental results show that the text-layout model LayoutLM outperforms the image-based approaches and text-based models.", "Incorporating the cell-level layout [Table 4:
Ablation tests of StructuralLM on the FUNSD form understanding task. StructuralLM: 0.8514; w/o cell-level layout embedding: 0.8024; w/o cell position classification: 0.8125; w/o pre-training: 0.7072.]", "information, StructuralLM achieves a new state-of-the-art result with an improvement of over 1.5 accuracy points.", "We conduct ablation studies to assess the individual contribution of every component in StructuralLM.", "Table 4 reports the results of full StructuralLM and its ablations on the FUNSD form understanding test set.", "First, we evaluate how much the cell-level layout embedding contributes to form understanding by removing it from StructuralLM pre-training.", "This ablation results in a drop in F1 score from 0.8514 to 0.8024, demonstrating the important role of the cell-level layout embedding.", "To study the effect of the cell position classification task in StructuralLM, we ablate it, and the F1 score significantly drops from 0.8514 to 0.8125.", "Finally, we study the significance of full StructuralLM pretraining.", "The over 15% performance degradation that results from ablating pre-training clearly demonstrates the power of StructuralLM in leveraging an unlabeled corpus for downstream form understanding tasks.", "Actually, after ablating the cell position classification, the biggest difference between StructuralLM and LayoutLM is the use of cell-level rather than word-level 2D-position embeddings.", "The results show that StructuralLM with cell-level 2D-position embeddings performs better than LayoutLM with word-level position embeddings, with an improvement of over 2 F1 points (from 0.7895 to 0.8125).", "Furthermore, we compare the performance of the MVLM with cell-level layout embeddings and word-level layout embeddings respectively.", "As shown in Figure 4, the results show that under the same pre-training settings, the MVLM training loss with cell-level 2D-position embeddings converges to a lower value.", "The motivation behind StructuralLM is to jointly exploit cell and layout information across scanned document images.", "As stated above, compared with LayoutLM, StructuralLM improves interactions between cells and layout information.", "To verify this, we show some examples of the output of LayoutLM and StructuralLM on the FUNSD test set, as shown in Figure 5.", "Take the image on the top-left of Figure 5 as an example.", "In this example, the model needs to label Call Connie Drath or Carol Musgrave at 800/424-9876 with the Answer entity.", "The LayoutLM prediction misses at 800/424-9876.", "Actually, all the tokens of this Answer entity are from the same cell.", "Therefore, StructuralLM predicts the correct result with the understanding of cell-level layout information.", "These examples show that StructuralLM predicts the entities more accurately with the cell-level layout information.", "The same results can be observed in Figure 5.", "Statistical machine learning approaches (Marinai et al., 2005; Shilman et al., 2005) became the mainstream for document segmentation tasks during the past decade.", "(Shilman et al., 2005) consider the layout information of a document as a parsing problem.", "They use a grammar-based loss function to globally search for the optimal parsing tree, and utilize a machine learning approach to select features and train all parameters during the parsing process.", "In addition, most efforts have been devoted to the recognition of isolated handwritten and printed characters, with widely recognized successful results.", "For machine learning approaches (Shilman
et al., 2005; Wei et al., 2013), it is usually time-consuming to design features manually and difficult to obtain high-level abstract semantic context. [Figure 5: Examples of the output of LayoutLM and StructuralLM on the FUNSD dataset; the | divider means that the two phrases are independent labels.]", "In addition, these methods usually relied on visual cues but ignored textual information.", "Nowadays, deep learning methods have become the mainstream for many machine learning problems (Yang et al., 2017; Borges Oliveira and Viana, 2017; Katti et al., 2018; Soto and Yoo, 2019).", "(Yang et al., 2017) propose a pixel-by-pixel classification approach to solve the document semantic structure extraction problem.", "Specifically, they propose a multimodal neural network that considers visual and textual information, and this work is an end-to-end approach.", "(Katti et al., 2018) first propose a fully convolutional encoder-decoder network to predict a segmentation mask and bounding boxes.", "In this way, the model significantly outperforms approaches based on sequential text or document images.", "In addition, (Soto and Yoo, 2019) incorporate contextual information into the Faster R-CNN model.", "They leverage the inherently localized nature of article contents to improve region detection performance.", "In recent years, self-supervised pre-training has achieved great success in natural language understanding (NLU) and a wide range of NLP tasks (Devlin et al., 2019; Liu et al., 2019; Wang et al., 2019).", "(Devlin et al., 2019) introduced BERT, a new language representation model, which is designed to pre-train deep bidirectional representations on a large-scale unsupervised corpus.", "It can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of NLP tasks.", "Inspired by the development of pre-trained language models in various NLP tasks, recent studies on document image pretraining (Zhang et al., 2020; Xu et al., 2019) have pushed the limits of a variety of document image understanding tasks, which learn the interaction between text and layout information across scanned document images.", "(Xu et al., 2019) propose LayoutLM, which is a simple but effective pre-training method of text and layout for document image understanding tasks.", "By incorporating the visual information into the fine-tuning stage, LayoutLM achieves new state-of-the-art results in several downstream tasks.", "(Hong et al., 2021) propose a pre-trained language model that represents the semantics of spatially distributed texts.", "Different from previous pre-training methods on 1D text, BROS is pre-trained on large-scale semi-structured documents with a novel area-masking strategy while efficiently including the spatial layout information of input documents.", "In this paper, we propose StructuralLM, a novel structural pre-training approach on large unlabeled documents.", "It is built upon an extension of the Transformer encoder, and jointly exploits cell and layout information from scanned documents.", "Different from previous pre-trained models, StructuralLM uses cell-level 2D-position embeddings with tokens in the cell sharing the same 2D-position.", "This makes StructuralLM aware of which words are from the same cell, and thus enables the model to derive representations for the cells.", "We propose a new pre-training objective called cell position classification.", "In this way, StructuralLM is capable of learning the interactions between cells and layout.", "We conduct experiments on three publicly available benchmark datasets, and
StructuralLM outperforms strong baselines and achieves new state-of-the-art results in the downstream tasks." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "method", "objective", "abstain", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "objective" ]
[ "Assessing an AI agent that can converse in human language and understand visual content is challenging.", "Generation metrics, such as BLEU scores favor correct syntax over semantics.", "Hence a discriminative approach is often used, where an agent ranks a set of candidate options.", "The mean reciprocal rank (MRR) metric evaluates the model performance by taking into account the rank of a single human-derived answer.", "This approach, however, raises a new challenge: the ambiguity and synonymy of answers, for instance, semantic equivalence ( e.g ., yeah' and yes').", "To address this, the normalized discounted cumulative gain (NDCG) metric has been used to capture the relevance of all the correct answers via dense annotations.", "However, the NDCG metric favors the usually applicable uncertain answers such as I don't know.' Crafting a model that excels on both MRR and NDCG metrics is challenging (Murahari et al., 2020).", "Ideally, an AI agent should answer a human-like reply and validate the correctness of any answer.", "To address this issue, we describe a two-step non-parametric ranking approach that can merge strong MRR and NDCG models.", "Using our approach, we manage to keep most MRR state-of-the-art performance (70.41% vs . 71.24%) and the NDCG state-of-the-art performance (72.16% vs . 75.35%).", "Moreover, our approach won the recent Visual Dialog 2020 challenge.", "Source code is available at https: //github.com/idansc/mrr-ndcg .", "Das et al. (2017) introduced the task of Visual Dialog, which requires an agent to converse about visual input.", "Evaluating visually aware conversation should examine both linguistic properties and visual reasoning.", "Analysis of generative metrics for dialog often shows no correlation with human judgments (Liu et al., 2016).", "Hence, to evaluate the correctness of the candidate answers, a retrieval approach is preferred.", "Two metrics are standard, Question: what is the nightstand made of ?", "MRR and NDCG.", "The MRR metric focuses on a single human-derived ground-truth answer.", "Despite preferring the more human-like answer, the metric ignores many correct candidate answers.", "Differently, the NDCG considers the rank of all the correct answers.", "The metric relies on dense annotation, where three annotators were asked to mark all the correct candidate answers.", "However, the candidate answers are generated plausible answers.", "The analysis shows that the NDCG metric favors uncertain, generally correct answers, such as not sure (Murahari et al., 2020; Qi et al., 2020).", "Prior work in visual dialog focused on a single metric.", "Ideally, an AI agent should answer humanlike and detailed reply (the MRR metric) and be able to validate the correctness of any answer (the NDCG metric).", "However, crafting a model that excels in both metrics is challenging (Murahari et al., 2020).", "To this end, we propose principals to ensemble the rankings of strong MRR and NDCG models.", "Our approach is to find a minimal set that is likely to hold the human-derived answer.", "This permits ranking the rest of the candidates according to the NDCG model.", "Our approach won the recent Visual Dialog 2020 challenge and achieved strong performance on both the MRR and the NDCG metrics simultaneously.", "Visual conversation evaluation: Early attempts to marry conversation with vision used street scene images, and binary questions (Geman et al., 2015).", "While binary answers are easy to verify, such an approach is limiting for an AI agent.", "On the other hand, 
analysis of generative metrics for dialog often shows no correlation with human judgments (Liu et al., 2016).", "Intuitively, metrics like BLEU rely on word overlap with the ground-truth answer and often miss synonyms or the subjective nature of answers.", "More importantly, generative metrics are geared toward textual assessment rather than visual reasoning, which results in models mainly relying on textual cues (Schwartz et al., 2019a).", "Malinowski and Fritz (2014) suggest the Wu-Palmer similarity metric, which calculates similarity based on the depth of two words in the WordNet taxonomy (Miller, 1995).", "A different approach, suggested in the VQA dataset, focuses only on brief, mostly one-word answers (Antol et al., 2015).", "In this setup, the task turns into classification of popular answers, alleviating many text-generation challenges.", "Notably, VQA requires 3 out of 10 annotators to agree on the answer, which is robust to inter-person variation.", "Still, accuracy ignores the reasoning process.", "Hudson and Manning (2019) propose GQA, which extends the accuracy metric and uses a scene graph for both question generation and evaluation.", "Following, Das et al. (2017) propose the VisDial dataset for the visual dialog task, which formulates multiple image-language interactions via a dialog.", "Concurrently, de Vries et al. (2017) propose GuessWhat, a goal-driven dialog dataset for object identification.", "Different from VQA and goal-driven dialogs, the VisDial answers are detailed and more human-like.", "For instance, in Fig. 1, the answer is 'Can't tell... cloth', while a VQA answer would be 'cloth'.", "Therefore, metrics that require exact matching are no longer suitable.", "Instead, each question is accompanied by 100 candidate answers.", "Consequently, the metric has been shifted from accuracy to retrieval-based metrics, e.g", "., MRR and NDCG.", "Prior works focus on optimizing a single metric (Guo et al., 2019; Jiang et al., 2020; Hu et al., 2017; Gan et al., 2019).", "Differently, Murahari et al. (2020) attempt to optimize both metrics with a joint loss.", "Still, a dedicated single-metric model is superior.", "Instead, we propose principles to ensemble two dedicated models, one for NDCG and one for MRR.", "Our approach allows most of the MRR and NDCG performance to be preserved simultaneously.", "Visual dialog models: Various approaches were proposed to solve the Visual Dialog task.", "Most of them focus on dialog history reasoning per interaction.", "Serban et al. (2017) propose hierarchical encoding of the history.", "Seo et al. (2017) introduce a memory network based on attention, which also addresses co-referential issues.", "Kottur et al. (2018) focus on visual co-reference.", "Jain et al. (2019) concatenate representations of all the cues (e.g., image, question, history, and caption) per candidate answer.", "Zheng et al. (2019) employ graph structure learning.", "Schwartz et al. (2019b) propose a model, namely Factor Graph Attention (FGA), that lets all entities (e.g., question words, image regions, answer candidates, and caption words) interact to infer an attention map for each modality.", "An ensemble of five FGA models achieves the state-of-the-art MRR performance.", "However, FGA optimizes using the sparse annotations, i.e", "., the human-derived answer.", "Murahari et al.
(2020) recently propose the Large-Scale (LS) model, which pre-trains on related vision-language datasets, e.g", "., Conceptual Captions and Visual Question Answering (Sharma et al., 2018; Antol et al., 2015).", "Concurrently, Wang et al. (2020) leverage the pretrained BERT language models, and Nguyen et al. (2020) propose a lightweight Transformer that handles the interplay between many modalities.", "The three methods mentioned above finetune using the dense annotation (i.e., human assessment of all the candidates), resulting in a substantial improvement on the NDCG metric.", "Importantly, Murahari et al.", "find that finetuning a model for NDCG hurts MRR performance.", "This work demonstrates that re-ranking an MRR model (e.g., FGA) and an NDCG model (e.g., LS) with simple principles keeps most of the MRR and NDCG performance.", "The MRR metric depends on a single human-derived answer.", "Hence, given that this answer is ranked highly, the remaining candidates can be ranked according to the NDCG model.", "In the following, we describe two steps:", "(i) the MRR step, responsible for keeping the human-derived answer ranked high, and", "(ii) the NDCG step, responsible for ranking the remaining candidates based on the NDCG model.", "We are given a set of dialog questions $\{(q, C_q)_i\}_{i=1}^{d}$, where $d$ is the dataset size, $q$ is a dialog question, and $C_q = \{c_{q,j}\}_{j=1}^{100}$ are the corresponding candidates.", "The MRR metric, i.e", "., the inverse harmonic mean of rank, is defined as $\mathrm{MRR} = \frac{1}{d} \sum_{i=1}^{d} \frac{1}{r_i}$ (1), where $r_i$ is the rank of the human response for the $i$-th dialog question.", "The DCG, i.e", "., the discounted cumulative gain over the $K$ correct answers, is defined as $\mathrm{DCG}_K = \sum_{i=1}^{K} \frac{s_i}{\log_2(i+1)}$ (2), where $s_i$ is a relevance score representing the fraction of annotators that marked the candidate at position $i$ as correct.", "We normalize by the ideal $\mathrm{DCG}_K$ score ($\mathrm{IDCG}_K$), i.e", "., $\mathrm{NDCG}_K = \mathrm{DCG}_K / \mathrm{IDCG}_K$.",
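The two metrics in Eqs. (1)-(2) are standard and can be computed directly. The following small Python sketch implements them; the example inputs are illustrative only.

```python
# Self-contained sketch of the evaluation metrics defined in Eqs. (1)-(2).
import math

def mean_reciprocal_rank(human_ranks):
    # Eq. (1): average of 1/r_i over all dialog questions.
    return sum(1.0 / r for r in human_ranks) / len(human_ranks)

def ndcg_at_k(relevances, k):
    # Eq. (2): DCG_K over the ranked relevance scores s_i, normalized by
    # the DCG of the ideal (descending) ordering.
    def dcg(scores):
        return sum(s / math.log2(i + 2) for i, s in enumerate(scores[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

print(mean_reciprocal_rank([1, 2, 5]))        # (1 + 0.5 + 0.2) / 3 = 0.5667
print(ndcg_at_k([0.5, 1.0, 0.0, 0.5], k=4))   # toy relevance scores
```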
"We denote the set of MRR models as $\mathcal{M} = \{M_1, \dots, M_{n_m}\}$, where $n_m$ is the number of MRR models.", "Each MRR model is built by altering the initial conditions.", "We denote the NDCG model as $N$.", "We define an operator $\mathrm{T}(M, n, q)$ that returns model $M$'s top $n$ responses given a question $q$.", "Next, we describe the MRR step that aims to keep the MRR score.", "The purpose of the MRR step is to find a minimal candidate set $C_{\mathrm{MRR},q}$ that is likely to contain the human-derived answer given a question $q$.", "We build this set as a union of three sets: $C_{\mathrm{MRR},q} = T_q \cup N_q \cup H_q$ (3), where $T_q$ is a set of first-ranked candidates according to the MRR models, $N_q$ is a set of candidates ranked high by both the MRR and NDCG models, and $H_q$ is a set of high-certainty candidates agreed upon by all the MRR models.", "All sets are conditioned on the question $q$.", "In the following, we formally define those sets.", "One of the strongest signals that a candidate is the human-derived answer is being an MRR model's top answer.", "However, for many subjective questions, the MRR model is not certain.", "We found that in those cases, the top answers often vary between different MRR models.", "Thus, to verify the top candidate's certainty, we require agreement among the MRR models.", "Let $q$ be a dialog question; we define the high-certainty set as $H_q = \{c \mid \forall M \in \mathcal{M}: c \in \mathrm{T}(M, h, q)\}$ (4), where $h \in \mathbb{R}$ is a hyperparameter.", "Intuitively, a low $h$ results in higher certainty.", "Next, we add the MRR models' first-ranked answers.", "The MRR metric prioritizes the first-ranked answer (see", "Eq. (1)).", "This property suits the nature of dialog models that reply with a single response.", "Consequently, we keep the first responses of the MRR models.", "Let $q$ be a dialog question; the top-answers set is defined as $T_q = \{c \mid \exists M \in \mathcal{M}: c \in \mathrm{T}(M, t, q)\}$ (5), where $t \in \mathbb{R}$ is a hyperparameter.", "When the NDCG model and the MRR model agree that a candidate is likely to be correct, it implies that both the NDCG and MRR metrics gain by ranking this candidate high.", "Thus, we want to rank it high.", "We note that the MRR set is ranked first, so we include these candidates in the MRR set.", "Let $q$ be a dialog question; the NDCG-agreement set is defined as $N_q = \{c \mid \exists M \in \mathcal{M}: c \in \mathrm{T}(N, nn, q) \cap \mathrm{T}(M, nm, q)\}$ (6), where $nn, nm \in \mathbb{R}$ are hyperparameters that indicate relevancy to NDCG and MRR, respectively.", "I.e", "., as $nn$ increases, we may include more relevant candidates according to the NDCG model.", "Up until this stage we have built a minimal set $C_{\mathrm{MRR},q}$ that is likely to hold the human-derived answer.", "In the following we describe how we rank this set.", "Let $r_{M_i,c,q}$ denote the rank, according to $M_i \in \mathcal{M}$, of candidate $c$ for a question $q$.", "We compute the MRR rank of candidate $c \in C_{\mathrm{MRR},q}$ via the geometric mean: $r_{\mathrm{MRR},c,q} = \prod_{i=1}^{n_m} r_{M_i,c,q}$.", "In this step, we rank the remaining candidates $C_{\mathrm{NDCG},q} = C_q \setminus C_{\mathrm{MRR},q}$.", "We assume the correct MRR answer is in $C_{\mathrm{MRR}}$.", "Thus, we rank the remaining candidates according to the NDCG model via the geometric mean: $r_{\mathrm{NDCG},c,q} = (r_{N,c,q})^{p} \cdot r_{M,c,q}$, where $M \in \mathcal{M}$ is the most accurate MRR model, and $p \in \mathbb{R}$ is a calibration hyperparameter that controls the trade-off between MRR and NDCG.", "To conclude, let $q$ be a dialog question and $C_q$ the corresponding candidates.", "We first find $C_{\mathrm{MRR},q}$, and rank the set according to $r_{\mathrm{MRR},c,q}$.", "We then rank the remaining candidates according to $r_{\mathrm{NDCG},c,q}$.",
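The two-step procedure in Eqs. (3)-(6) is non-parametric and only needs rankings, so it can be sketched in a few lines of Python. Everything below is an illustrative reconstruction of the method as described (in particular, the use of full candidate rankings as lists and the choice of the first MRR model as the most accurate one are assumptions).

```python
# Hedged sketch of the two-step ranking, following Eqs. (3)-(6).
from math import prod

def top(ranking, n):
    """The operator T(M, n, q): the model's top-n candidates for a question."""
    return set(ranking[:n])

def two_step_rank(mrr_rankings, ndcg_ranking, h=3, t=1, nn=5, nm=10, p=3):
    def rank_of(ranking, c):
        return ranking.index(c) + 1  # 1-based rank of candidate c

    H = set.intersection(*(top(r, h) for r in mrr_rankings))   # Eq. (4): agreement
    T = set.union(*(top(r, t) for r in mrr_rankings))          # Eq. (5): top answers
    N = top(ndcg_ranking, nn) & set.union(*(top(r, nm) for r in mrr_rankings))  # Eq. (6)
    c_mrr = H | T | N                                          # Eq. (3)
    best_mrr = mrr_rankings[0]  # assumption: the most accurate MRR model
    # MRR step: rank the minimal set by the product of its MRR-model ranks.
    first = sorted(c_mrr, key=lambda c: prod(rank_of(r, c) for r in mrr_rankings))
    # NDCG step: rank the remainder by the calibrated NDCG/MRR rank product.
    rest = sorted(set(ndcg_ranking) - c_mrr,
                  key=lambda c: rank_of(ndcg_ranking, c) ** p * rank_of(best_mrr, c))
    return first + rest
```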
"We show our results on the VisDial v1.0 dataset, where 123,287 images are used for training, 2,000 images for validation, and 8,000 images for testing (Das et al., 2017).", "Each image is associated with ten questions, and each question has 100", "corresponding answer candidates.", "We use two MRR models (i.e., $n_m = 2$), FGA (Schwartz et al., 2019b) and an ensemble of LS (Murahari et al., 2020) with FGA.", "We use LS(CE) as the NDCG model.", "We set $h = 3$, $t = 1$, $nn = 5$, $nm = 10$, and $p = 3$.", "We tune these parameters using the validation set.", "Comparison to state-of-the-art: In Tab.", "1 we compare our method to naive ensembles and previous baselines.", "We first ensemble LS's output with FGA's output.", "By combining them, we achieve the new MRR state-of-the-art (71.24% vs. 69.37%).", "Next, we build a naive ensemble of the MRR model and the NDCG model.", "We do so by adding the MRR ensemble scores (denoted by $S_M$) and the LS(CE) scores (denoted by $S_N$) as follows: $\alpha S_M + (1 - \alpha) S_N$, where $\alpha \in \mathbb{R}$ calibrates the trade-off between MRR and NDCG performance.", "We show in Fig. 2 an analysis of different $\alpha$ values on the validation set.", "In Tab.", "1, we report results for $\alpha =$", "0.8.", "Our two-step method outperforms the naive ensemble on both the MRR (70.41% vs. 68.78%) and NDCG (72.16% vs. 69.22%) metrics, despite lacking the output scores and requiring only rankings.", "We also compare our approach to previous baselines.", "Most methods use the sparse annotations, i.e", "., the human-derived answer, while MReal-BDAI, VD-BERT, and LS(CE) finetune using the dense annotations.", "Finetuning with the dense annotations tremendously boosts the NDCG performance but loses MRR performance.", "The MRR performance decline can be attributed to NDCG being biased toward uncertain answers.", "We also note that LS leverages large-scale image-text corpora.", "LS(CE+NSP) optimizes both the dense and sparse annotations but still suffers from a performance drop compared to metric-dedicated LS models, i.e", "., MRR (63.92% vs. 67.50%) and NDCG (68.08% vs. 74.47%).", "Unlike the methods mentioned above, our method re-ranks the candidates based on two distinct models, with", "two distinct steps, to keep the human-derived answer ranked high.", "In doing so, we achieve a good MRR performance (70.41% vs. 71.24%), yet notably with a limited NDCG drop (72.25% vs. 75.35%).", "This property came in handy in the recent Visual Dialog challenge, where the winners were picked based on both the NDCG and MRR evaluation metrics.", "Our method performs well on both metrics simultaneously and won the challenge.", "Ablation analysis: The MRR candidate set consists of different subsets.", "In Tab.", "2 we show the influence of each subset independently on the retrieval metrics.", "Further, omitting a subset harms the performance, i.e.,", "each component is essential to preserve both the MRR and NDCG metrics.", "We also report the average size of the MRR-candidate set, and the validation performance of the MRR model (i.e., 5xFGA) and the NDCG model (i.e", "., LS(CE)).", "In addition, we provide the results of the MRR ensemble, and the naive NDCG and MRR ensemble for $\alpha =$", "0.8.",
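For contrast with the two-step method, the naive score-level baseline above is a single convex combination of the two models' scores. A trivial sketch, assuming the raw per-candidate score arrays are available:

```python
# The naive score-level ensemble used as a baseline above.
import numpy as np

def naive_ensemble(s_mrr: np.ndarray, s_ndcg: np.ndarray, alpha: float = 0.8) -> np.ndarray:
    # alpha * S_M + (1 - alpha) * S_N: alpha trades MRR off against NDCG.
    return alpha * s_mrr + (1.0 - alpha) * s_ndcg
```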
"In Fig. 3, we examine how the NDCG and MRR metrics are affected by modifying one hyperparameter while maintaining the others.", "In the first figure from the left, we alter $c$.", "The higher $c$ is, the higher the agreement we require between the MRR models, resulting in higher certainty for elements in the MRR set.", "Because the MRR models are responsible for the MRR set ranking, an MRR set that is too large hurts the NDCG metric.", "For the same reason, in the second image from the left, increasing $t$ significantly harms the NDCG performance.", "In the third figure from the left, we show that considering more candidates that both the NDCG and MRR models agree upon (i.e., increasing $nn$) helps both metrics' performance.", "However, adding too many candidates harms the NDCG metric.", "In the fourth image from the left, we show that the performance remains stable when $nm$ is larger than three.", "Last, in the fifth image from the left, we show the effect of changing $p$, which calibrates the trade-off between MRR and NDCG during the NDCG ranking step.", "We provide the ranked MRR candidate set and the next four NDCG candidates. The analysis reveals the answers' ambiguity and that the MRR candidate set mostly consists of certain responses.", "In addition, we highlight the candidates within each MRR candidate subset with different colors.", "Additional samples can be found in the appendix.", "We describe a non-parametric method to ensemble the candidate ranks of two strong MRR and NDCG models into a single ranking that excels on both NDCG and MRR.", "Intuitively, we use the MRR model for non-ambiguous questions with certain answers.", "The dense-annotation cue is more applicable to ambiguous questions than the sparse annotations.", "Thus, in the case of low certainty, our method relies almost entirely on the NDCG model.", "We hope the proposed principles can guide the community towards a parametric model that can employ answers' semantics to measure certainty.", "We thank Yftah Ziser, Itai Gat, Alexander Schwing and Tamir Hazan for useful discussions." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "result", "result", "result", "abstain", "abstain", "method", "abstain", "abstain", "objective", "other" ]
[ "Stance detection is an important task, which aims to classify the attitude of an opinionated text towards a given target.", "Remarkable success has been achieved when sufficient labeled training data is available.", "However, annotating sufficient data is labor-intensive, which establishes significant barriers for generalizing the stance classifier to the data with new targets.", "In this paper, we proposed a Semantic-Emotion Knowledge Transferring (SEKT) model for cross-target stance detection, which uses the external knowledge (semantic and emotion lexicons) as a bridge to enable knowledge transfer across different targets.", "Specifically, a semantic-emotion heterogeneous graph is constructed from external semantic and emotion lexicons, which is then fed into a graph convolutional network to learn multi-hop semantic connections between words and emotion tags.", "Then, the learned semantic-emotion graph representation, which serves as prior knowledge bridging the gap between the source and target domains, is fully integrated into the bidirectional long short-term memory (BiLSTM) stance classifier by adding a novel knowledge-aware memory unit to the BiLSTM cell.", "Extensive experiments on a large real-world dataset demonstrate the superiority of SEKT against the state-of-the-art baseline methods.", "The goal of stance detection is to automatically predict the attitude (i.e., favor , against , or none ) of an opinionated text towards a given target (Du et al., 2017).", "Recently, deep learning methods, such as convolutional neural network (CNN) and long short-term memory (LSTM) (Augenstein et al., 2016; Du et al., 2017), have dominated the study of stance detection.", "Impressive stance detection performances have been achieved when a large corresponding authors: { lixutao, yym } @hit.edu.cn number of labeled samples are available.", "However, obtaining rich annotated data is a time-consuming and labor-intensive process.", "Conventional stance detection methods are struggling to cope well with the data across targets.", "This motivates the studies of cross-target stance detection (Wei and Mao, 2019), which infers the attitude of the destination target by leveraging a large amount of annotated data from the source target.", "So far, several previous studies have been conducted for cross-target stance detection (Augen-stein et al., 2016; Xu et al., 2018; Wei and Mao, 2019).", "These methods leverage either common words or concept-level knowledge shared by different targets to bridge the knowledge gap across the different targets.", "Such models suffer from two issues when they are applied to cross-target stance detection in practice.", "First, stance detection often involves analyzing the texts from social media that are short and informal, making it difficult to extract domain-independent common words shared by different targets from the training data.", "Second, users may express their stance towards a given target in an implicit way.", "Thus, the existing concept-level based methods may fail to distinguish implicit stance-carrying terms and context information.", "To alleviate the aforementioned issues, we propose a semantic-emotion knowledge transferring (SEKT) model for cross-domain stance detection, which leverages external knowledge as a bridge between source and destination targets.", "The proposed model is motivated by the observation that the data with different targets usually shares certain common external knowledge that can be transferred from the source to destination 
targets.", "First , we build a semantic-emotion graph (SE-graph) from semantic-related and emotion-related lexicons, which incorporates external knowledge from both word-level and concept-level.", "In SE-graph, each node is either a word or an emotion tag, and the edge between each node pair indicates the co-occurrences of the two nodes in the lexicons.", "Second , a graph convolutional network (GCN) (Kipf and Welling, 2016) is employed to learn the graph representation that captures the multi-hop semantic connections between words or emotion tags rather than one-hop connection.", "Third , we extend the standard bidirectional LSTM (BiLSTM) classifier to fully integrate the external knowledge (SE-graph) by adding an additional knowledge-aware memory unit (KAMU) to the LSTM cell.", "KAMU is capable of controlling the influence of the external knowledge in learning the hidden state of each word.", "We construct a semantic-emotion heterogeneous graph from external semantic and emotion lexicons, and employ GCN to learn the semantic graph representation.", "The external knowledge enriches the representation learning of the text and target and can be used as a bridge to enable knowledge transfer across different targets.", "We extend the standard LSTM cell with an additional memory unit, effectively integrating external knowledge into the classifier for stance detection.", "We conduct extensive experiments on a large dataset expanded from SemEval-2016 Task 6 to verify the effectiveness of our model for cross-domain stance detection.", "The experimental results show that our model consistently outperforms the compared methods.", "Stance detection aims to infer the attitude of a text towards specific target expression, which is related to argument mining, fact-checking, and aspect-level sentiment analysis.", "Early stance detection methods were concentrated on debates (Thomas et al., 2006; Somasundaran and Wiebe, 2009; Walker et al., 2012).", "In recent years, mining users' stance from social media has attracted increasing attention due to its broad applications (Du et al., 2017; Dey et al., 2018; Wei et al., 2018).", "For example, Du et al. (2017) incorporated target-specific information into stance classification with an attention mechanism.", "Dey et al. (2018) proposed a two-phase RNN method, where the first phase is to filter the non-neutral text while the second phase is to classify the attitude.", "Wei et al. (2018) further extended the model to deal with multi-target stance detection and utilized a shared memory network to capture the stance related information towards multiple related targets.", "Sun et al. (2018) adopted a hierarchical attention method to construct text representation with various linguistic factors.", "There are also several studies being developed for cross-target stance detection problems, which can be divided into two classes.", "The first one mainly focuses on word-level transfer, which utilizes the common words shared by two targets to bridge the knowledge gap.", "For example, Augenstein et al. (2016) proposed a bidirectional conditional encoding method by incorporating the target to learn the target-specific words.", "Xu et al. 
"Xu et al. (2018) further utilized the self-attention mechanism to identify word importance.", "The second type of approach attempts to address this transfer learning problem with concept-level knowledge shared by two targets.", "For example, Wei and Mao (2019) proposed a Variational Transfer Network (VTN) method, which complements the commonly used knowledge by inferring the latent topics shared by the two targets.", "There are also plenty of studies that incorporate external resources, such as prior knowledge, grammar rules, and domain descriptions, into deep learning frameworks to address the data sparsity issue (Zhang et al., 2018; Dragoni and Petrucci, 2018; Zhang et al., 2019b; Hu et al., 2016).", "For example, Lei et al. (2018) integrated external knowledge in the word embedding layer.", "Margatina et al. (2019) combined external knowledge with the hidden layer acquired by an RNN.", "However, these methods ignored the relations between the external knowledge and the input context.", "Ma et al. (2018) developed a Sentic LSTM method, which contains an additional affective gate mechanism in the LSTM cell to assist in learning knowledge-aware context representations.", "We use X_s = {(x_i^s, p_i^s)}_{i=1}^{N_s} to denote the collection of labeled data in the source domain, where each x denotes the input text and p denotes the corresponding target.", "N_s represents the number of instances in X_s.", "Each sentence-target pair (x^s, p^s) ∈ X_s has a stance label y^s.", "Given an input sentence x^t and a corresponding target p^t in the target domain, this study aims to predict a stance label for the input sentence x^t towards the given target p^t by using the model learned with the labeled data X_s in the source domain.", "As illustrated in Figure 1, our model consists of two primary components: a semantic-emotion graph (SE-graph) network and a knowledge-enhanced BiLSTM network.", "First, we build the SE-graph from semantic-related and emotion-related lexicons, where a GCN is employed to learn a graph representation that captures the semantic connections between words or emotion tags through multi-hop connections.", "Then, we extend the BiLSTM classifier to fully integrate the SE-graph by adding a novel knowledge-aware memory unit (KAMU) to the LSTM cell.", "Next, we will introduce the main components of our model in detail.", "The data in different domains usually shares certain background knowledge that can possibly be transferred from the source domain to the target domain.", "Thus, we leverage external knowledge as a bridge between the source and target domains.", "To this end, we build a semantic-emotion knowledge graph (SE-graph) to represent the external knowledge that may contribute to cross-target stance detection.", "The SE-graph uses the words and emotion tags in the semantic and emotion lexicons as nodes, and constructs weighted edges between words or emotion tags based on their co-occurrence frequency.", "First, we use all the words from the semantic lexicon SenticNet (Cambria et al., 2018) as word-nodes and add edges between the semantic words that capture word-word semantic connections.", "Second, we assign emotion tags to the words in SenticNet by looking them up in the emotion lexicon EmoLex (Mohammad and Turney, 2013), and add edges between the words and emotion tags that capture word-tag connections.", "For example, for the word mad in SenticNet, its semantically related words from SenticNet are resent, malice, rage, and temper, and the corresponding emotion tags from EmoLex are #anger and #disgust.",
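A minimal sketch of the SE-graph construction just described, assuming the lexicon lookups (semantic_neighbors derived from SenticNet, emotion_tags derived from EmoLex) have already been loaded into plain dictionaries. The weighting shown (co-occurrence counts for word-word edges, a re-scaled constant for word-tag edges, as discussed in the following paragraph) is one plausible reading of the paper, not its released code.

```python
import networkx as nx

def build_se_graph(semantic_neighbors, emotion_tags, tag_weight_scale=0.1):
    """Build a semantic-emotion graph (SE-graph).

    semantic_neighbors: dict mapping a word to its semantically related
        words (e.g., derived from SenticNet).
    emotion_tags: dict mapping a word to its emotion tags
        (e.g., derived from EmoLex).
    tag_weight_scale: constant used to down-weight word-tag edges so that
        high-degree emotion-tag nodes do not dominate.
    """
    graph = nx.Graph()
    for word, neighbors in semantic_neighbors.items():
        for other in neighbors:
            # word-word semantic edge; accumulate co-occurrence counts
            prev = graph.get_edge_data(word, other, {"weight": 0})["weight"]
            graph.add_edge(word, other, weight=prev + 1)
    for word, tags in emotion_tags.items():
        for tag in tags:
            # word-tag edge, re-scaled by a constant
            graph.add_edge(word, "#" + tag, weight=tag_weight_scale)
    return graph

# Toy example mirroring the "mad" illustration above.
g = build_se_graph({"mad": ["resent", "malice", "rage", "temper"]},
                   {"mad": ["anger", "disgust"]})
```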
"In this way, we can construct a weighted SE-graph G.", "However, each emotion tag (node) represents concept-level knowledge, which tends to have many connected nodes.", "As a result, emotional knowledge may dominate the input text.", "To alleviate this issue, we re-scale the weights of the word-tag edges by a constant.", "The SE-graph can capture the semantic connections between words and emotion tags through multi-hop connections.", "It can help the stance detector to differentiate the important and appropriate words for knowledge transfer.", "Intuitively, nodes with high degrees can be considered as words that contain common background knowledge, which often act as a bridge between different targets.", "We learn the embedding of each node in the SE-graph with a graph convolutional network (GCN), aiming to fully exploit the multi-hop semantic and emotional connections between the nodes.", "Due to the semantic locality between words, we extract a k-hop subgraph from the SE-graph for each word.", "The subgraph is then fed into a GCN to learn the graph representation.", "Here, we adopt GCN because it has been proved to be effective and efficient for learning graph embeddings (Zhang et al., 2019a).", "Formally, let E ∈ R^{v×d} be a matrix containing all v nodes in the SE-graph with their features, where d is the size of the node embedding.", "For each node, we extract a k-hop subgraph G_s from the whole graph, which has a degree matrix D and an adjacency matrix A.", "The normalized symmetric adjacency matrix of subgraph G_s can be calculated as: Â = D^{-1/2} A D^{-1/2}.", "By feeding the subgraph G_s into a two-layer GCN, the corresponding subgraph representation L ∈ R^{n×c} with n nodes can be calculated as: L = σ(Â σ(Â E W_0) W_1) (1), where σ represents a non-linear function, and W_0 ∈ R^{d×v} and W_1 ∈ R^{d×c} are trainable parameters.", "To obtain a more compact graph representation, we further feed L into a fully-connected layer, producing a final graph representation M ∈ R^d.", "Preliminary (Vanilla BiLSTM): Generally, two independent BiLSTM networks (denoted as BiLSTM_x and BiLSTM_p) are employed to encode the input sentence x and the target p, respectively.", "A BiLSTM can capture the left and right context of each word in the input.", "In particular, for the t-th word w_t in the input sequence of the target, BiLSTM_p computes its forward hidden state →h_t^p and backward hidden state ←h_t^p.", "We concatenate both the forward and backward hidden states to form the final hidden state h_t^p = [→h_t^p ; ←h_t^p] for word w_t at the t-th position of the input target.", "After learning the contextual representation of the target, we learn a target-aware sentence representation H^s by initializing BiLSTM_x with the final hidden state of BiLSTM_p.", "The background knowledge contained in external lexicons is the collection of facts that individuals are expected to know, and it plays a crucial role in reading comprehension.", "We propose a knowledge-enhanced BiLSTM (KE-BiLSTM) model, which incorporates the external background knowledge contained in the semantic-emotion knowledge graph into the BiLSTMs via a novel knowledge-aware memory unit (KAMU).", "KE-BiLSTM helps to identify discriminative semantic and emotion knowledge from the input text.", "It is motivated by two considerations: The external commonsense knowledge provides rich information about entities and the relations between them, and highlights the features that are essential for stance detection.", "For example, with the external semantic lexicon, we can correctly understand the unusual word zugzwang through the semantically related words chess, strategy, and forced contained in the semantic lexicon.",
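Equation (1) is straightforward to express with plain matrix operations. The sketch below uses NumPy and ReLU for the non-linearity σ (the text does not name the activation, so treat that as an assumption); it also omits the self-loops that GCN implementations commonly add to A, since the paper does not mention them.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetric normalization A_hat = D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def two_layer_gcn(A, E, W0, W1):
    """Subgraph representation L = sigma(A_hat sigma(A_hat E W0) W1), Eq. (1)."""
    A_hat = normalized_adjacency(A)
    relu = lambda x: np.maximum(x, 0.0)  # assumed choice of sigma
    return relu(A_hat @ relu(A_hat @ E @ W0) @ W1)

# Toy subgraph with n = 4 nodes, d = 8 input features, c = 5 output dims.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
L = two_layer_gcn(A, rng.normal(size=(4, 8)),
                  rng.normal(size=(8, 6)), rng.normal(size=(6, 5)))
print(L.shape)  # (4, 5)
```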
"Hence, we devise KE-BiLSTM to effectively leverage the graph embedding of the SE-graph and fully explore the external knowledge at both the word level and the concept level.", "There exist dynamic interaction patterns and complementarity between the context and the external knowledge within the input sequence for stance detection.", "Instead of leveraging only the input context in each BiLSTM unit, we take external commonsense knowledge into consideration by adding a novel knowledge-aware memory unit to the BiLSTM, which dynamically controls the amount of external knowledge at each encoding step and thus balances the contextual and knowledge information for stance detection.", "As illustrated in Figure 2, KE-BiLSTM consists of two primary parts: a BiLSTM network (depicted in blue) and a knowledge-aware memory unit (depicted in green).", "Figure 2: The structure of the knowledge-enhanced BiLSTM unit.", "Similar to the standard BiLSTM network, KE-BiLSTM also computes forward and backward hidden sequences, which are then combined to form the output representation.", "Due to limited space, we solely introduce the implementation details of the forward layer.", "The forward and backward knowledge-enhanced LSTMs can be computed in a similar way.", "In KE-BiLSTM, the BiLSTM network learns the sequential features of the input text.", "Formally, in the forward layer of the BiLSTM, the input gate i_t, forget gate f_t, output gate g_t, and memory cell C_t are updated as: i_t = σ(W_i w_t + U_i h_{t-1} + V_i C_{t-1}) (2), f_t = σ(W_f w_t + U_f h_{t-1} + V_f C_{t-1}) (3), g_t = tanh(W_g w_t + U_g h_{t-1} + V_g C_{t-1}) (4), C_t = f_t ⊙ C_{t-1} + i_t ⊙ g_t (5), where σ represents the sigmoid function.", "W, U, and V are trainable parameters.", "w_t is the t-th word of the input text.", "h_{t-1} is the hidden state for the (t-1)-th word.", "We propose a knowledge-aware memory component to incorporate the external knowledge into the BiLSTM.", "For each word w_t, we extract the corresponding entity from the SE-graph by performing n-gram matching and acquire a subgraph representation M_t^0.", "A new knowledge memory M_t at time t is computed with a linear interpolation between M_t^0 and its candidate activation M̃_t: M_t = z_t ⊙ M_t^0 + (1 - z_t) ⊙ M̃_t (6), where z_t ∈ [0, 1] is utilized to balance the importance of M_t^0 and M̃_t, and can be computed by: z_t = σ(W_z w_t + U_z M_t^0) (7), where W_z and U_z are parameters to be learned.", "The candidate activation M̃_t is updated as: M̃_t = tanh(W w_t + U (r_t ⊙ M_t^0)) (8), where W and U are parameters to be learned.", "r_t is the reset gate, which aims to combine the knowledge in M_t^0 and w_t, and is defined as: r_t = σ(W_r w_t + U_r M_t^0) (9), where W_r and U_r are projection parameters.", "Finally, the linear transformations of w_t, h_{t-1}, M_t and C_t are combined to calculate the output o_t of the forward KE-BiLSTM layer: o_t = σ(W_o w_t + U_o h_{t-1} + V_o M_t + Q_o C_t) (10), h_t = o_t ⊙ tanh(C_t + M_t) (11), where o_t and h_t denote the output gate and the hidden state of the forward network of the KE-BiLSTM unit at time step t.", "The hidden state ←h_t of the backward network at time step t can be computed in the same way.", "We can get the overall hidden state h_t = [→h_t ; ←h_t] for word w_t.", "Finally, we can use KE-BiLSTM to learn the knowledge-enhanced sentence representation H^s = {h_1^s, ..., h_n^s} and the knowledge-enhanced target representation H^p = {h_1^p, ..., h_m^p}, where n and m denote the lengths of the sentence x and the given target p, respectively.",
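The knowledge-aware memory update in Eqs. (6)-(9) behaves like a GRU-style gate over the retrieved subgraph representation. A minimal NumPy sketch of a single step follows; the parameter names and dict packaging are illustrative, and the per-step training machinery is omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def kamu_step(w_t, M0_t, params):
    """One step of the knowledge-aware memory unit, Eqs. (6)-(9).

    w_t: embedding of the t-th input word, shape (d,).
    M0_t: subgraph (SE-graph) representation retrieved for w_t, shape (d,).
    params: dict of trainable matrices Wz, Uz, Wr, Ur, W, U, each (d, d).
    Returns the updated knowledge memory M_t.
    """
    p = params
    z_t = sigmoid(p["Wz"] @ w_t + p["Uz"] @ M0_t)        # update gate, Eq. (7)
    r_t = sigmoid(p["Wr"] @ w_t + p["Ur"] @ M0_t)        # reset gate, Eq. (9)
    cand = np.tanh(p["W"] @ w_t + p["U"] @ (r_t * M0_t)) # candidate, Eq. (8)
    return z_t * M0_t + (1.0 - z_t) * cand               # interpolation, Eq. (6)
```

The resulting M_t then enters the output gate and hidden state of the cell exactly as written in Eqs. (10)-(11).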
"We employ an attention mechanism to characterize the effect of the target, enforcing our SEKT model to pay more attention to the important words of the context.", "In particular, we use the target representation H^p as the attention source to calculate the attention weight α_t for the t-th word: α_t = softmax(h̄_p^T h_t^x) (12), where h̄_p denotes the average vector of the target representation H^p.", "We learn the attentive sentence representation emb by aggregating the embeddings of the hidden states H^s with the attention vector α: emb = Σ_{t=1}^{n} α_t h_t^x (13).", "Table 1: The statistics of our experimental data extended from SemEval-2016 Task 6 (Target | Favor/Against/None | Avg. length): DT 148/299/260, 17.1; HC 163/565/256, 17.0; FM 268/511/170, 18.4; LA 167/544/222, 19.0; TP 333/452/460, 33.3.", "Finally, the sentence representation emb is fed into a fully-connected layer followed by a softmax layer to compute a stance probability distribution: ŷ = softmax(W_y emb + b_y) (14), where W_y is a projection parameter and b_y is a bias term.", "ŷ denotes the predicted stance probability for the input sentence x and target p.", "Given an annotated training set X_s, we use the cross-entropy between the predicted stance ŷ and the ground-truth stance y as our loss function for stance detection: L = -Σ_{i=1}^{N} Σ_{j=1}^{C} y_{ij} log ŷ_{ij} (15), where N represents the number of instances in the training set.", "C denotes the number of possible stance categories.", "y_i represents the one-hot ground-truth label for the i-th instance.", "ŷ_i is the predicted stance probability vector.", "This model can be optimized with the standard gradient descent algorithm.", "We extend the SemEval-2016 Task 6 dataset (denoted as SemEval-2016) to evaluate the performance of our SEKT model for cross-target stance detection.", "SemEval-2016 is the first stance detection dataset collected from Twitter, and it contains 4,870 stance-bearing tweets towards different targets.", "Each tweet is classified as favor, against, or none.", "Following previous work (Wei and Mao, 2019), we use the tweets from four targets: Donald Trump (DT), Hillary Clinton (HC), Legalization of Abortion (LA), and Feminist Movement (FM).", "These targets are commonly used to evaluate cross-target stance classification.", "In addition to the four targets in SemEval-2016, we introduce an additional Trade Policy (TP) target as the fifth target, which is an incredibly hot topic nowadays.", "Specifically, 1,245 tweets related to TP are collected and manually labeled as favor, against, or none.", "The statistics of this expanded dataset are reported in Table 1.", "Concerning the targets, the expanded dataset can be divided into two groups: Women's Rights (FM, LA) and American Politics (HC, DT, TP).", "Thus, we constructed 8 cross-target stance detection tasks (DT→HC, HC→DT, FM→LA, LA→FM, TP→HC, HC→TP, TP→DT, DT→TP).", "Here, the left side of the arrow corresponds to the source target and the right side of the arrow denotes the destination target.", "Two evaluation metrics are adopted to verify our SEKT model.", "First, following Wei and Mao (2019), we use the average F1-score as one evaluation metric (denoted as F_avg).", "Second, since the targets in the dataset are imbalanced, we also compute both the micro-averaged F1 (dominated by the large classes) and the macro-averaged F1 (dominated by the small classes), and treat their average as another evaluation metric: F1_m = (F1_micro + F1_macro) / 2.",
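Eqs. (12)-(14) combine into a few lines. The sketch below is a non-trainable NumPy rendering for clarity; in the actual model these operations sit on top of the KE-BiLSTM outputs and would be trained end-to-end with the cross-entropy loss of Eq. (15).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def stance_distribution(H_s, H_p, W_y, b_y):
    """Target-aware attention (Eqs. 12-13) and stance classifier (Eq. 14).

    H_s: (n, d) knowledge-enhanced sentence representation.
    H_p: (m, d) knowledge-enhanced target representation.
    W_y: (C, d) projection; b_y: (C,) bias, for C stance classes.
    """
    h_p = H_p.mean(axis=0)            # average target vector
    alpha = softmax(H_s @ h_p)        # attention weight per word, Eq. (12)
    emb = alpha @ H_s                 # attentive sentence embedding, Eq. (13)
    return softmax(W_y @ emb + b_y)   # stance probability distribution, Eq. (14)
```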
"In the experiments, we use the 300-dimensional word2vec embeddings pre-trained on the English Google News corpus to initialize the word embeddings.", "Following Augenstein et al. (2016), the node features are pre-trained on unlabelled corpora.", "The hidden size of the LSTM is set to 100.", "Dropout (dropout rate = 0.2) is used to avoid overfitting.", "The Adam optimizer is applied to train the model, with a mini-batch size of 8 and a learning rate of 0.001 (collected in the sketch after this block).", "We evaluate and compare our model with several strong baselines, which are described as follows:", "BiLSTM: This method uses a BiLSTM to encode the sentence and target separately.", "The hidden states from both directions are combined to infer the stance label.", "BiCond (Augenstein et al., 2016): This method is similar to BiLSTM but uses a conditional encoding method that learns a target-dependent sentence representation for stance detection.", "CrossNet (Xu et al., 2018): This model is a variant of BiCond, which leverages a self-attention layer to capture the important words in the input text.", "Table 2: Performance comparison of cross-target stance detection in terms of F1_avg on 8 tasks (Source→Target: FM→LA, LA→FM, HC→DT, DT→HC, HC→TP, TP→HC, DT→TP, TP→DT): BiLSTM 0.448, 0.412, 0.298, 0.358, 0.291, 0.395, 0.311, 0.341; BiCond 0.450, 0.416, 0.297, 0.358, 0.292, 0.402, 0.317, 0.347; CrossNet 0.454, 0.433, 0.431, 0.362, 0.298, 0.417, 0.314, 0.374; VTN 0.473, 0.478, 0.479, 0.364, -, -, -, -; BERT 0.479, 0.339, 0.436, 0.365, 0.261, 0.231, 0.241, 0.456; CrossNet-C 0.449, 0.439, 0.442, 0.369, 0.297, 0.413, 0.324, 0.355; CrossNet-CF 0.467, 0.457, 0.457, 0.396, 0.307, 0.411, 0.377, 0.398; CrossNet-CA 0.473, 0.475, 0.455, 0.407, 0.301, 0.442, 0.409, 0.396; TextCNN-E 0.469, 0.458, 0.380, 0.404, 0.309, 0.450, 0.356, 0.396; SEKT (Ours) 0.536, 0.513, 0.477, 0.420, 0.335, 0.460, 0.444, 0.395.", "VTN (Wei and Mao, 2019): The model utilizes the latent topics shared between the two targets as transferable knowledge for cross-target adaptation.", "BERT (Devlin et al., 2019): The method fine-tunes a pre-trained BERT model to perform cross-target detection.", "Specifically, we convert the given context and target to the [CLS] + target + [SEP] + context structure for the source and target domains, respectively.", "CrossNet-C: Similar to Margatina et al. (2019), we extend the original CrossNet model by incorporating external knowledge.", "Here, three variants are considered, where CrossNet-C adopts attentional concatenation, CrossNet-CF uses a feature-based gating mechanism, and CrossNet-CA adopts an attentional affine transformation.", "We also adapt TextCNN with external knowledge to the cross-target setting, denoted as TextCNN-E.", "Specifically, each word is represented as a 3D tensor by concatenating the embeddings of its k semantically/emotionally-related words.", "We report the experimental results in terms of F1_avg and F1_m in Table 2 and Table 3, respectively.", "From the results, we can observe that BiLSTM has the worst performance because it neither exploits the target information nor considers knowledge transfer for cross-target stance detection.", "BiCond performs slightly better than BiLSTM, since it explicitly encodes the target information.", "As an extension of BiCond that introduces the attention mechanism, CrossNet shows a marginal improvement (e.g., 13.4% on HC→DT for F1_avg, 3.9% on LA→FM for F1_m).", "This may be because the attention mechanism can learn an informative stance-aware sentence representation.", "However, this knowledge transfer scheme is based on word-level information, which often suffers from the data scarcity problem.",
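For reference, the experimental setup described above amounts to the following configuration; this is a sketch collecting only the values stated in the text, with the embedding identifier being an assumed gensim-style name.

```python
# Hyperparameters from the experimental setup above, gathered in one place.
CONFIG = {
    "word_embeddings": "word2vec-google-news-300",  # 300-dim, Google News corpus
    "lstm_hidden_size": 100,
    "dropout": 0.2,
    "optimizer": "Adam",
    "batch_size": 8,
    "learning_rate": 1e-3,
}
```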
"VTN, which is a concept-level knowledge transfer model, achieves the best performance among all the baseline methods.", "It is noteworthy that the performance of BERT is not stable.", "Promising results are achieved on FM→LA and HC→DT, but it performs unsatisfactorily on the other tasks.", "The reason may be that BERT does not explicitly employ any knowledge transfer strategy.", "Table 4: Ablation test results in terms of F1_avg and F1_m (in parentheses) when discarding the SE-graph (w/o SE) and the knowledge-aware memory unit (w/o KAMU) (task: SEKT, w/o SE, w/o KAMU): FM→LA 0.536 (0.523), 0.461 (0.492), 0.471 (0.499); LA→FM 0.513 (0.510), 0.443 (0.455), 0.475 (0.469); HC→DT 0.477 (0.463), 0.449 (0.439), 0.449 (0.450); DT→HC 0.420 (0.432), 0.400 (0.404), 0.411 (0.407); HC→TP 0.335 (0.279), 0.314 (0.278), 0.321 (0.280); TP→HC 0.460 (0.489), 0.448 (0.466), 0.453 (0.471); DT→TP 0.444 (0.391), 0.407 (0.371), 0.411 (0.376); TP→DT 0.395 (0.435), 0.394 (0.420), 0.395 (0.431).", "The proposed SEKT method yields better performance than all the baselines on most of the tasks.", "For example, our method improves over the best competitors by 5.7% on FM→LA, 3.5% on LA→FM, and 5.5% on DT→HC in terms of F1_avg.", "The advantage of SEKT comes from its two characteristics: (i) we develop a GCN-based model to fully exploit the external knowledge from both semantic and emotion lexicons; (ii) a knowledge-aware memory unit is proposed to better fuse the external knowledge.", "We also compare our SEKT model with the competitors that also integrate the semantic-emotion knowledge graph with a GCN, i.e., CrossNet-C, CrossNet-CF, CrossNet-CA, and TextCNN-E.", "The results are demonstrated in Table 2 and Table 3.", "CrossNet-C produces the worst performance in general.", "The reason is that concatenating the external knowledge and the context representation could make the external knowledge get lost in the sentence encoding process.", "CrossNet-CF and CrossNet-CA perform better than CrossNet-C since they incorporate the external knowledge into the hidden layers of the BiLSTM.", "As expected, SEKT achieves the best performance, which verifies the effectiveness of the KAMU model.", "To investigate the impact of each part of our SEKT model, we perform an ablation test by discarding the SE-graph knowledge (denoted as w/o SE) and the knowledge-aware memory unit (denoted as w/o KAMU), respectively.", "Specifically, for the w/o SE model, the external knowledge is expressed by a weighted sum of the embeddings of four semantically/emotionally-related words.", "For the w/o KAMU model, we replace the KE-BiLSTM structure with a standard BiLSTM layer, and the external knowledge is combined in the hidden layer.", "The ablation results are summarized in Table 4.",
"From the results, we observe that both the SE-graph and KAMU contribute substantial improvements to our SEKT method.", "The external semantic and emotional knowledge can help SEKT to capture multi-hop semantic correlations between words or emotion tags.", "On the other hand, KAMU helps to fully incorporate the external knowledge into the BiLSTM network, which makes the representation learning model more general to new targets.", "Number of Hops: Based on our empirical observation, capturing the multi-hop semantic correlation is one of the most important factors for the overall performance of SEKT.", "Thus, we also investigate the impact of the number of hops used in the GCN.", "In particular, we evaluate the performance of SEKT by varying the number of hops from 1 to 4 with a step size of 1.", "From Table 5, we can observe that the best results are achieved when the number of hops is 2 or 3.", "This is because a GCN with a moderate hop number can capture semantic correlations between words while avoiding the introduction of unnecessary noise.", "To better understand the limitations of SEKT, we additionally carry out an analysis of the errors made by SEKT.", "Specifically, we randomly select 100 instances that are incorrectly predicted by SEKT from the expanded SemEval-2016 dataset.", "We revealed several reasons for the classification errors, which can be divided into the following categories.", "First, SEKT fails to classify some sentences that contain latent opinions or require deep comprehension.", "For example, for the sentence I guess NBC does not like to hear the truth. [favor] with the target Donald Trump, SEKT tends to predict an incorrect against stance.", "This is because the SEKT model cannot learn the implicit relationship between NBC (the National Broadcasting Company) and Trump, which is not acquirable from the semantic-emotion lexicons.", "The second error category is caused by special hashtags with implicit meanings.", "For example, SEKT cannot correctly predict the stance for the sentence The gift that keeps on giving. #makeitstop #SemST [against].",
"This may be because the information in the sentence is not sufficient, such that SEKT cannot capture the sequential patterns of the stance-related words.", "It suggests that a certain data augmentation strategy needs to be devised in the future so as to capture the sequential patterns between stance-related words in short texts.", "In this paper, we proposed a semantic-emotion knowledge transferring (SEKT) model for cross-target stance classification, which uses external knowledge from semantic and emotion lexicons as commonsense knowledge to bridge the gap across different targets.", "Specifically, we first built an SE-graph from semantic and emotion lexicons, which leverages external knowledge at both the word level and the concept level.", "Second, a GCN was employed to learn the graph representation that captures multi-hop semantic connections between words or emotion tags.", "Third, we extended the standard BiLSTM classifier to fully integrate the external knowledge by adding a novel knowledge-aware memory unit to the BiLSTM cell.", "The experimental results demonstrated that the SEKT model significantly outperforms the state-of-the-art methods for cross-target stance detection.", "This research was supported in part by the National Key R&D Program of China (2018YFB2101100, 2018YFB2101101) and NSFC under Grant Nos. U1836107, 61972111, 61572158 and 61602132.", "Min Yang was partially supported by the National Natural Science Foundation of China (No. 61906185), the Guangdong Basic and Applied Basic Research Foundation (No. 2019A1515011705 and No. 2018A030313943), and the project AWS13C008." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "method", "objective", "objective", "result", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "other", "other", "other" ]
[ "We study the problem of building entity tagging systems by using a few rules as weak supervision.", "Previous methods mostly focus on disambiguating entity types based on contexts and expert-provided rules, while assuming entity spans are given.", "In this work, we propose a novel method TALLOR that bootstraps high-quality logical rules to train a neural tagger in a fully automated manner.", "Specifically, we introduce compound rules that are composed from simple rules to increase the precision of boundary detection and generate more diverse pseudo labels.", "We further design a dynamic label selection strategy to ensure pseudo label quality and therefore avoid overfitting the neural tagger.", "Experiments on three datasets demonstrate that our method outperforms other weakly supervised methods and even rivals a state-of-the-art distantly supervised tagger with a lexicon of over 2,000 terms when starting from only 20 simple rules.", "Our method can serve as a tool for rapidly building taggers in emerging domains and tasks.", "Case studies show that learned rules can potentially explain the predicted entities.", "Entity tagging systems that follow supervised training, while accurate, often require a large amount of manual, domain-specific labels, making them difficult to apply to emerging domains and tasks.", "To reduce manual effort, previous works resort to manual lexicons (Shang et al., 2018b; Peng et al., 2019) or heuristic rules provided by domain experts (Fries et al., 2017; Safranchik et al., 2020; Lison et al., 2020b) as weak supervision.", "For example, LinkedHMM (Safranchik et al., 2020) can achieve performance close to supervised models using 186 heuristic rules in addition to a lexicon of over two million terms.", "However, it is challenging Work done during an internship at Bosch Research.", "for experts to write complete and accurate rules or lexicons in emerging domains, which requires both a significant amount of manual effort and a deep understanding of the target data.", "How to build accurate entity tagging systems using less manual effort is still an open problem.", "In this work, we explore methods that can automatically learn new rules from unlabeled data and a small set of seed rules (e.g. 20 rules).", "Such methods are desirable in real-world applications not only because they can be rapidly deployed to new domains or customized entity types, but also because the learned rules are often effective, interpretable, and simple for non-experts to debug incorrect predictions.", "As explained in Figure 1, new rules can be learned from seed rules.", "Specifically, we propose a novel iterative learning method TALLOR that can learn accurate rules to train a neural tagger in an automated manner, with goal to address two key issues during learning process: (1) how to detect entity boundaries and predict their types simultaneously with rules, (2) how to generate accurate and diverse pseudo labels from rules.", "With such a small set of seed rules as supervision, previous works (Niu et al., 2003; Huang and Riloff, 2010; Gupta and Manning, 2014) only focus on disambiguating entity types assuming entity spans are given or just syntactic chunks (e.g., noun phrases).", "However, we find that syntactic chunks often do not align well with target entity spans.", "For example, given a sentence from CoNLL2003: Germany's representative to the European Union's veterinary committee... 
"We used the noun phrases extracted by spaCy as predicted entity boundaries and compared them with the ground-truth entity boundaries, which are extracted based on the results of syntactic parsing.", "This setting of using noun phrases as entity candidates is similar to previous work (Niu et al., 2003; Huang and Riloff, 2010).", "As shown in Table 1, a majority of target entities are missed if we use noun phrases as entity candidates, and these entities cannot be recognized correctly later.", "1 Noun phrases are extracted using spaCy noun chunks.", "To address both entity boundary detection and type classification simultaneously, we first define five types of simple logical rules considering the lexical, local context, and syntax information of entities.", "We notice that simple logical rules are often inaccurate when detecting entity boundaries.", "Therefore, we propose to learn compound logical rules, which are composed from multiple simple rules and logical connectives (e.g., and).", "For example, given the sentence John lives in Dallas where he was born, the simple rule lives in, which is a preceding-context clue, will match multiple token spans such as Dallas, Dallas where, Dallas where he, etc.", "In contrast, compound logical rules can both detect entity boundaries and classify their types accurately.", "For example, using both the preceding context and a part-of-speech (POS) tag rule (e.g., lives in and POS is a proper noun) can correctly identify the Location entity Dallas.", "Though we aim to learn accurate rules, automatically acquired rules can be noisy.", "To ensure the quality of generated pseudo labels, we design a dynamic label selection strategy to select highly accurate labels so that the neural tagger can learn new entities instead of overfitting to the seed rules.", "Specifically, we maintain a high-precision label set during our learning process.", "For each learning iteration, we first automatically estimate a filtering threshold based on the high-precision set.", "Then, we filter out low-confidence pseudo labels by considering both their maximum and average distances to the high-precision set.", "Highly confident labels are added into the high-precision set for the next iteration of learning.", "Our dynamic selection strategy enables our framework to maintain the precision of recognized entities while increasing recall during the learning process, as shown in our experiments.", "We evaluate our method on three datasets.", "Experimental results show that TALLOR outperforms existing weakly supervised methods and can increase the average F1 score by 60% across the three datasets over methods using only seed rules.", "Further analysis shows that TALLOR achieves performance similar to a state-of-the-art distantly supervised method while using about 1% of the human effort. 2", "We also conduct a user study concerning the explainability of the learned logical rules.", "In our study, annotators agree that 79% (on average over three annotators) of the matched logical rules can be used to explain why a span is predicted as a target entity.", "In summary, our main contributions are: We define five types of logical rules and introduce compound logical rules that can accurately detect entity boundaries and classify their types.", "Automatically learned rules can significantly reduce manual effort and provide explanations for entity predictions.",
"To effectively learn rules, we propose a novel weakly supervised method with a dynamic label selection strategy that can ensure the quality of pseudo labels.", "We conduct experiments on both general and domain-specific datasets and demonstrate the effectiveness of our method.", "We study named entity tagging under a weakly supervised setting, and propose TALLOR (Tagging with Learnable Logical Rules) to build a tagger with only a small set of rules.", "Compared with previous work, our framework requires less human effort via the use of learned rules; we also show that these rules can be used to explain tagging results.", "2 In experiments, our method used 20 rules; the other system used a manually constructed lexicon of over 2,000 terms.", "Instead of treating tagging as a sequence labeling task, we formulate tagging as a span labeling task, in which named entities are modeled as spans over one or more tokens.", "With this setting, logical rules can easily be used for labeling entities.", "Overview: Figure 2 shows the flow of our iterative learning framework, which consists of the following components.", "First, we generate all entity candidates and rule candidates from the unlabeled data.", "Then, for each iteration, we apply logical rules to the unlabeled data and select a set of high-quality weak training examples.", "Next, we train a neural tagger with the selected training examples and predict the labels of unlabeled data using the trained model.", "Finally, we select new accurate logical rules from the candidate rules using the predictions.", "The newly learned rules will further be used to obtain weak training labels for the next iteration.", "In our work, a logical rule is defined in the form of if p then q (or p → q). 3", "For entity tagging, q is one of the target entity classes, and p can be any matching logic.", "For example: if a span's preceding tokens are lives in, then it is a Location.", "We design the following five types of simple logical rules to consider the lexical, local context, and syntax information of an entity candidate.", "Simple Logical Rules: A simple logical rule is defined as a logical rule that contains a single condition predicate.", "We design the following five predicates to represent common logical conditions.", "3 Heuristic rules and labeling rules can also be converted to logical rules, so they can be used interchangeably.", "Given a candidate entity, (1) TokenString matches its lexical string; (2) PreNgram matches its preceding context tokens; (3) PostNgram matches its succeeding context tokens; (4) POSTag matches its part-of-speech tags; (5) DependencyRel matches the dependency relations of its head word.", "Consider the example sentence He moved in 1916 to the United States, with POS tags PRON VERB ADP NUM ADP DET PROPN PROPN and a pobj dependency from the preposition to to the span United States.", "Given the candidate entity United States in this example, we can extract the following example logical rules for recognizing Locations: 4 TokenString == united state → Location; PreNgram == move to the → Location; PostNgram == in 1916 → Location; POSTag == PROPN PROPN → Location; DependencyRel == to (via pobj) → Location.", "Details of each predicate are included in Appendix A.1.", "4 All words in rules are lower-cased and lemmatized.", "Compound Logical Rules: A compound logical rule is formed from multiple condition predicates and logical connectives, including and (∧), or (∨), and negation (¬).", "In this work, we focus on learning compound logical rules connected with conjunctions (∧) to recognize entities precisely, because simple logical rules are often insufficient to identify entity boundaries.",
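To make the rule formats concrete, here is a minimal sketch of the five simple predicates and their conjunction into compound rules; the span dictionary layout and helper names are illustrative, not from the TALLOR codebase.

```python
from dataclasses import dataclass

@dataclass
class SimpleRule:
    """A single-predicate logical rule: if predicate(span) then label."""
    kind: str       # TokenString | PreNgram | PostNgram | POSTag | DependencyRel
    pattern: tuple  # e.g. ("united", "state") or ("PROPN", "PROPN")
    label: str      # target entity class, e.g. "Location"

def matches(rule, span):
    """span is a dict with keys: tokens, pre, post, pos, dep (illustrative)."""
    key = {"TokenString": "tokens", "PreNgram": "pre", "PostNgram": "post",
           "POSTag": "pos", "DependencyRel": "dep"}[rule.kind]
    return tuple(span[key]) == rule.pattern

def compound_matches(rules, span):
    """A compound rule is a conjunction of simple rules sharing one label."""
    return all(matches(r, span) for r in rules)

# e.g. PreNgram == "move to the"  AND  POSTag == "PROPN PROPN" -> Location
compound = [SimpleRule("PreNgram", ("move", "to", "the"), "Location"),
            SimpleRule("POSTag", ("PROPN", "PROPN"), "Location")]
```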
"In the above example, the rule PreNgram == move to the can match multiple candidates such as United, United States, and United States in, many of which are inaccurate.", "However, with a compound rule, e.g., PreNgram == move to the ∧ POSTag == PROPN PROPN, we can correctly recognize that United States is a Location.", "We enumerate and extract all possible logical rules from the unlabeled data based on our pre-defined rule types before the training process.", "At each iteration, we apply both seed and learned logical rules to the unlabeled entity candidates to obtain a set of weakly labeled instances.", "In case an entity candidate is matched by multiple (potentially conflicting) rules, we use the majority vote as the final weak label.", "Entity Candidates: In this work, we treat tagging as a span labeling task, as described earlier.", "Before our learning process, we enumerate all token spans up to a maximum length from the unlabeled data as entity candidates.", "We also notice that common phrases (e.g., United States) are rarely split into different entities (e.g., United, States).", "Therefore, we generate a list of common phrases using the unsupervised AutoPhrase method (Shang et al., 2018a) and merge two continuous spans together into a single entity candidate if they can form a common phrase.", "After applying the learned rules to unlabeled data, some of the weakly generated labels can be incorrect, which will lead to poor performance of our neural tagger in the next step.", "To filter out noisy labels, we propose to maintain a high-precision entity set that keeps the accurately labeled training examples from each iteration.", "Inspired by Zhang et al. (2020), we design a method to select high-quality labels from the weak labels generated by seed logical rules into the high-precision set.", "Specifically, given an entity category i, its corresponding high-precision set H_i, and a weakly labeled instance e_q, we first compute a confidence score of e_q belonging to category i by considering both its maximum pairwise similarity to the high-precision set H_i (called the local score) and its average similarity to H_i (called the global score).", "Then, the weakly labeled instance e_q will be selected into the high-precision set if its confidence score is larger than a threshold that is also estimated based on the high-precision set.", "Instance Embedding: We compute the embedding of an entity instance as the mean of the embeddings of its tokens.", "A token's embedding is computed as the average of the first three layers' outputs from a pre-trained language model. 5", "Local Score: Given a weakly labeled instance e_q and an example e_i from the high-precision set, we first compute their similarity as the cosine score between their embeddings.", "Then, we compute the local confidence score of e_q belonging to category i as the maximum of its similarities to all examples in the high-precision set.",
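A minimal NumPy sketch of the instance embedding and local score just defined; e_q and the high-precision set are assumed to already be embedding vectors and a list of vectors, respectively.

```python
import numpy as np

def instance_embedding(token_vecs):
    """Entity embedding = mean of its token embeddings (each token vector
    is itself the average of the first three layers of a pre-trained LM)."""
    return np.mean(token_vecs, axis=0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def local_score(x_q, H_i):
    """Maximum cosine similarity between candidate embedding x_q and the
    high-precision set H_i (a list of instance embeddings)."""
    return max(cosine(x_q, x_e) for x_e in H_i)
```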
"Global Score: The local score is estimated based on a single instance in the high-precision set.", "Though it can help explore new entities, it can also be inaccurate in some cases.", "Therefore, we propose to compute a more reliable score to estimate the accuracy of an instance e_q belonging to a category i, which is called the global score.", "Specifically, we first sample a small set E_s from the high-precision set H_i and then compute the prototypical embedding x_{E_s} of E_s as the average of the embeddings of all instances in E_s.", "In our work, we sample N times and compute the global score as: score_i^{glb} = (1/N) Σ_{1≤j≤N} cos(x_{E_s}^j, x_{e_q}) (1).", "To balance exploration ability and reliability, we compute the final confidence score of a weakly labeled instance belonging to a category as the geometric mean of its local and global scores.", "Dynamic Threshold Estimation: We hypothesize that different categories of entities may need different thresholds for selecting high-quality weak labels.", "We may also need to use different thresholds at different iterations to dynamically balance exploration and reliability.", "For example, we may expect our learning process to be reliable at earlier iterations and exploratory at later stages.", "Motivated by this hypothesis, we propose to use a dynamic threshold to select high-quality weak labels.", "Specifically, we hold out one entity instance from the high-precision set and compute its confidence score with respect to the rest of the examples in the high-precision set.", "We randomly repeat this T times and use the minimum value, scaled by a temperature, as the threshold.", "For category i, it is calculated as: threshold = λ · min_{k ≤ T, e_k ∈ H_i} score_i(e_k) (2), where e_k is the held-out entity instance and λ ∈ [0, 1] is a temperature to control the final threshold.", "5 We used different pre-trained language models for different domains.", "Details are in Section 3.1.", "Following Jiang et al. (2020), we treat tagging as a span labeling problem.", "The key idea is to represent each span as a fixed-length embedding and make predictions based on this embedding.", "Briefly, given a span and its corresponding sentence, we first initialize all tokens in the sentence using a pre-trained language model, then apply a Bi-LSTM and a self-attention layer to obtain the contextual embedding of the sentence.", "Finally, we compute the span embedding by concatenating two components: a content representation calculated as the weighted average across all token embeddings in the span, and a boundary representation that concatenates the embeddings at the start and end positions of the span.", "Then, we predict the label of a span using a multilayer perceptron (MLP).", "For our detailed formulation, please refer to Appendix A.2.", "Every iteration, we first predict the labels of all text spans using our neural tagging model.", "Then, we rank and select the 70% 6 most confident spans per category, based on their prediction probabilities from the tagging model, as weak labels for computing rule scores.", "We select new rules from the rule candidates based on their confidence scores.", "We adopt the RlogF method (Thelen and Riloff, 2002) to compute the confidence score of a rule r: F(r) = (F_i / N_i) · log_2(F_i) (3), where F_i is the number of spans predicted with category label i and matched by rule r, and N_i is the total number of spans matched by rule r.", "Intuitively, this method considers both the accuracy and coverage of rules, because F_i / N_i is the accuracy of the rule and log_2(F_i) represents the rule's ability to cover more spans.", "In our experiments, we select the top K rules for each entity class per iteration.", "We increase K by a fixed increment per iteration to be more exploratory in later iterations.", "We also use a threshold on rule accuracy (i.e., F_i / N_i) to filter out noisy rules.",
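Eqs. (1)-(3) can be sketched as follows, reusing local_score and cosine from the sketch above. The subset size, number of samples N, T, and the temperature value are illustrative defaults, not the paper's tuned settings.

```python
import math
import random
import numpy as np

def global_score(x_q, H_i, sample_size=5, n_samples=10, seed=0):
    """Eq. (1): average cosine similarity between x_q and the prototypical
    embeddings of N sampled subsets of the high-precision set H_i."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_samples):
        subset = rng.sample(H_i, min(sample_size, len(H_i)))
        proto = np.mean(subset, axis=0)  # prototypical embedding of E_s
        scores.append(cosine(proto, x_q))
    return float(np.mean(scores))

def confidence(x_q, H_i):
    """Geometric mean of the local and global scores."""
    return math.sqrt(local_score(x_q, H_i) * global_score(x_q, H_i))

def dynamic_threshold(H_i, T=20, temperature=0.9):
    """Eq. (2): hold out one instance at a time, score it against the rest,
    and scale the minimum by a temperature in [0, 1].  (Here the first T
    instances stand in for random held-out draws; H_i must have >= 2 items.)"""
    held_out = [confidence(H_i[k], H_i[:k] + H_i[k + 1:])
                for k in range(min(T, len(H_i)))]
    return temperature * min(held_out)

def rlogf(F_i, N_i):
    """Eq. (3): rule score = (F_i / N_i) * log2(F_i)."""
    return (F_i / N_i) * math.log2(F_i) if F_i > 0 else 0.0
```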
"This method allows a variety of logical rules to be considered, yet is precise enough that all selected logical rules are strongly associated with the category.", "6 Different categories and datasets may require different thresholds to select high-quality labels.", "Setting a percentage means we will have dynamic thresholds for different categories, so that the model will be robust to different categories and domains.", "We first compare our method with baselines on three datasets and further analyze the importance of each component in an ablation study.", "We also report the performance of our method with different numbers of seed rules and at different iterations.", "Finally, we show an error analysis and present a user study analyzing how many logical rules can be used as understandable explanations.", "We evaluate our method on the following three datasets.", "Note that we use each training set without labels as our unlabeled data.", "BC5CDR (Li et al., 2016) is the BioCreative V CDR task corpus.", "It contains 500 train, 500 dev, and 500 test PubMed articles, with 15,953 chemical and 13,318 disease entities.", "CHEMDNER (Krallinger et al., 2015) contains 10,000 PubMed abstracts with 84,355 chemical entities, in which the training/dev/test sets contain 14,522/14,572/12,434 sentences, respectively.", "CoNLL2003 (Sang and Meulder, 2003) consists of 14,041/3,250/3,453 sentences in the training/dev/test sets, extracted from Reuters news articles.", "We use Person, Location, and Organization entities in our experiments. 7", "Seed Rules and Parameters: In our experiments, we set the maximum length of spans to 5, and select the top K = 20 rules in the first iteration for BC5CDR and CoNLL2003, and K = 60 for the CHEMDNER dataset.", "Since it is relatively easy for users to manually provide some highly accurate TokenString rules (i.e., entity examples), we use TokenString rules as seeds for all experiments.", "To be specific, we manually select 20 highly frequent TokenString rules as seeds for BC5CDR and CoNLL2003, and 40 for CHEMDNER because of its large number of entities.", "The manual seeds for each dataset are shown in Appendix A.7.", "For pre-trained language models, we use BERT (Devlin et al., 2019) for CoNLL2003, and SciBERT (Beltagy et al., 2019) for BC5CDR and CHEMDNER.", "All our hyperparameters are selected on the dev sets.", "More setting details are in Appendix A.4.", "7 We do not evaluate on the Misc category because it does not represent a single semantic category, and thus cannot be represented with a small set of seed rules.", "We apply the following baselines to the test set directly and evaluate their performance.", "CGExpan (Zhang et al., 2020) is a state-of-the-art lexicon expansion method based on probing a language model.", "Since TokenString seed rules can be viewed as a seed lexicon, we expand its size to 1,000 using this method and use the expanded entries as TokenString rules.", "We apply the top 200, 500, 800, and 1,000 rules to the test sets and report the best performance.", "AutoNER (Shang et al., 2018b) takes lexicons of typed terms and untyped mined phrases as input.", "We use the best expanded lexicon from CGExpan as typed terms, and both the expanded lexicon and the mined phrases from AutoPhrase (Shang et al., 2018a) as untyped mined phrases.", "For detailed information on the AutoNER dictionary, refer to Appendix A.6.", "LinkedHMM (Safranchik et al., 2020) introduces a new generative model to incorporate noisy rules as supervision and predict entities using a neural NER model.",
"In our experiments, we use the expanded lexicon from CGExpan as tagging rules and AutoPhrase mined phrases as linking rules.", "HMM-agg. (Lison et al., 2020a) proposes a hidden Markov model to first generate weak labels from labeling functions and then train a sequence tagging model.", "We convert the lexicon expanded by CGExpan into labeling functions and report the results of the tagging model.", "Seed Rules + Neural Tagger: This method is our framework without iterative learning.", "After applying the seed rules, we use the weakly generated labels to train our neural tagger and report the result of the tagger.", "Self-Training: We first obtain weak labels by applying the seed rules.", "Then, we build a self-training system using the weak labels as initial supervision and our neural tagger as the base model.", "Methods that use noun phrases as entity candidates (Fries et al., 2017; Ratner et al., 2017; Huang and Riloff, 2010) are not included here because noun phrases have poor recall on the three datasets, as shown in Table 1.", "CGExpan outperforms other entity set expansion methods (e.g., Yan et al. (2019)), so we use CGExpan as our baseline for automatic lexicon expansion.", "We present the precision, recall, and micro-averaged F1 scores on the three datasets in Table 2.", "Results show that our method significantly outperforms the baseline methods, obtaining an average 24-point F1 improvement across the three datasets over the best baseline.", "We see that the precision of our seed rules is high, but their recall is low.", "The lexicon expansion method (CGExpan) can recognize more entities but also introduces errors, resulting in an improvement in recall but a dramatic decrease in precision.", "Existing weakly supervised methods (i.e., AutoNER, LinkedHMM and HMM-agg.) cannot recognize entities effectively with either the seed rules or the rules expanded by CGExpan.", "These methods require a high-precision lexicon as input; however, the precision of the automatically expanded lexicon is not sufficient to meet this requirement.", "Though seed rules are very accurate, they lack coverage of the various entities.", "Our method without iteration (Seed Rules + Neural Tagger) and self-training can achieve high precision because of the accurate pseudo labels generated from the seed rules.", "It is interesting to note that the self-training method based on our neural tagger also achieved low recall.", "We hypothesize that this is mainly due to the neural tagger overfitting the small set of labels from the seed rules.", "Ablation Study: We also performed an ablation study to analyze the importance of some components in our framework, and report the performance in Table 2 (the lower section).", "Results show that our learned rules are accurate but lack coverage.", "Without using the common phrases mined by AutoPhrase (i.e., Ours w/o AutoPhrase), our method achieves dramatically lower recall, demonstrating the effectiveness of common phrases for improving coverage.", "Without high-quality training instance selection (Ours w/o Instance Selection), the precision is lower than our best method, indicating the importance of the instance selection step.", "Performance vs. Iterations:",
"Performance vs. Iterations.", "Figure 3a shows the performance of our method at different iterations.", "We see that our method improves recall from 20% to over 60% during the learning process with a slight decrease in precision, and achieves the best F1 score after 25 iterations.", "Results on the other two datasets show the same trend (Appendix A.8).", "Performance with Different Numbers of Seeds.", "Figure 3b shows the performance of our method using different numbers of manually selected seed rules on the three datasets.", "We see that our method achieves continuous improvement as more seeds are used.", "We also notice that our method achieves over 55% F1 on CHEMDNER with only 10 seeds, demonstrating the effectiveness of our framework under a minimal supervision setting.", "Our method obtains significantly better results (around 65% F1) with 20 seeds than with 10 on BC5CDR and CoNLL2003, indicating that 20 seeds are a reasonable starting point for building a tagging system without much manual effort.", "AutoNER (Shang et al., 2018b) is a distantly supervised method using a manually created lexicon as supervision.", "We also compared against it to determine how many terms need to be manually created for AutoNER to achieve performance similar to ours.", "We conducted experiments on BC5CDR and used only 20 seeds for our method.", "For AutoNER, we used an additional M terms from a manually created lexicon (Shang et al., 2018b).", "(Footnote 8: The AutoNER authors compiled the lexicon from the MeSH database and the CTD Chemical and Disease vocabularies, which are manually created by experts.)", "Figure 3c shows the performance with different values of M.", "Results show that AutoNER needs an additional 2,000 terms to achieve performance similar to our method (around 66% F1), which demonstrates that our method is effective under minimal supervision, without access to a large manual lexicon.", "In our work, we designed three rule selection strategies (a selection sketch follows this passage): (1) entity type selects the top K rules for each entity category; (2) rule type selects the top K rules for each logical rule type; (3) entity&rule type selects the top K rules for each entity category and logical rule type.", "Results in Table 4 show that entity-type-based selection achieves the best performance.", "We show the statistics of the different types of rules learned after all iterations in Table 5.", "We see that TokenString is the most common rule type for the domain-specific datasets (BC5CDR and CHEMDNER).", "We also performed an error analysis on the BC5CDR dataset.", "Specifically, we sampled 100 entities predicted incorrectly by our learned rules and analyzed their error types.", "Analysis results show that 56% of errors are caused by an inability to distinguish closely related entity categories (chemicals vs. medications), and another 20% are due to incorrect detection of entity boundaries.",
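The following minimal sketch illustrates the first of the three rule selection strategies described above (top-K rules per entity category). The (rule, category, score) layout and the helper name are assumptions for illustration, not the authors' code.

```python
from collections import defaultdict

def select_top_k_by_entity_type(scored_rules, k=20):
    """'Entity type' strategy: keep the K highest-scoring rules within each
    entity category. `scored_rules` is a list of (rule, category, score)."""
    by_category = defaultdict(list)
    for rule, category, score in scored_rules:
        by_category[category].append((score, rule))
    return {
        category: [r for _, r in sorted(pairs, key=lambda p: p[0], reverse=True)[:k]]
        for category, pairs in by_category.items()
    }
```

The other two strategies would group by rule type, or by the (category, rule type) pair, before taking the top K of each group.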
"We also notice that some spans (e.g., HIT type II) and their sub-spans (e.g., HIT) are both disease entities (i.e., nested entities), but only the longer ones are annotated with gold labels.", "Our rules sometimes predict only the sub-spans as diseases, which accounts for the remaining 20% of the errors.", "We put examples of each error type in Table 3.", "Since our logical rules are intuitive clues for recognizing entities, we hypothesize that automatically learned rules can be used as understandable explanations for the predictions of entities.", "Therefore, we conducted a user study to find out how many logical rules are explainable.", "Specifically, we applied the learned rules on BC5CDR and sampled 100 entities labeled by at least one logical rule other than TokenString for our user study.", "(Footnote 10: We exclude TokenString rules because they are self-explanatory.)", "[Table 6 excerpt: labeled entities and sentences with the learned logical rules and entity types, e.g., This occlusion occurred after EACA therapy in a patient with SAH and histopathological documentation of recurrent SAH.]", "Some examples are shown in Table 6.", "We asked two annotators without domain knowledge and one biological expert to annotate whether our learned logical rules can be understood and used as explanations for why a span is predicted as a disease or chemical.", "Manual annotation results show that the two annotators and the biological expert judge that 81%, 87%, and 70% of the predicted entities, respectively, can be explained by the logical rules.", "Different types of methods have been proposed to build named entity tagging systems using indirect or limited supervision.", "Distant supervision (Mintz et al., 2009) is one class of methods proposed to alleviate human effort by training models on existing lexicons or knowledge bases.", "Recently, there have been attempts to build NER systems with distant supervision (Ren et al., 2015; Fries et al., 2017; Giannakopoulos et al., 2017).", "AutoNER (Shang et al., 2018b) trained an NER system by using both typed lexicons and untyped mined phrases as supervision.",
"Peng et al. (2019) proposed an AdaPU algorithm to incorporate an incomplete dictionary as supervision.", "However, lexicons or knowledge bases are not always available for new domains and tasks, especially in specialized domains and low-resource settings.", "Manually constructing these lexicons is often very expensive.", "Bootstrapping is a technique for learning models from a small set of seeds, which has been applied to word sense disambiguation (Yarowsky, 1995) and product attribute extraction (Putthividhya and Hu, 2011).", "Bootstrapping methods (Niu et al., 2003; Huang and Riloff, 2010) have been proposed for building entity tagging systems under the assumption that target entities are just proper names or noun phrases.", "Gupta and Manning (2014) used an improved pattern scoring method to bootstrap domain-specific terminologies with restricted part-of-speech patterns.", "However, previous work focused only on disambiguating entity types, assuming target entities are given or are simple syntactic chunks.", "But, as we showed earlier, target entities often do not align well with simple syntactic chunks.", "Bootstrapping methods that can automatically detect entity boundaries and predict their types simultaneously are desirable in real-world applications.", "Recently, methods have been proposed to obtain weak labels by manually writing labeling functions (Bach et al., 2017).", "Based on this idea, several methods (Safranchik et al., 2020; Lison et al., 2020a) have been proposed for NER, assuming the availability of sufficient handcrafted labeling functions and lexicons.", "However, manually designing labeling rules is challenging and requires a significant amount of manual effort and domain expertise.", "Our work aims to learn logical rules automatically to reduce human effort.", "In this work, we explored how to build a tagger from a small set of seed logical rules and unlabeled data.", "We defined five types of simple logical rules and introduced compound logical rules, composed of simple rules, to detect entity boundaries and classify their types simultaneously (a minimal composition sketch follows this list).", "We also designed a dynamic label selection method to select accurate pseudo labels, generated from the learned rules, for training a discriminative tagging model.", "Experimental results demonstrate that our method is effective and outperforms existing weakly supervised methods." ]
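As a rough illustration of composing compound logical rules from simple ones, the sketch below treats a simple rule as a predicate over a candidate span and a compound rule as a conjunction of predicates. Only TokenString is named in the text above; the pre_ngram context rule and the span fields are hypothetical stand-ins for the remaining rule types.

```python
from typing import Callable, NamedTuple, Tuple

class Span(NamedTuple):
    tokens: Tuple[str, ...]   # tokens inside the candidate span
    left: Tuple[str, ...]     # context tokens to the left of the span
    right: Tuple[str, ...]    # context tokens to the right of the span

Rule = Callable[[Span], bool]  # a simple logical rule is a predicate on a span

def token_string(*words: str) -> Rule:
    # TokenString rule: the span's surface form matches an entity example exactly.
    return lambda s: s.tokens == words

def pre_ngram(*words: str) -> Rule:
    # Hypothetical context rule: the span is immediately preceded by this n-gram.
    return lambda s: s.left[-len(words):] == words

def conjunction(*rules: Rule) -> Rule:
    # A compound rule is the logical AND of its simple constituents.
    return lambda s: all(r(s) for r in rules)

# Hypothetical compound rule that could support a Disease prediction:
rule = conjunction(pre_ngram("suffered", "from"))
print(rule(Span(("SAH",), ("patients", "suffered", "from"), ("and",))))  # True
```

Because each compound rule both matches a specific span and carries a category, firing it detects the entity boundary and assigns the type in one step, which is the behavior the conclusion describes.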
[ "method", "abstain", "objective", "method", "method", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "result", "abstain", "result", "abstain", "result", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "method", "abstain", "result", "method", "abstain", "abstain", "method", "method", "objective", "objective", "objective", "objective", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "method", "result", "abstain", "method", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "method", "abstain", "method", "result", "method", "method", "abstain", "result", "result", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "abstain", "other", "other", "other", "other", "objective", "objective", "method", "method", "objective" ]
[ "We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples.", "Given a sentence and a target syntactic form (e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the sentence with the desired syntax.", "We show it is possible to create training data for this task by first doing backtranslation at a very large scale, and then using a parser to label the syntactic transformations that naturally occur during this process.", "Such data allows us to train a neural encoder-decoder model with extra inputs to specify the target syntax.", "A combination of automated and human evaluations shows that SCPNs generate paraphrases that follow their target specifications without decreasing paraphrase quality when compared to baseline (uncontrolled) paraphrase systems.", "Furthermore, they are more capable of generating syntactically adversarial examples that both (1) fool pretrained models and (2) improve the robustness of these models to syntactic variation when used to augment their training data.", "Natural language processing datasets often suffer from a dearth of linguistic variation, which can hurt the generalization of models trained on them.", "Recent work has shown it is possible to easily break many learned models by evaluating them on adversarial examples (Goodfellow et al., 2015), which are generated by manually introducing lexical, pragmatic, and syntactic variation not seen in the training set (Ettinger et al., 2017).", "Robustness to such adversarial examples can potentially be improved by augmenting the training data, as shown by prior work that introduces rule-based lexical substitutions (Jia and Liang, 2017; Liang et al., 2017).", "However, more complex transformations, such as generating syntactically adversarial examples, remain an open challenge, as input semantics must be preserved in the face of potentially substantial structural modifications.", "In this paper, we introduce a new approach for learning to do syntactically controlled paraphrase generation: given a sentence and a target syntactic form (e.g., a constituency parse), a system must produce a paraphrase of the sentence whose syntax conforms to the target.", "General purpose syntactically controlled paraphrase generation is a challenging task.", "Approaches that rely on handcrafted rules and grammars, such as the question generation system of McKeown (1983), support only a limited number of syntactic targets.", "We introduce the first learning approach for this problem, building on the generality of neural encoder-decoder models to support a wide range of transformations.", "In doing so, we face two new challenges: (1) obtaining a large number of paraphrase pairs for training, and (2) defining syntactic transformations with which to label these pairs.", "Since no large-scale dataset of sentential paraphrases exists publicly, we follow Wieting et al.
(2017) and automatically generate millions of paraphrase pairs using neural backtranslation.", "Backtranslation naturally injects linguistic variation between the original sentence and its backtranslated counterpart.", "By running the process at a very large scale and testing for the specific variations we want to produce, we can gather ample input-output pairs for a wide range of phenomena.", "Our focus is on syntactic transformations, which we define using templates derived from linearized constituency parses (Section 2).", "Given such parallel data, we can easily train an encoder-decoder model that takes a sentence and target syntactic template as input, and produces the desired paraphrase.", "(Footnote 1: Code, labeled data, and pretrained models are available at https://github.com/miyyer/scpn.)", "A combination of automated and human evaluations shows that the generated paraphrases almost always follow their target specifications, while paraphrase quality does not significantly deteriorate compared to vanilla neural backtranslation (Section 4).", "Our model, the syntactically controlled paraphrase network (SCPN), is capable of generating adversarial examples for sentiment analysis and textual entailment datasets that significantly impact the performance of pretrained models (Figure 1).", "We also show that augmenting training sets with such examples improves robustness without harming accuracy on the original test sets (Section 5).", "Together these results not only establish the first general purpose syntactically controlled paraphrase approach, but also suggest that this general paradigm could be used for controlling many other aspects of the target text.", "In this section, we describe a general purpose process for gathering and labeling training data for controlled paraphrase generation.", "Inducing paraphrases from bilingual data has long been an effective method for overcoming data limitations.", "In particular, bilingual pivoting (Bannard and Callison-Burch, 2005) finds quality paraphrases by pivoting through a different language.", "Mallinson et al.
(2017) show that neural machine translation (NMT) systems outperform phrase-based MT on several paraphrase evaluation metrics.", "In this paper, we use the PARANMT-50M corpus from Wieting and Gimpel (2017).", "This corpus consists of over 50 million paraphrases obtained by backtranslating the Czech side of the CzEng (Bojar et al., 2016) parallel corpus.", "The pretrained Czech-English model used for translation came from the Nematus NMT system (Sennrich et al., 2017).", "The training data of this system includes four sources: Common Crawl, CzEng 1.6, Europarl, and News Commentary.", "The CzEng corpus is the largest of these four and was found to have significantly more syntactic diversity than the other data sources (Wieting and Gimpel, 2017).", "(Footnote 2: Syntactic diversity was measured by the entropy of the top two levels of the parse trees in the corpora.)", "Automatically labeling paraphrases with syntactic transformations: we need labeled transformations in addition to paraphrase pairs to train a controlled paraphrase model.", "Manually annotating each of the millions of paraphrase pairs is clearly infeasible.", "Our key insight is that target transformations can be detected (with some noise) simply by parsing these pairs.", "(Footnote 3: Similar automated filtering could be used to produce data for many other transformations, such as tense changes, point-of-view shifts, and even stylometric pattern differences (Feng et al., 2012); this is an interesting area for future work.)", "Specifically, we parse the backtranslated paraphrases using the Stanford parser (Manning et al., 2014), which yields a pair of constituency parses (p_1, p_2) for each sentence pair (s_1, s_2), where s_1 is the reference English sentence in the CzEng corpus and s_2 is its backtranslated counterpart.", "(Footnote 4: Because of the large dataset size, we use the faster but less accurate shift-reduce parser written by John Bauer.)", "For syntactically controlled paraphrasing, we assume s_1 and p_2 are inputs, and the model is trained to produce s_2.", "To overcome learned biases of the NMT system, we also include reversed pairs (s_2, s_1) during training.", "To provide syntactic control, we linearize the bracketed parse structure without leaf nodes (i.e., tokens).", "For example, the linearized parse tree for the sentence She drove home.
is (S(NP(PRP))(VP(VBD)(NP(NN)))(.)).", "A system that requires a complete linearized target parse at test time is unwieldy; how do we go about choosing the target parse?", "To simplify test-time usage, we relax the target syntactic form to a parse template, which we define as the top two levels of the linearized parse tree (the root along with the level immediately below it); the prior example's template is S NP VP.", "In the next section, we design models such that users can feed in either parse templates or full parses depending on their desired level of control.", "The SCPN encoder-decoder architecture is built from standard neural modules, as we describe in this section.", "Given a sentential paraphrase pair (s_1, s_2) and a corresponding target syntax tree p_2 for s_2, we encode s_1 using a bidirectional LSTM (Hochreiter and Schmidhuber, 1997), and our decoder is a two-layer LSTM augmented with soft attention over the encoded states (Bahdanau et al., 2014) as well as a copy mechanism (See et al., 2017).", "Following existing work in NMT (Sennrich et al., 2015), we preprocess s_1 and s_2 into subword units using byte pair encoding, and we perform decoding using beam search.", "For all attention computations, we use a bilinear product with a learned parameter matrix W: given vectors u and v, we score them by u^T W v.", "We incorporate the target syntax p_2 into the generation process by modifying the inputs to the decoder.", "In particular, a standard decoder LSTM receives two inputs at every time step: (1) the embedding w_{t-1} of the ground-truth previous word in s_2, and (2) an attention-weighted average a_t of the encoder's hidden states.", "We additionally provide a representation z_t of the target p_2, so at every time step the decoder computes h_t = LSTM([w_{t-1}; a_t; z_t]).", "Since we preserve bracketed parse structure, our linearized parses can have hundreds of tokens.", "Forcing all of the relevant information contained in the parse tree into a single fixed representation (i.e., the last hidden state of an LSTM) is difficult with such long sequences.", "Intuitively, we want the decoder to focus on the portions of the target parse tree that correspond to the current time step.", "As such, we encode p_2 using a (unidirectional) LSTM and compute z_t as an attention-weighted average of the LSTM's encoded states at every time step.", "This attention mechanism is conditioned on the decoder's previous hidden state h_{t-1}.", "As mentioned in Section 2.2.1, user-friendly systems should be able to accept high-level parse templates as input rather than full parses.", "Preliminary experiments show that SCPN struggles to maintain the semantics of the input sentence when we replace the full target parse with templates, and frequently generates short, formulaic sentences.", "The paraphrase generation model seems to rely heavily on the full syntactic parse to determine output length and clausal ordering, making it difficult to see how to modify the SCPN architecture for template-only target specification.", "Instead, we train another model with exactly the same architecture as SCPN to generate complete parses from parse templates.", "This allows us to do the prediction in two steps: first predict the full syntactic tree, and then use that tree to produce the paraphrase.", "Concretely, for the first step, assume t_2 is the parse template formed from the top two levels of the target parse p_2.",
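A small sketch of how a parse template could be read off a linearized parse such as the one above, by keeping only the labels in the top two levels; the exact tokenization of the parser output is an assumption based on the example string.

```python
def parse_template(linearized: str, depth: int = 2) -> str:
    """Extract a parse template: the constituent labels in the top `depth`
    levels of a linearized parse such as (S(NP(PRP))(VP(VBD)(NP(NN)))(.))."""
    labels, level, i = [], 0, 0
    while i < len(linearized):
        ch = linearized[i]
        if ch == "(":
            level += 1
            j = i + 1
            while j < len(linearized) and linearized[j] not in "()":
                j += 1
            label = linearized[i + 1 : j].strip()
            if level <= depth and label:
                labels.append(label)
            i = j
        else:
            if ch == ")":
                level -= 1
            i += 1
    return " ".join(labels)

print(parse_template("(S(NP(PRP))(VP(VBD)(NP(NN)))(.))"))  # -> "S NP VP ."
```

Deeper constituents (PRP, VBD, NN) are discarded, leaving exactly the coarse skeleton the parse generator is asked to expand back into a full tree.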
"The input to this parse generator is the input parse p_1 and t_2, and it is trained to produce p_2.", "We train the parse generator separately from SCPN (i.e., with no joint optimization) for efficiency purposes.", "At test time, a user only has to specify an input sentence and a target template; the template is fed through the parse generator, and its predicted target parse is in turn sent to SCPN for paraphrase generation (see Figure 2).", "[Figure 2: SCPN implements parse generation from templates as well as paraphrase generation from full parses as encoder-decoder architectures (attention depicted with dotted lines, copy mechanism with double-stroked lines).]", "By switching from full parses to templates, we have reduced but not completely removed the burden of coming up with a target syntactic form.", "Certain templates may not be appropriate for particular input sentences (e.g., turning a long sentence with multiple clauses into a noun phrase).", "However, others may be too similar to the input syntax, resulting in very little change.", "Since template selection is not a major focus of this paper, we use a relatively simple procedure, selecting the twenty most frequent templates in PARANMT-50M.", "(Footnote 5: However, we do provide some qualitative examples of rare and medium-frequency templates in Table 3.)", "Since we cannot generate a valid paraphrase for every template, we postprocess to remove nonsensical outputs.", "In particular, we filter generated paraphrases using n-gram overlap and paraphrastic similarity, the latter computed using the pretrained WORD,TRIAVG sentence embedding model from Wieting and Gimpel (2017); a filtering sketch follows this passage.", "(Footnote 6: After qualitatively analyzing the impact of different filtering choices, we set the minimum n-gram overlap to 0.5 and the minimum paraphrastic similarity to 0.7.)", "These paraphrastic sentence embeddings significantly outperform prior work due to the PARANMT-50M data.", "Before using SCPN to generate adversarial examples on downstream datasets, we need to make sure that its output paraphrases are valid and grammatical, and that its outputs follow the specified target syntax.", "In this section, we compare SCPN to a neural backtranslation baseline (NMT-BT) on the development set of our PARANMT-50M split using both human and automated experiments.", "NMT-BT is the same pretrained Czech-English model used to create PARANMT-50M; however, here we use it to generate in both directions (i.e., English-Czech and Czech-English).", "[Table 1: A crowdsourced paraphrase evaluation on a three-point scale (2 = grammatical paraphrase, 1 = ungrammatical paraphrase, 0 = no paraphrase) shows that both NMT-BT and SCPN produce mostly grammatical paraphrases. Ratings (2 / 1 / 0): SCPN w/ full parses 63.7 / 14.0 / 22.3; SCPN w/ templates 62.3 / 19.3 / 18.3; NMT-BT 65.0 / 17.3 / 17.7.]", "To measure paraphrase quality and grammaticality, we perform a crowdsourced experiment in which workers are asked to rate a paraphrase pair (s, g) on the three-point scale of Kok and Brockett (2010), where s is the source sentence and g is the generated sentence.",
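Returning to the post-processing filter described above, here is a minimal sketch using the thresholds from footnote 6. The precise definition of n-gram overlap is not given in the text, so the Jaccard-style overlap below is an assumption, and `para_sim` stands in for the sentence-embedding similarity computed elsewhere.

```python
def ngram_set(tokens, n=2):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def keep_paraphrase(src_tokens, gen_tokens, para_sim,
                    min_overlap=0.5, min_sim=0.7):
    """Drop nonsensical outputs: require enough n-gram overlap with the source
    and a paraphrastic similarity above the threshold (0.5 / 0.7 per footnote 6).
    """
    src, gen = ngram_set(src_tokens), ngram_set(gen_tokens)
    overlap = len(src & gen) / max(1, len(src | gen))  # assumed overlap definition
    return overlap >= min_overlap and para_sim >= min_sim
```

The two tests pull in opposite directions: the overlap floor rejects outputs that drift too far from the source lexically, while the similarity floor rejects outputs whose meaning has diverged despite surface overlap.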
"A 0 on this scale indicates no paraphrase relationship, while 1 means that g is an ungrammatical paraphrase of s, and 2 means that g is a grammatical paraphrase of s.", "We select 100 paraphrase pairs from the development set of our PARANMT-50M split (after the postprocessing steps detailed in Section 3.3) and have three workers rate each pair.", "To focus the evaluation on the effect of syntactic manipulation on quality, we only select sentences whose top-level parse templates differ (i.e., t_s ≠ t_g), ensuring that the output of both systems varies syntactically from the source sentences.", "The results (Table 1) show that the uncontrolled NMT-BT model's outputs are comparable in quality and grammaticality to those of SCPN; neither system has a significant edge.", "More interestingly, we observe no quality drop when feeding templates to SCPN (via the parse generator as described in Section 3.2) instead of complete parse trees, which suggests that the parse generator is doing a good job of generating plausible parse trees; thus, for all of the adversarial evaluations that follow, we only use the templated variant of SCPN.", "We next determine how often SCPN's generated paraphrases conform to the target syntax: if g is a generated paraphrase and p_g is its parse, how often does p_g match the ground-truth target parse p_2?", "We evaluate on our development set using exact template match: g is deemed a syntactic match to s_2 only if the top two levels of its parse p_g match those of p_2.", "We evaluate two SCPN configurations, where one is given the full target parse p_2 and the other is given the result of running our parse generator on the target template t_2.", "As a sanity check, we also evaluate our parse generator using the same metric.", "The results (Table 2) show that SCPN does indeed achieve syntactic control over the majority of its inputs.", "Our parse generator produces full parses that almost always match the target template; however, paraphrases generated using these parses are less syntactically accurate.", "(Footnote 8: With that said, exact match is a harsh metric; these paraphrases are more accurate than the table suggests, as they often differ by only a single constituent.)", "A qualitative inspection of the generated parses reveals that they can differ from the ground-truth target parse in terms of the ordering or existence of lower-level constituents (Table 6); we theorize that these differences may throw off SCPN's decoder.", "The NMT-BT system produces paraphrases that tend to be syntactically very similar to the input sentences: 28.7% of these paraphrases have the same template as the input sentence s_1, while only 11.1% have the same template as the ground-truth target s_2.", "Even though we train SCPN on data generated by NMT backtranslation, we avoid this issue by incorporating syntax into our learning process.", "The intrinsic evaluations show that SCPN produces paraphrases of comparable quality to the uncontrolled NMT-BT system while also adhering to the target specifications.", "Next, we examine the utility of controlled paraphrases for adversarial example generation.", "To formalize the problem, assume a pretrained model for some downstream task produces prediction y_x given a test-time instance x.", "An adversarial example x′ can be formed by making label-preserving modifications to x such that y_x ≠ y_{x′}.",
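The definition above translates directly into a check like the following sketch, which also anticipates the "broken example" criterion used in the experiments below; `predict` is a hypothetical callable wrapping the pretrained model.

```python
def is_broken(predict, x, y_true, paraphrases):
    """An example is 'broken' when the model is correct on x but wrong on at
    least one label-preserving paraphrase x' (i.e., y_x != y_x')."""
    if predict(x) != y_true:
        return False   # only examples the model originally got right can count
    return any(predict(xp) != y_true for xp in paraphrases)
```

Note that the paraphrases are assumed label-preserving; whether that assumption actually holds is exactly what the crowdsourced validity study below measures.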
"Our results demonstrate that controlled paraphrase generation with appropriate template selection produces far more valid adversarial examples than backtranslation on sentiment analysis and entailment tasks.", "We evaluate our syntactically adversarial paraphrases on the Stanford Sentiment Treebank (Socher et al., 2013, SST) and SICK entailment detection (Marelli et al., 2014).", "While both are relatively small datasets, we select them because they offer different experimental conditions: SST contains complicated sentences with high syntactic variance, while SICK almost exclusively consists of short, simple sentences.", "As a baseline, we compare the ten most probable beams from NMT-BT to controlled paraphrases generated by SCPN using ten templates randomly sampled from the template set described in Section 3.3.", "(Footnote 9: We also experimented with the diverse beam search modification proposed by Li et al. (2016b) for NMT-BT but found that it dramatically warped the semantics of many beams; crowdsourced workers rated 49% of its outputs as 0 on the three-point scale.)", "[Table excerpt: template / paraphrase / original examples, e.g., with the help of captain picard, the borg will be prepared for everything.]", "We also need pretrained models for which to generate adversarial examples; we use the bidirectional LSTM baseline for both SST and SICK outlined in Tai et al. (2015), since it is a relatively simple architecture that has proven to work well for a variety of problems.", "(Footnote 10: We initialize both models using pretrained GloVe embeddings (Pennington et al., 2014) and set the LSTM hidden dimensionality to 300.)", "Since the SICK task involves characterizing the relationship between two sentences, for simplicity we only generate adversarial examples for the first sentence and keep the second sentence fixed to the ground truth.", "For each dataset, we generate paraphrases for held-out examples and then run a pretrained model over them.", "(Footnote 11: Since the SICK development dataset is tiny, we additionally generate adversarial examples on its test set.)", "We consider a development example x broken if the original prediction y_x is correct, but the prediction y_{x′} for at least one paraphrase x′ is incorrect.", "For SST, we evaluate on the binary sentiment classification task and ignore all phrase-level labels (because our paraphrase models are trained only on sentences).", "Table 4 shows that for both datasets, SCPN breaks many more examples than NMT-BT.", "Moreover, as shown in Table 5, NMT-BT's paraphrases differ from the original example mainly by lexical substitutions, while SCPN often produces dramatically different syntactic structures.", "We have shown that we can break pretrained models with controlled paraphrases, but are these paraphrases actually valid adversarial examples?", "After all, it is possible that the syntactic modifications cause informative clauses or words (e.g., negations) to go missing.", "To measure the validity of our adversarial examples, we turn again to crowdsourced experiments.", "We ask workers to choose the appropriate label for a given sentence or sentence pair (e.g., positive or negative for SST), and then we compare the worker's judgment to the original development example's label.", "For both models, we randomly select 100 adversarial examples and have three workers annotate each one.", "The results (Table 4) show that on the more complex SST data, a higher percentage of SCPN's paraphrases are valid adversarial examples than those of NMT-BT, which is especially encouraging given that our model also generates significantly more adversarial examples.", "If we additionally augment the training data of both tasks with controlled paraphrases, we can increase a downstream model's
robustness to adversarial examples in the development set.", "To quantify this effect, we generate controlled paraphrases for the training sets of SST and SICK using the same templates as in the previous experiments.", "Then, we include these paraphrases as additional training examples and retrain our biLSTM task models.", "(Footnote 12: We did not experiment with more complex augmentation methods (e.g., downweighting the contribution of paraphrased training examples to the loss).)", "As shown by Table 4, training on SCPN's paraphrases significantly improves robustness to syntactic adversaries without affecting accuracy on the original test sets.", "One important caveat is that this experiment only shows robustness to the set of templates used by SCPN; in real-world applications, careful template selection based on the downstream task, along with using a larger set of templates, is likely to increase robustness to less constrained syntactic adversaries.", "Augmentation with NMT-BT's paraphrases increases robustness on SICK, but on SST, it degrades test accuracy without any significant gain in robustness; this is likely due to its lack of syntactic variation compared to SCPN.", "In the previous section, we quantitatively evaluated SCPN's ability to produce valid paraphrases and adversarial examples.", "Here, we take a look at actual sentences generated by the model.", "In addition to analyzing SCPN's strengths and weaknesses compared to NMT-BT, we examine the differences between paraphrases generated by various configurations of the model to determine the impact of each major design decision (e.g., templates instead of full parses).", "Syntactic manipulation: Table 3 demonstrates SCPN's ability to perform syntactic manipulation, showing paraphrases for two sentences generated using different templates.", "Many of the examples exhibit complex transformations while preserving both the input semantics and grammaticality, even when the target syntax is very different from that of the source (e.g., when converting a declarative to a question).", "However, the failure cases demonstrate that not every template results in a valid paraphrase, as nonsensical outputs are sometimes generated when trying to squeeze the input semantics into an unsuitable target form.", "Adversarial examples: Table 5 shows that SCPN and NMT-BT differ fundamentally in the type of adversaries they generate.", "While SCPN mostly avoids lexical substitution in favor of making syntactic changes, NMT-BT does the opposite.", "These examples reinforce the results of the experiment in Section 4.2, which demonstrates NMT-BT's tendency to stick to the input syntax.", "While SCPN is able to break more validation examples than NMT-BT, it is alarming that even simple lexical substitution can break such a high percentage of examples in both datasets we tested.", "Ebrahimi et al. (2017) observe a similar phenomenon with HotFlip, their gradient-based substitution method for generating adversarial examples.", "While NMT-BT does not receive signal from the downstream task like HotFlip does, it also does not require external constraints to maintain grammaticality and limit semantic divergence.", "As future work, it would be interesting to provide this downstream signal to both NMT-BT and SCPN; for the latter, perhaps this signal could guide the template selection process, which is currently fixed to a small, finite set.", "Templates vs.
gold parses: Why does the level of syntactic control decrease when we feed SCPN parses generated from templates instead of gold parses (Table 2)?", "The first two examples in Table 6 demonstrate issues with the templated approach.", "In the first example, the template is not expressive enough for the parse generator to produce slots for the highlighted clause.", "A potential way to combat this type of issue is to dynamically define templates based on factors such as the length of the input sentence.", "In the second example, a parsing error results in an inaccurate template, which in turn causes SCPN to generate a semantically divergent paraphrase.", "The final two examples show instances where the templated model performs as well as the model with gold parses, displaying the capabilities of our parse generator.", "[Table 6 excerpt: template (S(ADVP)(NP)(VP)); original: moody, heartbreaking, and filmed in a natural, unforced style that makes its characters seem entirely convincing even when its script is not.]", "Removing syntactic control: To examine the differences between syntactically controlled and uncontrolled paraphrase generation systems, we train an SCPN without including z_t, the attention-weighted average of the encoded parse, in the decoder input.", "This uncontrolled configuration produces outputs that are very similar to its inputs, often syntactically identical with only minor lexical substitution.", "Concretely, the uncontrolled SCPN produces a paraphrase with the same template as its input 38.6% of the time, compared to NMT-BT's 28.7% (Section 4.2).", "(Footnote 13: A configuration without the copy mechanism copies input syntax even more, with a 47.7% exact template match.)", "Related Work: Paraphrase generation (Androutsopoulos and Malakasiotis, 2010; Madnani and Dorr, 2010) has been tackled using many different methods, including those based on hand-crafted rules (McKeown, 1983), synonym substitution (Bolshakov and Gelbukh, 2004), machine translation (Quirk et al., 2004), and, most recently, deep learning (Prakash et al., 2016; Mallinson et al., 2017; Dong et al., 2017).", "Our syntactically controlled setting also relates to controlled language generation tasks in which one desires to generate or rewrite a sentence with particular characteristics.", "We review related work in both paraphrase generation and controlled language generation below.", "Madnani and Dorr (2010) review data-driven methods for paraphrase generation, noting two primary families: template-based and translation-based.", "The first family includes approaches that use hand-crafted rules (McKeown, 1983), thesaurus-based substitution (Bolshakov and Gelbukh, 2004; Zhang and LeCun, 2015), lattice matching (Barzilay and Lee, 2003), and template-based shake & bake paraphrasing (Carl et al., 2005).", "These methods often yield grammatical outputs, but they can be limited in diversity.", "The second family rewrites the input using methods based on parallel text (Bannard and Callison-Burch, 2005), machine translation (Quirk et al., 2004; Napoles et al., 2016; Suzuki et al., 2017), or related statistical techniques (Zhao et al., 2009).", "Of particular relevance to our work are methods that incorporate syntax to improve the fluency of paraphrase output.", "Callison-Burch (2008) constrains paraphrases to be the same syntactic type as the input, though he was focused on phrase-level, not sentential, paraphrasing.", "Pang et al.
(2003) learn finite-state automata from translation pairs that generate syntactic paraphrases, though this requires multiple translations into the same language and cannot be used to generate paraphrases outside this dataset.", "Shen et al. (2006) extend this to deeper syntactic analysis.", "All of these approaches use syntax to improve grammaticality, which is handled by our decoder language model.", "[Table excerpt: template (S(CC)(S)(,)(NP)(ADVP)(VP)); original: damian encouraged me, criticized, he ... he always made me go a little deeper.]", "Recent efforts involve neural methods.", "Iyyer et al. (2014) generate paraphrases with dependency tree recursive autoencoders by randomly selecting parse trees at test time.", "Li et al. (2017) generate paraphrases using deep reinforcement learning.", "Gupta et al. (2017) use variational autoencoders to generate multiple paraphrases.", "These methods differ from our approach in that none offer fine-grained control over the syntactic form of the paraphrase.", "There is growing interest in generating language with the ability to influence the topic, style, or other properties of the output.", "Most related to our method are approaches based on syntactic transformations, like the tree-to-tree sentence simplification method of Woodsend and Lapata (2011) based on quasi-synchronous grammar (Smith and Eisner, 2006).", "Our method is more general, since we do not require a grammar and use only soft constraints.", "Perhaps the closest to the proposed method is the conditioned recurrent language model of Ficler and Goldberg (2017), which produces language with user-selected properties such as sentence length and formality but is incapable of generating paraphrases.", "For machine translation output, Niu et al. (2017) control the level of formality, while Sennrich et al. (2016) control the level of politeness.", "For dialogue, Li et al. (2016a) affect the output using speaker identity, while Wang et al. (2017) develop models to influence the topic and style of the output.", "Shen et al. (2017) perform style transfer on non-parallel texts, while Guu et al. (2017) generate novel sentences from prototypes; again, these methods are not necessarily seeking to generate meaning-preserving paraphrases, merely transformed sentences that have an altered style.", "We propose SCPN, an encoder-decoder model for syntactically controlled paraphrase generation, and show that it is an effective way of generating adversarial examples.", "Using a parser, we label syntactic variation in large-scale backtranslated data, which provides training data for SCPN.", "The model exhibits far less lexical variation than existing uncontrolled paraphrase generation systems, instead preferring purely syntactic modifications.", "It is capable of generating adversarial examples that fool pretrained NLP models.", "Furthermore, by training on such examples, we increase the robustness of these models to syntactic variation.", "We thank the reviewers for their insightful comments.", "We would also like to thank Mark Yatskar for many useful suggestions on our experiments." ]
[ "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "method", "abstain", "method", "method", "method", "abstain", "method", "objective", "abstain", "objective", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "other", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "other", "other" ]
[ "Recent research shows that pre-trained models (PTMs) are beneficial to Chinese Word Segmentation (CWS).", "However, PTMs used in previous works usually adopt language modeling as their pre-training task, lacking task-specific prior segmentation knowledge and ignoring the discrepancy between pre-training tasks and downstream CWS tasks.", "In this paper, we propose a CWS-specific pre-trained model, METASEG, which employs a unified architecture and incorporates a meta learning algorithm into a multi-criteria pre-training task.", "Empirical results show that METASEG can utilize common prior segmentation knowledge from different existing criteria and alleviate the discrepancy between pre-trained models and downstream CWS tasks.", "Besides, METASEG achieves new state-of-the-art performance on twelve widely used CWS datasets and significantly improves model performance in low-resource settings.", "Chinese Word Segmentation (CWS) is a fundamental task for Chinese natural language processing (NLP), which aims at identifying word boundaries in a sentence composed of continuous Chinese characters.", "It provides a basic component for other NLP tasks such as named entity recognition (Li et al., 2020), dependency parsing (Yan et al., 2020), and semantic role labeling (Xia et al., 2019).", "Generally, most previous studies model the CWS task as a character-based sequence labeling task (Xue, 2003; Zheng et al., 2013; Chen et al., 2015; Ma et al., 2018; Qiu et al., 2020).", "Recently, pre-trained models (PTMs) such as BERT (Devlin et al., 2019) have been introduced into CWS tasks; they provide prior semantic knowledge and boost the performance of CWS systems.", "Yang (2019) directly fine-tunes BERT on several CWS benchmark datasets.", "[Table 1: An example of CWS under different criteria (CTB6, PKU, MSRA) for the sentence Li Na entered the semi-final.]", "Huang et al. (2020) fine-tune BERT in a multi-criteria learning framework, where each criterion shares a common BERT-based feature extraction layer and has a separate projection layer.", "Meng et al. (2019) combine Chinese character glyph features with pre-trained BERT representations.", "Tian et al. (2020) propose a neural CWS framework, WMSEG, which utilizes memory networks to incorporate wordhood information into the pre-trained model ZEN (Diao et al., 2019).", "PTMs have proved quite effective when fine-tuned on downstream CWS tasks.", "However, the PTMs used in previous works usually adopt language modeling as their pre-training task.", "Thus, they usually lack task-specific prior knowledge for CWS and ignore the discrepancy between pre-training tasks and downstream CWS tasks.", "To deal with the aforementioned problems of PTMs, we consider introducing a CWS-specific pre-trained model based on existing CWS corpora, to leverage the prior segmentation knowledge.", "However, there are multiple inconsistent segmentation criteria for CWS, where each criterion represents a unique style of segmenting a Chinese sentence into words, as shown in Table 1.
Meanwhile, we can easily observe that different segmentation criteria can share a large proportion of word boundaries, such as the boundaries around the word units Li Na, entered, and the semi-final, which are the same under all segmentation criteria.", "This shows that common prior segmentation knowledge is shared by different criteria.", "To fully utilize the common segmentation knowledge of different criteria, METASEG utilizes a unified architecture and introduces a multi-criteria pre-training task.", "Moreover, to alleviate the discrepancy between pre-trained models and downstream unseen criteria, a meta learning algorithm (Finn et al., 2017) is incorporated into the multi-criteria pre-training task of METASEG.", "Experiments show that METASEG outperforms previous works significantly and achieves new state-of-the-art results on twelve CWS datasets.", "Further experiments show that METASEG has better generalization performance on downstream unseen CWS tasks in low-resource settings, and improves recall for Out-of-Vocabulary (OOV) words.", "To the best of our knowledge, METASEG is the first task-specific pre-trained model especially designed for CWS.", "Recently, PTMs have been used for CWS and have achieved good performance (Devlin et al., 2019).", "These PTMs usually exploit fine-tuning as the main way of transferring prior knowledge to downstream CWS tasks.", "Specifically, some methods directly fine-tune PTMs on CWS tasks (Yang, 2019), while others fine-tune them in a multi-task framework (Huang et al., 2020).", "Besides, other features have also been incorporated into PTMs and fine-tuned jointly, including Chinese glyph features (Meng et al., 2019) and wordhood features (Tian et al., 2020).", "Although PTMs improve CWS systems significantly, their pre-training tasks, such as language modeling, still have a wide discrepancy with downstream CWS tasks and lack CWS-specific prior knowledge.", "Task-specific pre-trained models have lately been studied to introduce task-specific prior knowledge into multiple NLP tasks.", "Specifically designed pre-training tasks are introduced to obtain the task-specific pre-trained models, and then these models are fine-tuned on the corresponding downstream NLP tasks, such as named entity recognition (Xue et al., 2020), sentiment analysis (Ke et al., 2020), and text summarization (Zhang et al., 2020).", "In this paper, we propose a CWS-specific pre-trained model, METASEG.", "It involves two phases: a pre-training phase and a fine-tuning phase.", "In the pre-training phase, we design a unified architecture and incorporate a meta learning algorithm into a multi-criteria pre-training task, to obtain a CWS-specific pre-trained model that has less discrepancy with downstream CWS tasks.", "In the fine-tuning phase, we fine-tune the pre-trained model on downstream CWS tasks, to leverage the prior knowledge learned during pre-training.", "In this section, we describe METASEG in three parts.", "First, we introduce the Transformer-based unified architecture.", "Second, we elaborate on the multi-criteria pre-training task with the meta learning algorithm.", "Finally, we give a brief description of the downstream fine-tuning phase.", "In traditional CWS systems (Chen et al., 2015; Ma et al., 2018), the CWS model usually adopts a separate architecture for each segmentation criterion.", "An instance of the CWS model is created for each criterion and trained on the corresponding dataset independently.", "Thus, a model instance can only serve one criterion, without sharing any segmentation knowledge with other
criteria.", "To better leverage the common segmentation knowledge shared by multiple criteria, METASEG employs a unified architecture based on the widely used Transformer network (Vaswani et al., 2017), with a shared encoder and decoder for all criteria, as illustrated in Figure 1.", "[Figure 1: The unified framework of our proposed model, with shared encoder and decoder for different criteria: an augmented input ([CLS], a criterion token such as [pku], the sentence characters, [SEP]) passes through token, segment, and position embeddings, the Transformer encoder, and the shared decoder to produce B/M/E/S labels.]", "The input to the unified architecture is an augmented sentence, which is composed of a specific criterion token plus the original sentence, representing both criterion and text information.", "In the embedding layer, the augmented sentence is transformed into input representations by summing the token, segment, and position embeddings.", "The Transformer network is used as the shared encoder layer, encoding the input representations into hidden representations through blocks of multi-head attention and position-wise feed-forward modules (Vaswani et al., 2017).", "A shared linear decoder with softmax then maps the hidden representations to a probability distribution over segmentation labels.", "The segmentation labels consist of the four CWS labels {B, M, E, S}, denoting word beginning, middle, and ending, and a single-character word, respectively.", "Formally, the unified architecture can be summarized as a probabilistic model P(Y|X), which represents the probability of the segmentation label sequence Y given the augmented input sentence X.", "The model parameters are invariant across criteria and capture the common segmentation knowledge shared by different criteria.", "In this part, we describe multi-criteria pre-training with meta learning for METASEG.", "We construct a multi-criteria pre-training task to fully mine the shared prior segmentation knowledge of different criteria.", "Meanwhile, to alleviate the discrepancy between pre-trained models and downstream CWS tasks, a meta learning algorithm (Finn et al., 2017) is used for the pre-training optimization of METASEG.", "Multi-Criteria Pre-training Task.", "As mentioned in Section 1, there is already a variety of existing CWS corpora (Emerson, 2005; Jin and Chen, 2008).", "These CWS corpora usually have inconsistent segmentation criteria, and the human-annotated data for each criterion is limited.", "Each criterion is usually used to fine-tune a CWS model separately on a relatively small dataset, ignoring the knowledge shared across criteria.", "In our multi-criteria pre-training task, by contrast, multiple criteria are jointly used for pre-training to capture the common segmentation knowledge shared by different existing criteria.", "All sentences from the different criteria corpora are combined into a joint multi-criteria pre-training corpus D_T.", "Every sentence under each criterion is augmented with the corresponding criterion token and then incorporated into the joint multi-criteria pre-training corpus.", "To represent criterion information, we add a specific criterion token in front of the input sentence, such as [pku] for the PKU criterion (Emerson, 2005).", "We also add [CLS] and [SEP] tokens to the sentence beginning and ending, respectively, following Devlin et al. (2019).", "This augmented input sentence represents both criterion and text information, as shown in Figure 1.",
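A minimal sketch of the input augmentation and B/M/E/S labeling just described; the helper names are assumptions, and the example uses the Table 1 sentence (Li Na entered the semi-final), assuming its usual Chinese form.

```python
def words_to_bmes(words):
    """Map a segmented sentence (a list of words) to per-character B/M/E/S labels."""
    labels = []
    for w in words:
        if len(w) == 1:
            labels.append("S")                      # single-character word
        else:
            labels.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return labels

def augment(criterion, characters):
    """Build the augmented input: criterion token plus the original sentence."""
    return ["[CLS]", f"[{criterion}]"] + list(characters) + ["[SEP]"]

# augment("pku", "李娜进入半决赛")           -> ['[CLS]', '[pku]', '李', '娜', ...]
# words_to_bmes(["李娜", "进入", "半决赛"])  -> ['B', 'E', 'B', 'E', 'B', 'M', 'E']
```

The same sentence segmented under a different criterion keeps the characters but changes only the criterion token and the label sequence, which is what lets one shared encoder and decoder serve every criterion.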
"Then, we randomly pick 10% of the sentences from the joint multi-criteria pre-training corpus D_T and replace their criterion tokens with a special token [unc], denoting an undefined criterion.", "With this design, the undefined criterion token [unc] learns criterion-independent segmentation knowledge and helps to transfer such knowledge to downstream CWS tasks.", "Finally, given a pair of an augmented sentence X and segmentation labels Y from the joint multi-criteria pre-training corpus D_T, our unified architecture (Section 3.1) predicts the probability of the segmentation labels P(Y|X).", "We use the standard negative log-likelihood (NLL) loss as the objective function for this multi-criteria pre-training task: L(θ; D_T) = −Σ_{(X,Y)∈D_T} log P(Y|X) (1).", "Meta Learning Algorithm.", "The objective of most PTMs is to maximize performance on the pre-training task (Devlin et al., 2019), which leads to a discrepancy between pre-trained models and downstream tasks.", "Besides, a CWS model pre-trained with the multi-criteria task can still have a discrepancy with downstream unseen criteria, because those criteria may not exist in pre-training.", "To alleviate the above discrepancy, we utilize a meta learning algorithm (Lv et al., 2020) for the pre-training optimization of METASEG.", "The main objective of meta learning is to maximize generalization performance on potential downstream tasks, which prevents pre-trained models from over-fitting to the pre-training task.", "As shown in Figure 2, by introducing the meta learning algorithm, pre-trained models have less discrepancy with downstream tasks instead of inclining towards the pre-training task.", "[Figure 2: Pre-training (a) without and (b) with meta learning. PT represents the multi-criteria pre-training task (solid lines: pre-training phase); DT represents the downstream CWS tasks (dashed lines: fine-tuning phase); θ represents the pre-trained model parameters.]", "The meta learning algorithm treats the pre-training task T as one of the downstream tasks.", "It optimizes meta parameters θ_0, from which we can get the task-specific model parameters θ_k by k gradient descent steps over the training data D_T^train of task T: θ_1 = θ_0 − α∇_{θ_0} L_T(θ_0; D_{T,1}^train), ..., θ_k = θ_{k−1} − α∇_{θ_{k−1}} L_T(θ_{k−1}; D_{T,k}^train) (2), where α is the learning rate and D_{T,i}^train is the i-th batch of training data.", "Formally, the task-specific parameters θ_k can be denoted as a function of the meta parameters θ_0: θ_k = f_k(θ_0).", "To maximize the generalization performance on task T, we should optimize the meta parameters θ_0 on the batch of test data D_T^test: θ_0* = argmin_{θ_0} L_T(θ_k; D_T^test) = argmin_{θ_0} L_T(f_k(θ_0); D_T^test) (3).", "The above meta optimization can be achieved by gradient descent, so the update rule for the meta parameters θ_0 is: θ_0′ = θ_0 − β∇_{θ_0} L_T(θ_k; D_T^test) (4), where β is the meta learning rate.", "The gradient in Equation 4 can be rewritten as: ∇_{θ_0} L_T(θ_k; D_T^test) = ∇_{θ_k} L_T(θ_k; D_T^test) · ∇_{θ_{k−1}}θ_k ⋯ ∇_{θ_0}θ_1 = ∇_{θ_k} L_T(θ_k; D_T^test) Π_{j=1}^{k} (I − α∇²_{θ_{j−1}} L_T(θ_{j−1}; D_{T,j}^train)) ≈ ∇_{θ_k} L_T(θ_k; D_T^test) (5), where the last step adopts a first-order approximation for computational simplification (Finn et al., 2017).", "Specifically, the meta learning algorithm for pre-training optimization is described in Algorithm 1.", "[Algorithm 1: Meta Learning for Pre-training Optimization. Require: a distribution over the pre-training task p(T), initial meta parameters θ_0, objective function L, learning rate α, meta learning rate β, and meta train steps k. For epoch = 1, 2, ...: sample k training batches D_T^train from p(T); for j = 1, ..., k: θ_j ← θ_{j−1} − α∇_{θ_{j−1}} L_T(θ_{j−1}; D_{T,j}^train); sample a test batch D_T^test from p(T); θ_0 ← θ_0 − β∇_{θ_k} L_T(θ_k; D_T^test). Return the meta parameters θ_0.]", "It can be divided into two stages: i) the meta train stage, which updates the task-specific parameters by k gradient descent steps over training data, and ii) the meta test stage, which updates the meta parameters by one gradient descent step over test data.", "The hyper-parameter k is the number of gradient descent steps in the meta train stage.", "The meta learning algorithm degrades to the normal gradient descent algorithm when k = 0.", "The returned meta parameters θ_0 are used as the pre-trained model parameters for METASEG.",
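A compact sketch of one meta-train/meta-test cycle of Algorithm 1 under the first-order approximation of Equation 5. It is illustrative only: the paper implements the optimizer as AdamW inside the meta learning algorithm, while this sketch uses plain SGD for the inner steps, and `loss_fn` is a hypothetical helper returning the NLL loss of Equation 1.

```python
import copy
import torch

def meta_step(model, loss_fn, train_batches, test_batch, lr=2e-5, meta_lr=2e-5):
    """One cycle of Algorithm 1 with the first-order approximation:
    run k inner steps on a copy of the model (meta train stage), then apply
    the test-batch gradient at theta_k directly to the meta parameters."""
    fast = copy.deepcopy(model)                      # start inner loop at theta_0
    inner = torch.optim.SGD(fast.parameters(), lr=lr)
    for batch in train_batches:                      # k meta-train steps (Eq. 2)
        inner.zero_grad()
        loss_fn(fast, batch).backward()
        inner.step()                                 # theta_{j-1} -> theta_j
    fast.zero_grad()
    loss_fn(fast, test_batch).backward()             # gradient at theta_k
    with torch.no_grad():                            # meta-test update (Eq. 4/5)
        for p0, pk in zip(model.parameters(), fast.parameters()):
            if pk.grad is not None:
                p0 -= meta_lr * pk.grad
    return model
```

With `train_batches` empty (k = 0), the update collapses to an ordinary gradient step on the test batch, mirroring the degradation to plain gradient descent noted above.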
"Algorithm 1 can be divided into two stages:", "i) the meta-train stage, which updates the task-specific parameters by k gradient descent steps over training data;", "ii) the meta-test stage, which updates the meta parameters by one gradient descent step over test data.", "The hyper-parameter k is the number of gradient descent steps in the meta-train stage.", "The meta learning algorithm degrades to the normal gradient descent algorithm when k = 0.", "The returned meta parameters $\theta_0$ are used as the pre-trained model parameters of METASEG.",

Algorithm 1: Meta Learning for Pre-training Optimization
Require: distribution over the pre-training task p(T); initial meta parameters $\theta_0$; objective function $\mathcal{L}$; learning rate $\alpha$; meta learning rate $\beta$; meta-train steps k
1: for epoch = 1, 2, ... do
2:   Sample k training data batches $D^{train}_T$ from p(T)
3:   for j = 1, 2, ..., k do
4:     $\theta_j \leftarrow \theta_{j-1} - \alpha \nabla_{\theta_{j-1}}\mathcal{L}_T(\theta_{j-1}; D^{train}_{T,j})$
5:   end for
6:   Sample a test data batch $D^{test}_T$ from p(T)
7:   $\theta_0 \leftarrow \theta_0 - \beta \nabla_{\theta_k}\mathcal{L}_T(\theta_k; D^{test}_T)$
8: end for
9: return meta parameters $\theta_0$

"After the pre-training phase described in Section 3.2, we obtain the pre-trained model parameters $\theta_0$, which capture prior segmentation knowledge and have less discrepancy with downstream CWS tasks.", "We fine-tune these pre-trained parameters $\theta_0$ on a downstream CWS corpus to transfer the prior segmentation knowledge.", "For format consistency, we process sentences from the given downstream corpus in the same way as in Section 3.2, adding the criterion token [unc], the beginning token [CLS] and the ending token [SEP].", "The undefined criterion token [unc] is used in the fine-tuning phase instead of the downstream criterion itself, because the downstream criterion usually does not exist in the pre-training phase, so the pre-trained model has no information about it.", "Datasets We collect twelve publicly available CWS datasets, each representing a unique segmentation criterion.", "Among them are PKU, MSRA, CITYU and AS from SIGHAN2005 (Emerson, 2005); CKIP, NCC and SXU from SIGHAN2008 (Jin and Chen, 2008); CTB6 from Xue et al. (2005); WTB from Wang et al. (2014); UD from Zeman et al. (2017); ZX from Zhang et al. (2014); and CNC (http://corpus.zhonghuayuwen.org/).", "The WTB, UD and ZX datasets are held out for the downstream fine-tuning phase, while the other nine datasets are combined into the joint multi-criteria pre-training corpus (Section 3.2), which amounts to nearly 18M words.", "For the CTB6, WTB, UD, ZX and CNC datasets, we use the official splits into training, development and test sets.", "For the rest, we use the official test set and randomly pick 10% of the samples from the training data as the development set.", "We pre-process all these datasets with four procedures: 1. Convert traditional Chinese datasets, such as CITYU, AS and CKIP, into simplified Chinese; 2. Convert full-width tokens into half-width; 3. Replace continuous runs of English letters and digits with unique tokens; 4. Split sentences into shorter clauses by punctuation."
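A small sketch of procedures 2-4 above; procedure 1 (traditional-to-simplified conversion) typically relies on an external tool such as OpenCC, which we only mention as an assumption. The placeholder token names `<eng>` and `<num>` are illustrative:

```python
import re

# Full-width ASCII (U+FF01-U+FF5E) maps to half-width by subtracting 0xFEE0.
FULL2HALF = {chr(c): chr(c - 0xFEE0) for c in range(0xFF01, 0xFF5F)}

def preprocess(sentence):
    # 2. Convert full-width tokens into half-width.
    sentence = "".join(FULL2HALF.get(ch, ch) for ch in sentence)
    # 3. Replace continuous English letters and digits with unique tokens.
    sentence = re.sub(r"[A-Za-z]+", "<eng>", sentence)
    sentence = re.sub(r"[0-9]+", "<num>", sentence)
    # 4. Split the sentence into shorter clauses by punctuation.
    return [c for c in re.split(r"(?<=[。！？；，,!?;])", sentence) if c]

print(preprocess("ＧＤＰ增长了８．５％，超出预期！"))
# -> ['<eng>增长了<num>.<num>%,', '超出预期!']
```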
"Hyper-Parameters We employ METASEG with the same architecture as BERT-Base (Devlin et al., 2019): 12 Transformer layers, hidden size 768 and 12 attention heads.", "In the pre-training phase, METASEG is initialized with the released parameters of the Chinese BERT-Base model and then pre-trained with the multi-criteria pre-training task.", "The maximum input length is 64, with batch size 64 and dropout rate 0.1.", "We adopt the AdamW optimizer (Loshchilov and Hutter, 2019) with $\beta_1 = 0.9$, $\beta_2 = 0.999$ and a weight decay rate of 0.01.", "The optimizer is embedded in the meta learning algorithm, where both the learning rate $\alpha$ and the meta learning rate $\beta$ are set to 2e-5 with a linear warm-up proportion of 0.1.", "The number of meta-train steps is set to k = 1 according to downstream performance.", "The pre-training process runs for nearly 127,000 meta-test steps, amounting to (k + 1) × 127,000 gradient descent steps, which takes about 21 hours on one NVIDIA Tesla V100 32GB GPU card.", "In the fine-tuning phase, we set the maximum input length to 64 for all criteria except WTB (128), with batch size 64.", "We fine-tune METASEG with the AdamW optimizer using the same settings as in the pre-training phase, but without meta learning.", "METASEG is fine-tuned for 5 epochs on each downstream dataset.", "In the low-resource settings, experiments are performed on the WTB dataset with maximum input length 128.", "We evaluate METASEG at sampling rates of 1%, 5%, 10%, 20%, 50% and 80%.", "The batch size is 1 for 1% sampling and 8 for the rest.", "We keep the other hyper-parameters the same as in the fine-tuning phase.", "The standard F1 score is used to evaluate the performance of all models.", "We report the F1 score of each model on the test set according to its best checkpoint on the development set, as in Qiu et al. (2020).",
(2020).", "After pre-training, we fine-tune METASEG on each pre-training criterion.", "Table 3 shows F1 scores on test sets of nine pre-training criteria in two blocks.", "The first block displays the performance of previous works.", "The second block displays three models implemented by us: BERT-Base is the fine-tuned model initialized with official BERT-Base parameters.", "METASEG (w/o fine-tune) is our proposed pre-trained model directly used for inference without fine-tuning.", "METASEG is the fine-tuned model initialized with pre-trained METASEG parameters.", "From the second block, we observe that fine-tuned METASEG could outperform fine-tuned BERT-Base on each criterion, with 0.26% improvement on average.", "It shows that METASEG is more effective when fine-tuned for CWS.", "Even without fine-tuning, METASEG (w/o fine-tune) still behaves better than fine-tuned BERT-Base model, indicating that our proposed pre-training approach is the key factor for the effectiveness of METASEG .", "Fine-tuned METASEG performs better than that of no fine-tuning, showing that downstream fine-tuning is still necessary for the specific criterion.", "Furthermore, METASEG can achieve state-of-the-art results on eight of nine pre-training criteria, demonstrating the effectiveness of our proposed methods.", "To evaluate the knowledge transfer ability of METASEG , we perform experiments on three unseen downstream criteria which are absent in pretraining phase.", "Table 4 shows F1 scores on test sets of three downstream criteria.", "The first block displays previous works on these downstream criteria, while the second block displays three models implemented by us (see Section 4.2.1 for details).", "Results show that METASEG outperforms the previous best model by 0.56% on average, achieving new state-of-the-art performance on three downstream criteria.", "Moreover, METASEG (w/o fine-tune) actually preforms zero-shot inference on downstream criteria and still achieves 87.28% average F1 score.", "This shows that METASEG does learn some common prior segmentation knowledge in pre-training phase, even if it doesn't see these downstream criteria before.", "Compared with BERT-Base, METASEG has the same architecture but different pre-training tasks.", "It can be easily observed that METASEG with fine-tuning outperforms BERT-Base by 0.46% on average.", "This indicates that METASEG could indeed alleviate the discrepancy between pre-trained models and downstream CWS tasks than BERT-Base.", "We perform further ablation studies on the effects of meta learning (ML) and multi-criteria pretraining (MP), by removing them consecutively from the complete METASEG model.", "After removing both of them, METASEG degrades into the normal BERT-Base model.", "F1 scores for ablation studies on three downstream criteria are illustrated in Table 5. 
"We observe that the average F1 score drops by 0.12% when removing the meta learning algorithm (-ML), and drops by a further 0.34% when also removing the multi-criteria pre-training task (-ML-MP).", "This demonstrates that meta learning and multi-criteria pre-training are both significant for the effectiveness of METASEG.", "To better explore the downstream generalization ability of METASEG, we perform experiments on the downstream WTB criterion in low-resource settings.", "Specifically, we randomly sample a given rate of instances from the training set and fine-tune the pre-trained METASEG model on the down-sampled training sets.", "These settings imitate realistic low-resource circumstances where human-annotated data is insufficient.", "The performance at different sampling rates is evaluated on the same WTB test set and reported in Table 6.", "Results show that METASEG outperforms BERT-Base at every sampling rate.", "The margin is larger when the sampling rate is lower, reaching 6.20% at the 1% sampling rate.", "This demonstrates that METASEG generalizes better on the downstream criterion in low-resource settings.", "When the sampling rate drops from 100% to 1%, the F1 score of BERT-Base decreases by 7.60% while that of METASEG decreases by only 2.37%.", "The performance of METASEG at the 1% sampling rate still reaches 91.60% with only 8 instances, comparable with the performance of BERT-Base at the 20% sampling rate.", "This indicates that METASEG makes better use of prior segmentation knowledge and can learn from less data.", "It suggests that METASEG can significantly reduce the need for human annotation.", "Out-of-vocabulary (OOV) words are words that appear in the inference phase but not in the training phase.", "OOV words are a critical cause of errors in CWS tasks.", "We evaluate recall on OOV words on the test sets of all twelve criteria in Table 7."
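OOV recall follows directly from this definition: collect the gold words of the test set that never occur in the training vocabulary, and measure how many of them the system produces. A small self-contained sketch under a position-insensitive simplification (the standard metric matches words by their span positions); the word lists are illustrative:

```python
def oov_recall(train_words, gold_segmentations, pred_segmentations):
    """Recall on gold words absent from the training vocabulary.

    A gold OOV word counts as recalled if the prediction for the same
    sentence contains that exact word (position-insensitive simplification)."""
    train_vocab = set(train_words)
    recalled, total = 0, 0
    for gold, pred in zip(gold_segmentations, pred_segmentations):
        pred_words = set(pred)
        for word in gold:
            if word not in train_vocab:
                total += 1
                recalled += word in pred_words
    return recalled / total if total else 0.0

train = ["我", "喜欢", "苹果"]
gold = [["我", "喜欢", "梨子"]]          # "梨子" is OOV w.r.t. the training words
pred = [["我", "喜欢", "梨子"]]
print(oov_recall(train, gold, pred))      # -> 1.0
```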
"Results show that METASEG outperforms BERT-Base on ten of the twelve criteria and improves recall on OOV words by 0.99% on average.", "This indicates that METASEG benefits from our proposed pre-training methodology and recognizes more OOV words in the inference phase.", "To investigate the contribution of multi-criteria pre-training to the performance of METASEG, we perform experiments with a non-pretraining baseline, Transformer.", "Transformer has the same architecture and is trained directly from scratch on the same nine datasets (Section 4.2.1), but, unlike METASEG, has no pre-training phase.",

Sampling Rates     1%     5%     10%    20%    50%    80%    100%
#Instances         8      40     81     162    406    650    813
BERT-Base (ours)   85.40  87.83  90.46  91.15  92.80  93.14  93.00
METASEG            91.60  92.29  92.54  92.63  93.45  94.11  93.97
Table 6: F1 scores on the WTB test set in low-resource settings.

"Results show that METASEG outperforms the non-pretraining Transformer on every criterion and achieves a 2.40% gain on average, even with the same datasets and architecture.", "This demonstrates that multi-criteria pre-training is vital for the effectiveness of METASEG and that the performance gain does not merely come from the large dataset size.", "Moreover, METASEG has the generalization ability to transfer prior knowledge to downstream unseen criteria, which cannot be achieved by the non-pretraining Transformer counterpart.", "To visualize the discrepancy between pre-trained models and downstream criteria, we plot the similarities of the three downstream criteria with METASEG and BERT.", "We use the token embeddings of METASEG and BERT as representations of these two pre-trained models.", "We compute cosine similarities between the three criterion embeddings and the two pre-trained model embeddings, and illustrate them in Figure 3.", "We observe that the similarities of all three downstream criteria lie above the dashed line, indicating that all three downstream criteria are more similar to METASEG than to BERT.", "The closer a criterion is to the upper left corner, the more similar it is to METASEG.", "Therefore, we can conclude that WTB is the most similar criterion to METASEG among these criteria, which qualitatively corresponds to WTB having the largest performance gain in Table 4."
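A minimal sketch of this similarity analysis. The vectors below are random stand-ins: in practice the criterion and model representations would be extracted from the models' token embedding matrices, which the text above only describes at a high level:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Illustrative 768-d vectors standing in for extracted embeddings.
rng = np.random.default_rng(0)
model_repr = {"METASEG": rng.normal(size=768), "BERT": rng.normal(size=768)}
criterion_repr = {c: rng.normal(size=768) for c in ["WTB", "UD", "ZX"]}

for c, emb in criterion_repr.items():
    sims = {m: round(cosine(emb, r), 3) for m, r in model_repr.items()}
    print(c, sims)   # a criterion with sim(METASEG) > sim(BERT) lies above the diagonal
```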
"These visualization results show that our proposed approach can solidly alleviate the discrepancy between pre-trained models and downstream CWS tasks.", "Thus METASEG is more similar to the downstream criteria.", "In this paper, we propose a CWS-specific pre-trained model, METASEG, which employs a unified architecture and incorporates a meta learning algorithm into a multi-criteria pre-training task.", "Experiments show that METASEG makes good use of the common prior segmentation knowledge from different existing criteria and alleviates the discrepancy between pre-trained models and downstream CWS tasks.", "METASEG also shows better generalization ability in low-resource settings, and achieves new state-of-the-art performance on twelve CWS datasets.", "Since the discrepancy between pre-training tasks and downstream tasks also exists in other NLP tasks and other languages, in the future we will explore whether the approach of pre-training with meta learning in this paper can be applied to other tasks and languages apart from Chinese word segmentation.", "Shizhe Diao, Jiaxin Bai, Yan Song, Tong Zhang, and Yonggang Wang. 2019. ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram Representations. ArXiv, abs/1911.00720.", "Sufeng Duan and Hai Zhao. 2020. Attention Is All You Need for Chinese Word Segmentation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing.", "Thomas Emerson. 2005. The Second International Chinese Word Segmentation Bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing.", "Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1126-1135, International Convention Centre, Sydney, Australia. PMLR." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Commonsense reasoning research has so far been limited to English.", "We aim to evaluate and improve popular multilingual language models (ML-LMs) to help advance commonsense reasoning (CSR) beyond English.", "We collect the Mickey corpus, consisting of 561k sentences in 11 different languages, which can be used for analyzing and improving ML-LMs.", "We propose Mickey Probe, a language-agnostic probing task for fairly evaluating the common sense of popular ML-LMs across different languages.", "In addition, we also create two new datasets, X-CSQA and X-CODAH, by translating their English versions to 15 other languages, so that we can evaluate popular ML-LMs for cross-lingual commonsense reasoning.", "To improve the performance beyond English, we propose a simple yet effective method multilingual contrastive pretraining (MCP).", "It significantly enhances sentence representations, yielding a large performance gain on both benchmarks (e.g., +2.7% accuracy for X-CSQA over XLM-RL ) 1 .", "Understanding natural language relies heavily on commonsense reasoning (CSR), which is the process of making inferences with commonsense knowledge.", "Commonsense knowledge is the set of general facts that reflect our natural understanding of the physical world and human behavior, which are usually seen as an implicit background when people communicate with each other using languages.", "It is thus of vital importance to evaluate and improve the commonsense reasoning capability of language models (LMs), towards building general natural language understanding (NLU) systems (Davis and Marcus, 2015).", "Many recent benchmark datasets and probing methods have been proposed to evaluate machine common sense.", "As shown in Figure 1, the LAMA probe (Petroni et al., 2019) is for analyzing LMs' zero-shot commonsense recalling ability; CommonsenseQA (CSQA) (Talmor et al., 2019) is instead a multiple-choice QA task that needs fine-tuning; CODAH (Chen et al., 2019) and SWAG (Zellers et al., 2018) focus on the ability to complete the most plausible scenes.", "However, all these works have been limited only to English .", "Consequently, follow-up analysis and reasoning methods developed (Lin et al., 2019; Feng et al., 2020; Lin et al., 2020) also focus only on English LMs like BERT (Devlin et al., 2019).", "Such English-centric trend of commonsense reasoning studies not only limits our research scope, but also tends to exacerbate English-specific bias that might prevent future methods from generalizing beyond English (Ponti et al., 2020).", "It is of pressing urgency for the community to develop NLU systems that can serve all languages in the world to bridge the gap between different cultures and eliminate language barriers (Hu et al., 2020), and multilingual language models (ML-LMs), such as XLM-R (Conneau et al., 2020), are among the most promising tools to achieve this ambitious goal.", "Although ML-LMs have been evaluated in a few NLU tasks, e.g., XNLI (Con-neau et al., 2018) and XTEMRE (Hu et al., 2020), it is still relatively unclear how ML-LMs perform in commonsense reasoning tasks, due to the lack of", "1) dedicated methods for probing common sense in ML-LMs and", "2) multilingual benchmark datasets for commonsense reasoning.", "To analyze how much common sense ML-LMs already have without any tuning , we propose MICKEYPROBE , a zero-shot probing task.", "It tasks a ML-LM to rank a set of contrastive assertions (i.e., declarative sentences) in the same language by their commonsense plausibility , for which we use 
pseudo-log-likelihood (PLL) (Salazar et al., 2020) as a proxy.", "Unlike the LAMA probe, it can handle multi-token concepts, which are ubiquitous in many non-English languages.", "In addition, it fairly compares performance across different languages via a language-invariant evaluation protocol.", "Alongside the probing task, we also create MickeyCorpus, a large-scale multilingual dataset consisting of 561k sentences in 11 different languages.", "Our experiments reveal that there are always large performance discrepancies across languages in the tested ML-LMs, and that different ML-LMs show very different language preferences.", "Beyond supervision-free analysis of ML-LMs, we also study their performance on commonsense reasoning tasks such as CSQA and CODAH in a cross-lingual transfer setting (i.e., trained on English data and tested on other languages).", "We find that existing ML-LMs tend to have much lower accuracy in commonsense reasoning beyond English.", "We conjecture that a major common weakness of existing ML-LMs is that their pre-training stages lack a proper sentence-level objective.", "Therefore, we propose multilingual contrastive pre-training (MCP), which tasks an ML-LM with selecting the correct assertion out of a set of N contrastive assertions in N different languages.", "We re-format MickeyCorpus by sampling across languages, thus forming a dedicated pre-training corpus for the MCP task.", "To fairly evaluate different ML-LMs and validate the effectiveness of MCP, we create X-CSQA and X-CODAH, two cross-lingual commonsense reasoning datasets, by translating their English versions into 15 other languages (the 16 languages for X-CSQA and X-CODAH are: en, zh, de, es, fr, it, ja, nl, pl, pt, ru, ar, vi, hi, sw, ur), including low-resource ones such as Swahili (sw) and Urdu (ur).", "Experiments show that the proposed MCP objective indeed significantly improves the performance of state-of-the-art ML-LMs in cross-lingual commonsense reasoning.", "Our contributions are as follows:", "Resources. We collect a large multilingual parallel corpus, MickeyCorpus, consisting of 561k sentences in 11 languages, which can be used for analyzing and improving ML-LMs.", "We also create X-CSQA and X-CODAH, two cross-lingual CSR benchmarks in 16 languages, for question answering and scene completion, respectively.", "Evaluation and analysis. We analyze multiple popular ML-LMs with MICKEYPROBE, a language-invariant, zero-shot task for probing common sense in ML-LMs; we also evaluate them on X-CSQA and X-CODAH in a cross-lingual transfer setting.", "Method to improve ML-LMs. We propose multilingual contrastive pretraining, a simple and effective sentence-level pretext task that significantly improves state-of-the-art ML-LMs in cross-lingual commonsense reasoning.", "In this section, we introduce important concepts, background knowledge, and related work before presenting our own work in the following sections.", "A multilingual language model (ML-LM) aims to produce text representations for multiple languages in a unified embedding space.", "One of the unique advantages of ML-LMs is their potential for zero-shot cross-lingual transfer: a model trained (or fine-tuned) on data in one language (usually English) can be directly used in other languages without further fine-tuning.", "Improving ML-LMs is thus believed to be one of the most promising approaches towards multilingual NLU at scale.", "mBERT (Devlin
et al., 2019) is simply the BERT model (Devlin et al., 2019) trained on multilingual corpora, without any design specific to multilinguality.", "distil-mBERT (d-mBERT) (Sanh et al., 2019) is a smaller mBERT trained by knowledge distillation.", "Conneau and Lample (2019) proposed XLM(-100), which is pretrained with both masked language modeling (MLM) and translation language modeling (TLM).", "Conneau et al. (2020) further proposed XLM-R, which improves on XLM with a better sub-token vocabulary and high-quality multilingual corpora (CC100).", "We leave the analysis of recent seq2seq ML-LMs such as mBART (Liu et al., 2020) and mT5 (Xue et al., 2021) as future work, because their architectures differ significantly from the other ML-LMs.", "Note that the above ML-LMs are pretrained only with token-level training objectives such as MLM (i.e., recovering masked tokens in monolingual text) and TLM (i.e., recovering masked tokens in a pair of parallel sentences in two different languages).", "However, most NLU tasks, including commonsense reasoning, rely heavily on sentence-level representations.", "We argue that a well-designed sentence-level pre-training objective should improve ML-LMs for NLU tasks.", "This intuition motivates us to propose a sentence-level pre-training objective, MCP (Section 5).", "There are a few recent multilingual benchmarks for NLU tasks, e.g., XTREME (Hu et al., 2020), TyDi QA (Clark et al., 2020), and XGLUE (Liang et al., 2020).", "XTREME and XGLUE are unified large-scale multilingual multi-task benchmarks, while TyDi QA focuses on QA.", "These existing cross-lingual benchmarks do not cover commonsense reasoning tasks such as CSQA (Talmor et al., 2019), SWAG (Zellers et al., 2018), and CODAH (Chen et al., 2019).", "CSQA is a question answering task and the other two are scene completion tasks, while all have a multiple-choice selection objective, as shown in Figure 1.", "These benchmarks are widely used to evaluate LMs for commonsense reasoning.", "Unfortunately, they are limited to English and thus not applicable for evaluating the multilingual commonsense knowledge of models, which motivates us to create X-CSQA and X-CODAH.", "The recent XCOPA dataset (Ponti et al., 2020) shares a similar goal, but it focuses only on event-based causal reasoning within the scope of human social behavior, and is thus arguably more culturally biased.", "In contrast, X-CSQA and X-CODAH mainly evaluate general world knowledge and cover more fine-grained types of reasoning (e.g., quantitative, negation), thus supporting a more language-agnostic, comprehensive understanding of ML-LMs' common sense.", "The LAMA Probe (Petroni et al., 2019) is the seminal work on probing for common sense in (English) language models.", "It has a straightforward intuition: if a pretrained language model contains more commonsense knowledge, then it should be better at recalling a masked token in a commonsense assertion (e.g., birds have [mask]).", "Specifically, given a LAMA-probe sentence s with masked token $w_t$, the LM under test uses all past and future tokens
$s_{\setminus t} := (w_1, \ldots, w_{t-1}, w_{t+1}, \ldots, w_{|s|})$ as the input to rank all tokens in the vocabulary by the probability $P(w_t \mid s_{\setminus t})$, via zero-shot inference.", "One can evaluate commonsense recall by measuring the position of the correct token in the ranked list.", "That is, the LAMA probe method uses token-level probability as a proxy for probing common sense in LMs, by ranking all tokens in their vocabularies.", "This intuitive method, however, has several inherent limitations.", "First, in many other languages, multi-token concepts are ubiquitous, for example, 图书馆 (library in Simplified Chinese).", "Jiang et al. (2020) present several methods to decode multi-token entities so that the LAMA probe can be adapted to probe an LM for language-specific analysis.", "It is, however, infeasible to use token-level probing tasks if we want to analyze ML-LMs across languages.", "In addition, the evaluation metric of the LAMA probe can be unfair, because there may be many correct words for a masked position (e.g., birds have legs/eyes).", "The ranking metrics of the LAMA probe tend to ignore such cases, resulting in a less trustworthy analysis.", "The vocabulary-specific ranking is also unfair when comparing across different languages, since they can have very different label spaces.", "These limitations of the LAMA Probe prevent us from analyzing common sense in ML-LMs across typologically diverse languages.", "The challenges of using the LAMA Probe for probing common sense in ML-LMs motivate us to propose a method better suited for analyzing ML-LMs, one that can fairly compare across a diverse set of languages.", "We present MICKEYPROBE, a Multilingual task for probing commonsense knowledge and analysis.", "We design a language-agnostic probing task with a sentence-selection objective for analyzing the common sense of an ML-LM: given a set of assertions (i.e., declarative sentences) that have similar words and syntactic features, select the one with the highest commonsense plausibility.", "We present the task formulation in this section and introduce how we collect the dedicated dataset in Section 4.", "Notations. We define a Mickey probe M as a set of K assertions in the same language, where exactly one of them (say, $M_t$) is the truth assertion, with better commonsense plausibility than the other $K-1$.", "Each Mickey probe M has multiple semantically equivalent versions in different languages.", "Let us denote a language by $l \in L$, where $L = \{en, fr, ru, zh, \ldots\}$
and $|L|$ is the number of languages of interest.", "Then, $M^l$ is the probe M in language l.", "For example, $M^{en}$ and $M^{fr}$ denote probes with the same meaning in English (en) and French (fr), respectively.", "We use $\mathcal{M}$ to denote a multilingual parallel dataset for MICKEYPROBE, which consists of $T \times |L| \times K$ assertions.", "T is the number of MICKEYPROBE items, and each item has K assertions in each of the $|L|$ languages.", "Finally, we can formally describe a multilingual parallel dataset $\mathcal{M}$ for MICKEYPROBE: $\forall M \in \mathcal{M},\ \forall (l_x, l_y) \in L^2,\ \forall i \in \mathbb{N}_K,\ M_i^{l_x} \bowtie M_i^{l_y}$.", "We use the notation $\bowtie$ to indicate that two assertions in different languages (e.g., $l_x$ and $l_y$) are semantically equivalent to each other.", "We leave the details of creating such an $\mathcal{M}$ to Section 4.", "Commonsense Probing Task. Given a Mickey probe M in the dataset $\mathcal{M}$, with t the index of the truth assertion, a perfect multilingual language model would produce sentence probabilities such that it always gives the truth assertion $M_t^l$ the highest probability among the candidates, for every language.", "While it is still an open problem to properly compute sentence probabilities from masked language models, the recently proposed pseudo-log-likelihood scoring (PLL) (Salazar et al., 2020) has shown promising results in many downstream NLP applications that need sentence re-ranking (e.g., speech recognition and translation), suggesting that it is a promising proxy for sentence probability.", "Given a sentence s, its PLL is defined as: $\log P(s) = \mathrm{PLL}(s) := \sum_{i=1}^{|s|} \log P(w_i \mid s_{\setminus i})$ (3).", "That is, we mask each token $w_i$ individually, one at a time, and use the remaining context $s_{\setminus i}$ to get the probability of the word $w_i$ in the sentence s.", "Finally, we aggregate these to approximate $P(s)$.", "Evaluation Metric. The evaluation metric for MICKEYPROBE over a multilingual parallel dataset $\mathcal{M}$ in a specific language l is the overall hit@k accuracy of the selection results: $\mathrm{hit@}k(l) = \frac{1}{|\mathcal{M}|}\sum_{M \in \mathcal{M}} \mathbb{1}\{\mathrm{truth\text{-}rank}(M^l) \le k\}$, where $\mathrm{truth\text{-}rank}(M^l)$ is the position of the truth assertion $M_t^l$ in $M^l$ when sorted by the probabilities defined in Eq. (3).", "hit@1 is equivalent to the conventional accuracy.",
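A minimal sketch of PLL scoring (Eq. 3) with a masked LM from HuggingFace Transformers, plus the hit@1 selection over a probe's K assertions; the model name and the toy probe are illustrative:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "xlm-roberta-base"                      # any masked ML-LM works here
tok = AutoTokenizer.from_pretrained(name)
mlm = AutoModelForMaskedLM.from_pretrained(name).eval()

@torch.no_grad()
def pll(sentence):
    """Pseudo-log-likelihood: mask each token in turn and sum the
    log-probabilities of the original tokens given the remaining context."""
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    score = 0.0
    for i in range(1, len(ids) - 1):           # skip the [CLS]/[SEP]-style specials
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        logits = mlm(masked.unsqueeze(0)).logits[0, i]
        score += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return score

probe = ["Birds have wings.", "Birds have wheels."]   # truth first; illustrative
pred = max(range(len(probe)), key=lambda i: pll(probe[i]))
print("selected:", probe[pred])                # counts toward hit@1 if pred == 0
```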
"Advantages of MICKEYPROBE. There are two key advantages of MICKEYPROBE for evaluating ML-LMs: (1) sentence-level probability can be applied far more generally across languages than the LAMA probe, which only studies single-token English words; (2) the task formulation creates a relatively closed-ended setting, so that we can use a language-independent evaluation metric to fairly compare across languages within an ML-LM, and across ML-LMs for a particular language.", "In addition, the LAMA Probe can be seen as a monolingual, word-level special case of the more general MICKEYPROBE: it is the case where $L = \{en\}$ and $\{M^{en}\} = \mathcal{M}$ with a huge number K of assertions (i.e., the vocabulary size), a fixed [mask] being replaced by every token in the vocabulary.", "We present a procedure for automatically creating a multilingual parallel dataset $\mathcal{M}$ for the probing task MICKEYPROBE.", "Our collected corpus, named MickeyCorpus, has 561k sentences in 11 languages (T = 10.2k, K = 5, |L| = 11).", "For the correct commonsense assertions in English, we have an existing resource, the OMCS corpus (Singh et al., 2002), which contains human-written English sentences describing commonsense facts.", "Each assertion can be used as an $M_t^{en}$, and we perform perturbations on it to create the other $K-1$ distractor assertions (i.e., false candidates), yielding an $M^{en}$ example.", "Inspired by the BERT-attack method (Li et al., 2020), we use a simple method to generate false assertions that are semantically related and syntactically similar to the truth assertions.", "Given a correct assertion, we first randomly sample a few (1-3) words with a noun, verb, or adjective part-of-speech tag, and replace them with [mask].", "Then, we use a beam-search style method to decode the [mask] tokens one by one from left to right.", "To ensure that the distractors are less plausible, we limit the decoding steps to sample only tokens ranked between 200th and 300th.", "We repeat the above procedure multiple times with different sets of [mask] tokens.", "Then, we use Stanza (Qi et al., 2020) to remove distractors whose sequences of POS tags or morphological features differ from the truth assertions.", "Finally, we sample $K-1$ of them as the distractors.", "We use bidirectional translation with the MarianMT models (Junczys-Dowmunt et al., 2018) pretrained on the OPUS corpora (Tiedemann, 2016).", "We translate all English probes into the 25 languages that have models in both directions, and then translate them back to English.", "As the outputs from these models may contain noise and errors, we compute the semantic similarity (i.e., cosine similarity) between the original $M^{en}$ and the back-translated $M^{x\text{-}en}$ via the SentenceBERT model (Reimers and Gurevych, 2019).", "To ensure quality and fair comparisons, we set a similarity threshold of 0.75 and keep the intersection of probes across all languages.", "Considering that some languages tend to have lower-quality translations, we finally choose the best 10 languages to build the Mickey Probe dataset for our analysis, yielding 10k examples in each language and 10.2k × 5 × 11 = 561k sentences in total.", "The language set is L = {en, de, fr, ru, es, hi, vi, bg, zh, nl, it}.", "Note that our purpose in checking the back-translation quality is mainly to keep only the high-quality translations for all language pairs considered.", "Conventional metrics, e.g., the BLEU score (Papineni et al., 2002), focus on exact word match and are thus less suitable: given the original sentence I have a book, the translations I have a novel and I have a tool would be seen as equally wrong.", "Inspired by BERTScore (Zhang et al., 2020), BT-cosine is based on SentenceBERT, which gives a higher score to the former and a lower score to the latter, due to the semantic relatedness between novel and book.", "We observed that most of our back-translations are in similar situations, and thus decided to use BT-cosine instead of other metrics.", "We now use MickeyCorpus to evaluate the 5 pre-trained ML-LMs introduced in Section 2.1: d-mBERT (Sanh et al., 2019), mBERT (Devlin et al., 2019), XLM (Conneau and Lample, 2019), XLM-R_B, and XLM-R_L (Conneau et al., 2020).", "All these ML-LMs' pretraining objectives contain masked-word-prediction tasks, so we can easily use PLLs (Eq. 3) to probe them in a zero-shot, supervision-free manner with hit@1 accuracy.", "(The hit@2 results are shown in the Appendix.)", "We present a histogram in Figure 3 and show the concrete results in Table 1.", "[Figure 3: The MICKEYPROBE results in hit@1 accuracy. A larger version of this figure is in the Appendix (Fig. 6).]", "We find that there are always large discrepancies across different languages in all tested ML-LMs, which
motivates us to analyze the following questions.", "Q1: Do different ML-LMs have similar language preferences? No.", "We arrange the languages of all ML-LMs in the same order in Figure 3: the monotonically descending order for XLM-R_L.", "Interestingly, we find that different ML-LMs are good at different languages, resulting in a very diverse set of trends.", "For example, XLM-R_B has higher performance in it than in zh and fr, unlike XLM-R_L, which was pre-trained on the same corpora with the same objectives.", "mBERT and d-mBERT have stronger performance in fr than in nl and de, unlike XLM and XLM-R.", "Q2: Does length influence PLL ranking? Not much.", "The PLL computation indeed tends to prefer shorter sequences (see Eq. 3), so one may wonder whether the length of assertions influences the probing results.", "The Shortest row in Table 1 presents the results when we always select the shortest assertion within a probe, instead of PLL ranking.", "The gaps between these scores and XLM-R_L's suggest that the probing task indeed uses PLL as a valid proxy for evaluating common sense based on sentence-level semantics.", "Q3: Is translation quality a key factor?", "We show BT-Cosine, the mean of the cosine scores between the original English sentences and the back-translated ones, and sort the table by these numbers.", "The first 5 languages, {de, it, es, fr, nl}, have the largest BT-Cosine, i.e., the best translation quality, and they indeed have better performance in general for the XLM-R models.", "However, although zh has a worse BT-Cosine score than vi, all ML-LMs perform better in zh than in vi.", "Thus, we believe the translation quality of MickeyCorpus is not a factor that distorts our understanding of ML-LMs.", "Consequently, this suggests that further study must consider the pre-training corpora of each ML-LM in the different languages.", "Q4: Does the size of the pre-training corpora matter?", "We list the size of the monolingual corpus in each language for CC-100, on which XLM-R was pretrained (the CC-size row).", "Although ru has a much larger corpus than de, it, etc., XLM-R's performance in ru is much worse.", "In addition, fr and nl have almost the same translation quality and fr's CC-size is twice that of nl, but performance in fr is still much worse than in nl.", "We conjecture this is due either to the design of the sub-token vocabulary or to the text quality (rather than the size) of the CC-100 corpora.", "Further implications. The benchmark results of five popular ML-LMs on the MICKEYPROBE task over MickeyCorpus offer an initial and valuable closer look at the commonsense knowledge of ML-LMs, probed under a unified evaluation protocol.", "One can either compare an ML-LM across different languages or compare a certain language across ML-LMs in Table 1.", "These comparable results support further analysis that can benefit the development of ML-LMs in the future.", "After all, even the best ML-LM, XLM-R_L, degrades considerably in languages other than English, and also performs slightly worse than RoBERTa_L in en (93.4%).", "We argue that (culture-invariant) commonsense knowledge should be seen as an important way to connect multiple languages and thus better align them in the shared embedding space induced by an ML-LM.", "In this section, we reformat MICKEYPROBE so that we can reuse MickeyCorpus for improving pre-trained ML-LMs for commonsense reasoning beyond English.", "We propose a multilingual contrastive pre-training (MCP) task that focuses
on enhancing the sentence-level representations of ML-LMs.", "MCP improves an ML-LM in a multilingual, contrastive environment, where the model learns to select the assertion with the best commonsense plausibility from a set of contrastive sentences in different languages.", "Each MCP example is a set of multilingual assertions, while each Mickey probe is a monolingual set.", "MCP Dataset Creation from $\mathcal{M}$. We create pre-training examples for the MCP task by converting MICKEYPROBE examples, following the steps illustrated in Algorithm 1.", "Simply put, we reformat a K-way Mickey probe M ($K \times |L|$ assertions) into an MCP example by sampling a set of V candidate assertions in V different languages.", "We convert all examples in MickeyCorpus $\mathcal{M}$ to build a new cross-lingual sentence-selection dataset $\mathcal{C}$ for learning the MCP task.", "MCP Learning. Given an MCP example $C \in \mathcal{C}$, we append a dense linear layer f on top of an ML-LM (with parameters $\theta_{\text{ML-LM}}$) to predict the commonsense plausibility score of each assertion $C_i \in C$: $h_i = \text{ML-LM}(C_i)$, $o_i = f(h_i)$, $z_i = \frac{\exp(o_i)}{\sum_{j=1}^{V} \exp(o_j)}$, $\mathcal{L} = -\sum_{i=1}^{V} \mathbb{1}_i \log z_i$.", "We first get the logit $o_i$ of each assertion by projecting its [CLS] embedding $h_i$ via a dense layer f with parameters $\theta_f$; then we use softmax to normalize the logits into plausibility scores $z_i$; finally, we compute the cross-entropy loss, where $\mathbb{1}_i = 1$ if $C_i$ is a correct assertion and 0 otherwise.", "We fine-tune $\{\theta_{\text{ML-LM}}, \theta_f\}$ to minimize the overall loss over the MCP dataset $\mathcal{C}$.",
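A minimal sketch of the MCP objective: V candidate assertions in V languages are scored with a shared encoder plus a linear head, and a cross-entropy loss is taken over the softmax-normalized logits. The encoder name and the toy candidates are illustrative, not the authors' released code:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

NAME = "xlm-roberta-base"                        # illustrative encoder choice
tokenizer = AutoTokenizer.from_pretrained(NAME)

class MCPModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(NAME)
        self.f = nn.Linear(self.encoder.config.hidden_size, 1)   # dense layer f

    def forward(self, candidates):
        """Score V candidate assertions; returns the logits o_1..o_V."""
        batch = tokenizer(candidates, padding=True, return_tensors="pt")
        h = self.encoder(**batch).last_hidden_state[:, 0]         # [CLS]-position embeddings h_i
        return self.f(h).squeeze(-1)                              # logits o_i

model = MCPModel()
# One MCP example: V assertions in V languages; exactly one is correct (index 0 here).
candidates = ["Birds have wings.", "Les oiseaux ont des roues.", "鸟有轮子。"]
logits = model(candidates)
loss = nn.functional.cross_entropy(logits.unsqueeze(0), torch.tensor([0]))
loss.backward()   # updates both the encoder and the head, as described above
```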
"To evaluate ML-LMs for commonsense reasoning in a cross-lingual zero-shot transfer setting, we create two benchmark datasets, namely X-CSQA and X-CODAH.", "Table 3 shows the statistics of the two datasets.", "Specifically, we use online commercial services such as DeepL Pro Translate to collect high-quality translations of the examples in CSQA and CODAH for 15 languages other than English.", "The size of CODAH is small (only 2.7k), so we use 7k SWAG validation examples, which share the same formulation, as additional training data.", "We discuss the reduction of cultural differences and the quality control of automatic translations, along with other details, in the Ethical Considerations (the paragraph on cultural bias reduction) and Appendix A.", "As our goal is to evaluate different ML-LMs (rather than different languages) under a unified evaluation protocol for cross-lingual commonsense reasoning, we argue that such automatically translated examples, although possibly noisy, can serve as a starting benchmark from which to obtain meaningful analysis before human-translated datasets become available.", "We focus on the 4 popular ML-LMs introduced in Section 2.1: mBERT, XLM-100, XLM-R_B and XLM-R_L, as well as our proposed MCP method.", "For both tasks, we concatenate each prompt (the question or first sentence) with each of its options individually, in the form [CLS] prompt [SEP] option_i [SEP].", "Then, we fine-tune the ML-LMs on the English training dataset and test them on the other languages.", "Why zero-shot cross-lingual transfer? It is almost impossible to collect data in all languages that an NLU system might be used for.", "Therefore, prior work mainly focuses on zero-shot cross-lingual transfer (Conneau et al., 2018), which is more meaningful and offers a lower-bound performance analysis.", "It is also an ideal setting for studying CSR, because most commonsense facts are language-invariant.", "Thus, an English-finetuned ML-LM for CSR should be able to transfer its ability to a wide range of other languages as well.", "Furthermore, since our goal in this paper is to evaluate and improve ML-LMs, translating back to English and then using an English-only LM would not serve this end.", "In Table 2, we present the empirical results on X-CODAH and X-CSQA for the ML-LMs as well as for two models enhanced by our proposed MCP method.",

                 en    de    it    es    fr    nl    ru    vi    zh    hi    pl    ar    ja    pt    sw    ur    avg
CC-size (GB)   300.8  66.6  30.2  53.3  56.8  29.3 278.0 137.3  46.9  20.2  44.6  28.0  69.3  49.1   1.6   5.7  76.10
X-CODAH [Task: Scene Completion; Random Guess: 25.0; RoBERTa_L for en: 81.6]
mBERT           42.9  33.1  33.5  33.8  35.2  33.7  31.9  22.8  38.0  26.5  31.0  34.8  34.0  37.2  30.8  31.5  33.2
XLM-100         42.7  31.5  32.2  30.7  34.9  32.6  30.9  24.7  31.4  26.8  27.0  30.0  27.4  33.2  25.3  24.9  30.4
XLM-R_B         50.1  45.8  44.4  44.2  45.2  42.0  44.1  43.2  44.6  38.1  41.9  37.8  42.0  44.1  35.6  34.6  42.4
XLM-R_L         66.4  59.6  59.9  60.9  60.1  59.3  56.3  57.4  57.3  49.1  57.5  51.2  53.8  58.2  42.2  46.6  56.0
MCP (XLM-R_B)   52.2  47.6  46.2  44.4  48.1  44.8  42.9  43.2  45.7  37.8  41.8  41.8  42.9  44.7  37.2  36.4  43.6
MCP (XLM-R_L)   69.9  60.7  61.9  60.7  61.4  60.7  58.6  62.3  61.9  53.7  59.0  54.1  54.7  60.8  44.6  48.0  58.3
Δ (XLM-R_L)     +3.5  +1.1  +2.0  -0.2  +1.3  +1.4  +2.3  +4.9  +4.6  +4.6  +1.5  +2.9  +0.9  +2.6  +2.4  +1.4  +2.3
X-CSQA [Task: Question Answering; Random Guess: 20.0; RoBERTa_L for en: 70.4]
mBERT           38.8  29.6  36.4  35.3  33.8  32.6  32.7  22.2  37.8  21.1  27.2  27.7  31.4  34.1  21.8  23.7  30.4
XLM-100         34.3  26.7  28.5  29.3  28.3  27.2  29.9  21.1  28.6  22.1  26.6  26.3  25.1  30.9  20.1  21.7  26.7
XLM-R_B         51.5  44.1  42.1  44.8  44.0  43.3  39.5  42.6  40.6  34.6  40.2  38.4  37.5  43.4  29.6  33.0  40.6
XLM-R_L         66.7  56.1  58.2  59.5  60.3  56.8  52.1  51.4  52.7  48.7  53.9  48.4  50.0  59.9  41.6  45.2  53.8
MCP (XLM-R_B)   52.1  46.2  45.6  44.3  44.7  45.3  42.8  45.3  44.3  36.8  41.4  36.8  37.5  44.9  28.1  33.4  41.9
MCP (XLM-R_L)   69.5  59.3  60.3  61.4  60.0  61.1  57.5  55.7  56.7  51.3  56.1  52.3  50.2  60.7  43.3  48.8  56.5
Δ (XLM-R_L)     +2.8  +3.3  +2.2  +1.9  -0.4  +4.3  +5.4  +4.3  +4.0  +2.6  +2.1  +3.9  +0.2  +0.8  +1.7  +3.6  +2.7
Table 2: Benchmark results for different ML-LMs and MCP-enhanced models on X-CSQA and X-CODAH in a zero-shot cross-lingual setting.

"On both tasks, XLM-R_L performs best by a large margin.", "Enhanced by the MCP method, both XLM-R_B and XLM-R_L see significant improvements (e.g., a 2.7% absolute improvement for XLM-R_L on the X-CSQA average).", "Can MCP's improvement generalize to unseen, low-resource languages?", "Note that the MCP dataset involves only 9 languages here, and 6 languages are totally unseen in MCP training (i.e., {pl, ar, ja, pt, sw, ur}).", "The largest performance gains are in ru on X-CSQA and vi on X-CODAH.", "Surprisingly, we find that the improvements on unseen languages are also large for XLM-R_L (e.g., 48.4 → 52.3 for ar).", "In addition, for the two low-resource languages sw and ur, MCP also brings 2-3 percentage points of improvement for XLM-R_L.", "This is, however, not always the case for XLM-R_B, which we conjecture is more prone to over-fitting.", "Although ML-LMs enjoy the merits of zero-shot cross-lingual transfer, their performance is usually worse than the English-only RoBERTa_L on the en test set (70.4% vs 66.7% for X-CSQA).", "Although MCP can mitigate this gap for X-CSQA (70.4% vs 69.5%), a large gap remains for X-CODAH (81.6% vs 69.9%).",
"We use Figure 4 to analyze how the different categories of commonsense reasoning in CODAH (Chen et al., 2019) vary across languages.", "We find that others, reference, and negation have relatively small variance across languages, as they are more language-invariant.", "However, some polysemous and idiom examples can be English-specific and may not generalize to other languages.", "A more detailed analysis is in the Appendix.", "From the dev accuracy curves in Figure 5, we see that the MCP-enhanced XLM-R models are much more sample-efficient and converge much faster than the vanilla versions.", "This suggests that MCP, if used on a larger corpus with broader topics, could produce a better ML-LM with more general uses, especially when only limited labelled data is available.", "Our results on XNLI-10% (using 10% of the training data) (Conneau et al., 2018) show that MCP-enhanced XLM-R_L gains 1.2 percentage points of accuracy on the average over 15 languages.", "As our focus in this paper is commonsense reasoning, we leave the study of other cross-lingual NLU tasks as future work.", "Importantly, our experiments imply that a proper (continual) pre-training task with a (contrastive) sentence-level objective can improve both final performance and learning efficiency.", "We evaluate and improve popular multilingual language models (ML-LMs) to advance commonsense reasoning beyond English.", "We propose MICKEYPROBE, a language-agnostic probing task for analyzing the common sense of ML-LMs in a zero-shot manner.", "With our proposed new benchmark datasets created via automatic translation, X-CSQA and X-CODAH, we evaluate ML-LMs in a cross-lingual transfer setting for commonsense reasoning.", "We also improve the state-of-the-art ML-LM with a simple yet effective method, multilingual contrastive pre-training, which uses a sentence-level objective to enhance sentence representations, yielding a significant performance gain.", "All of the above work is based on MickeyCorpus, which can be used both as a probing dataset and as a pre-training corpus for analyzing and improving ML-LMs.", "We hope our resources and pre-training method for ML-LMs can help the community advance commonsense reasoning beyond English.", "This research is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2019-19051600007; the DARPA MCS program under Contract No. N660011924033 with the United States Office of Naval Research; the Defense Advanced Research Projects Agency under award W911NF-19-20271; and NSF SMA 18-29268.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S.
Government.", "We would like to thank all the collaborators in USC INK research lab and the reviewers for their constructive feedback on the work.", "Resource Copyright This work presents three new resources: MickeyCorpus , X-CODAH, and X-CSQA, which are multilingual extension of the OMCS (Singh et al., 2002) 3 , CSQA (Talmor et al., 2019) 4 , and CODAH (Chen et al., 2019) 5 respectively.", "All these three original sources of the data are publicly available for free, and we do not add any additional requirement for accessing our resources.", "We will highlight the original sources of our data and ask users to cite the original papers when they use our extended versions for research.", "Cultural Bias Reduction Like most most multilingual parallel resources, especially in general NLU domain, there exists potential data bias due to the barrier of languages as well as cultural differences (Acharya et al., 2020; Lin et al., 2018), which could induce the labeling differences on the same situation.", "For example, a question like what do people usually drink in the morning? (cof-fee/tea/milk) or when does a wedding usually start? (morning/afternoon/evening) might be answered very differently by people from different backgrounds and cultures, not to mention different languages.", "The prior English commonsense resources which our datasets are built on are already possess such inherent bias, even with in the English language.", "Therefore, before we translate CSQA and CODAH, we intentionally remove the examples that are either labeled as non-neutral by a pre-trained sentiment classifier, or contained any keywords that are relevant to social behavior (e.g., weddings).", "We manually inspect test examples in X-CSQA and X-CODAH in the English and Chinese versions and have a strong confidence there is few strongly controversial example.", "However, we admit that such reduction of cultural differences in common sense has not been systematically measured in this work for other languages.", "The work also evaluates a few multilingual language models (ML-LMs) for cross-lingual commonsense reasoning (XCSR), and introduced a new model which outperforms them.", "This raises the question of whether harm might arise from applications of XCSRor more generally, since XCSR is intended as a step toward making English-only CSR more applicable in other languages, whether harm might arise more generally from existing ML-LMs.", "Among the risks that need to be considered in any deployment of NLP technology are that responses may be wrong or biased, in ways that would lead to improperly justified decisions.", "Although in our view the current technology is still relatively immature, and unlikely to be fielded in applications that would cause harm of this sort, it is desirable that ML-LMs provide audit trails, and recourse so that their predictions can be explained to and critiqued by affected parties." ]
[ "abstain", "objective", "result", "objective", "objective", "objective", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "method", "result", "method", "result", "objective", "objective", "method", "method", "abstain", "objective", "result", "method", "objective", "method", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "objective", "objective", "objective", "abstain", "abstain", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method" ]
[ "Named Entity Recognition (NER) is a fundamental task in Natural Language Processing, concerned with identifying spans of text expressing references to entities.", "NER research is often focused on flat entities only (flat NER), ignoring the fact that entity references can be nested, as in [Bank of [China]] (Finkel and Manning, 2009).", "In this paper, we use ideas from graph-based dependency parsing to provide our model a global view on the input via a biaffine model (Dozat and Manning, 2017).", "The biaffine model scores pairs of start and end tokens in a sentence which we use to explore all spans, so that the model is able to predict named entities accurately.", "We show that the model works well for both nested and flat NER through evaluation on 8 corpora and achieving SoTA performance on all of them, with accuracy gains of up to 2.2 percentage points.", "Nested Entities' are named entities containing references to other named entities as in [Bank of [China]] , in which both [China] and [Bank of China] are named entities.", "Such nested entities are frequent in data sets like ACE 2004, ACE 2005 and GENIA (e.g., 17% of NEs in GENIA are nested (Finkel and Manning, 2009), altough the more widely used set such as CONLL 2002, 2003 and ONTONOTES only contain so called flat named entities and nested entities are ignored.", "The current SoTA models all adopt a neural network architecture without hand-crafted features, which makes them more adaptable to different tasks, languages and domains (Lample et al., 2016; Chiu and Nichols, 2016; Peters et al., 2018; Devlin et al., 2019; Ju et al., 2018; Sohrab and Miwa, 2018; Strakova et al., 2019).", "In this paper, we introduce a method to handle both types of NEs in one system by adopting ideas from the biaffine dependency parsing model of Dozat and Manning (2017).", "For dependency parsing, the system predicts a head for each token and assigns a relation to the head-child pairs.", "In this work, we reformulate NER as the task of identifying start and end indices, as well as assigning a category to the span defined by these pairs.", "Our system uses a biaffine model on top of a multi-layer BiLSTM to assign scores to all possible spans in a sentence.", "After that, instead of building dependency trees, we rank the candidate spans by their scores and return the top-ranked spans that comply with constraints for flat or nested NER.", "We evaluated our system on three nested NER benchmarks ( ACE 2004, ACE 2005, GENIA ) and five flat NER corpora ( CONLL 2002 (Dutch, Spanish) CONLL 2003 (English, Ger-man), and ONTONOTES ).", "The results show that our system achieved SoTA results on all three nested NER corpora, and on all five flat NER corpora with substantial gains of up to 2.2% absolute percentage points compared to the previous SoTA.", "We provide the code as open source 1 .", "Flat Named Entity Recognition.", "The majority of flat NER models are based on a sequence labelling approach.", "Collobert et al. (2011) introduced a neural NER model that uses CNNs to encode tokens combined with a CRF layer for the classification.", "Many other neural systems followed this approach but used instead LSTMs to encode the input and a CRF for the prediction (Lample et al., 2016; Ma and Hovy, 2016; Chiu and Nichols, 2016).", "These latter models were later extended to use context-dependent embeddings such as ELMo (Peters et al., 2018).", "Clark et al. 
"Clark et al. (2018) quite successfully used cross-view training (CVT) paired with multi-task learning.", "This method yields impressive gains for a number of NLP applications, including NER.", "The code is available at https://github.com/juntaoy/biaffine-ner.", "[Figure 1: The network architecture of our system: BERT, fastText and character embeddings feed a BiLSTM, followed by FFNN_Start and FFNN_End and a biaffine classifier.]", "Devlin et al. (2019) introduced BERT, a bidirectional transformer architecture for the training of language models.", "BERT and its siblings provided better language models that in turn translated into higher scores for NER.", "Lample et al. (2016) cast NER as transition-based dependency parsing using a Stack-LSTM.", "They compare with an LSTM-CRF model, which turns out to be a very strong baseline.", "Their transition-based system uses two transitions (shift and reduce) to mark the named entities and handles flat NER, while our system has been designed to handle both nested and flat entities.", "Nested Named Entity Recognition.", "Early work on nested NER, motivated particularly by the GENIA corpus, includes (Shen et al., 2003; Alex et al., 2007; Finkel and Manning, 2009).", "Finkel and Manning (2009) also proposed a constituency parsing-based approach.", "In recent years, we have seen an increasing number of neural models targeting nested NER as well.", "Ju et al. (2018) suggested an LSTM-CRF model to predict nested named entities.", "Their algorithm iteratively continues until no further entities are predicted.", "Lin et al. (2019) tackle the problem in two steps: they first detect the entity head, and then they infer the entity boundaries as well as the category of the named entity.", "Strakova et al. (2019) tag nested named entities with a sequence-to-sequence model, exploring combinations of context-based embeddings such as ELMo, BERT, and Flair.",
"Zheng et al. (2019) use a boundary-aware network to solve nested NER.", "Similar to our work, Sohrab and Miwa (2018) exhaustively enumerate all possible spans up to a defined length by concatenating the LSTM outputs for the start and end positions and then using this to calculate a score for each span.", "Apart from the different network and word embedding configurations, the main difference between their model and ours is therefore the use of the biaffine model.", "Due to the biaffine model, we get a global view of the sentence, while Sohrab and Miwa (2018) concatenate the outputs of the LSTMs for possible start and end positions up to a fixed length.", "Dozat and Manning (2017) demonstrated that the biaffine mapping performs significantly better than just the concatenation of pairs of LSTM outputs.", "Our model is inspired by the dependency parsing model of Dozat and Manning (2017).", "We use both word embeddings and character embeddings as input, feed the output into a BiLSTM and finally into a biaffine classifier.", "Figure 1 shows an overview of the architecture.", "To encode words, we use both BERT Large and fastText embeddings (Bojanowski et al., 2016).", "For BERT, we follow the recipe of Kantor and Globerson (2019) to obtain context-dependent embeddings for a target token with 64 surrounding tokens on each side.", "For the character-based word embeddings, we use a CNN to encode the characters of the tokens.", "The concatenation of the word and character-based word embeddings is fed into a BiLSTM to obtain the word representations ( x ).", "After obtaining the word representations from the BiLSTM, we apply two separate FFNNs to create different representations ( $h_s$ / $h_e$ ) for the start/end of the spans.", "Using different representations for the start/end of the spans allows the system to learn to identify the start and end of spans separately.", "This improves accuracy compared to a model that directly uses the outputs of the LSTM, since the contexts of the start and the end of an entity are different.", "Finally, we employ a biaffine model over the sentence to create an $l \times l \times c$ scoring tensor ( $r_m$ ), where $l$ is the length of the sentence and $c$ is the number of NER categories + 1 (for non-entity).", "We compute the score for a span $i$ by: $h_s(i) = \mathrm{FFNN}_s(x_{s_i})$, $h_e(i) = \mathrm{FFNN}_e(x_{e_i})$, $r_m(i) = h_s(i)^{\top} U_m\, h_e(i) + W_m (h_s(i) \oplus h_e(i)) + b_m$, where $s_i$ and $e_i$ are the start and end indices of span $i$, $U_m$ is a $d \times c \times d$ tensor, $W_m$ is a $2d \times c$ matrix and $b_m$ is the bias.",
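The scoring function above maps directly onto tensor operations; the following is a minimal PyTorch sketch of the biaffine span scorer, with class names, dimensions, and initialization chosen for illustration rather than taken from the released code.

```python
import torch
import torch.nn as nn

class BiaffineSpanScorer(nn.Module):
    """Scores every (start, end) token pair of a sentence, following
    r_m(i) = h_s^T U_m h_e + W_m (h_s (+) h_e) + b_m from the paper."""

    def __init__(self, lstm_dim: int = 400, ffnn_dim: int = 150, num_cats: int = 5):
        super().__init__()
        self.ffnn_start = nn.Sequential(nn.Linear(lstm_dim, ffnn_dim), nn.ReLU())
        self.ffnn_end = nn.Sequential(nn.Linear(lstm_dim, ffnn_dim), nn.ReLU())
        # U_m is a d x c x d tensor, W_m a 2d x c matrix, b_m the bias.
        self.U = nn.Parameter(torch.randn(ffnn_dim, num_cats, ffnn_dim) * 0.01)
        self.W = nn.Parameter(torch.randn(2 * ffnn_dim, num_cats) * 0.01)
        self.b = nn.Parameter(torch.zeros(num_cats))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """x: (l, lstm_dim) BiLSTM outputs; returns (l, l, c) span scores."""
        l = x.size(0)
        h_s = self.ffnn_start(x)                      # (l, d)
        h_e = self.ffnn_end(x)                        # (l, d)
        bilinear = torch.einsum("id,dce,je->ijc", h_s, self.U, h_e)
        pairs = torch.cat([h_s.unsqueeze(1).expand(l, l, -1),
                           h_e.unsqueeze(0).expand(l, l, -1)], dim=-1)
        return bilinear + pairs @ self.W + self.b     # r_m: (l, l, c)
```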
"The tensor $r_m$ provides scores for all possible spans that could constitute a named entity, under the constraint that $s_i \le e_i$ (the start of an entity is before its end).", "We assign each span a NER category $y'$: $y'(i) = \arg\max r_m(i)$.", "We then rank all the spans that have a category other than non-entity by their category scores ( $r_m(i_{y'})$ ) in descending order and apply the following post-processing constraints: for nested NER, an entity is selected as long as it does not clash with the boundaries of higher-ranked entities.", "We say an entity $i$ clashes boundaries with another entity $j$ if $s_i < s_j \le e_i < e_j$ or $s_j < s_i \le e_j < e_i$; e.g., in 'the Bank of China', the entity 'the Bank of' clashes boundaries with the entity 'Bank of China', hence only the span with the higher category score will be selected.", "For flat NER, we apply one more constraint: any entity that contains or is inside a higher-ranked entity will not be selected.",
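A small sketch of this ranked decoding with the clash and nesting constraints; the dictionary-based interface is an illustrative assumption, not the paper's data structure.

```python
def decode_spans(span_scores, nested=True):
    """Ranks candidate spans by category score and applies the paper's
    post-processing constraints. `span_scores` maps (start, end) ->
    (category, score) for spans whose argmax category is not non-entity."""
    def clashes(a, b):
        # Partial overlap: s_i < s_j <= e_i < e_j (or symmetrically).
        (s1, e1), (s2, e2) = a, b
        return s1 < s2 <= e1 < e2 or s2 < s1 <= e2 < e1

    def nests(a, b):
        # One span contains the other (only forbidden for flat NER).
        (s1, e1), (s2, e2) = a, b
        return (s1 <= s2 and e2 <= e1) or (s2 <= s1 and e1 <= e2)

    selected = []
    for span, (cat, score) in sorted(span_scores.items(),
                                     key=lambda kv: -kv[1][1]):
        if any(clashes(span, kept) for kept, _ in selected):
            continue
        if not nested and any(nests(span, kept) for kept, _ in selected):
            continue
        selected.append((span, cat))
    return selected
```

With nested=True, both 'Bank of China' and the nested 'China' can survive, while the partially overlapping 'the Bank of' is discarded, matching the example above.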
"The learning objective of our named entity recognizer is to assign a correct category (including non-entity) to each valid span.", "Hence it is a multi-class classification problem and we optimise our models with softmax cross-entropy: $p_m(i_c) = \frac{\exp(r_m(i_c))}{\sum_{\hat{c}=1}^{C}\exp(r_m(i_{\hat{c}}))}$ and $\mathrm{loss} = -\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i_c} \log p_m(i_c)$.", "4 Experiments.", "Data Set.", "We evaluate our system on both nested and flat NER: for the nested NER task, we use the ACE 2004 (https://catalog.ldc.upenn.edu/LDC2005T09), ACE 2005 (https://catalog.ldc.upenn.edu/LDC2006T06), and GENIA (Kim et al., 2003) corpora; for flat NER, we test our system on the CONLL 2002 (Tjong Kim Sang, 2002), CONLL 2003 (Tjong Kim Sang and De Meulder, 2003) and ONTONOTES (https://catalog.ldc.upenn.edu/LDC2013T19) corpora.", "For ACE 2004 and ACE 2005, we follow the same settings as Lu and Roth (2015) and Muis and Lu (2017) and split the data into 80%/10%/10% for the train, development and test sets respectively.", "To make a fair comparison, we also used the same documents as Lu and Roth (2015) for each split.", "Table 1 lists the major hyperparameters of our models: BiLSTM size 200, 3 BiLSTM layers, BiLSTM dropout 0.4, FFNN size 150, FFNN dropout 0.2, BERT size 1024, last 4 BERT layers, fastText embedding size 300, char CNN size 50, char CNN filter widths [3,4,5], char embedding size 8, embeddings dropout 0.5, Adam optimiser, learning rate 1e-3.", "For GENIA, we use the GENIA v3.0.2 corpus.", "We preprocess the dataset following the same settings as Finkel and Manning (2009) and Lu and Roth (2015) and use a 90%/10% train/test split.", "For this evaluation, since we do not have a development set, we train our system for 50 epochs and evaluate the final model.", "For CONLL 2002 and CONLL 2003, we evaluate on all four languages (English, German, Dutch and Spanish).", "We follow Lample et al. (2016) and train our system on the concatenation of the train and development sets.", "For ONTONOTES, we evaluate on the English corpus and follow Strubell et al. (2017) in using the same train, development and test split as used in the CoNLL 2012 shared task for coreference resolution (Pradhan et al., 2012).", "Evaluation Metric.", "We report recall, precision and F1 scores for all evaluations.", "A named entity is considered correct when both the boundary and the category are predicted correctly.", "In Sohrab and Miwa (2018), the last 10% of the training set is used as a development set; we include their result mainly because their system is similar to ours.", "The revised version of the German data was provided by the shared task organiser in 2006 with more consistent annotations.", "We confirmed with the authors of Akbik et al. (2018) that they used the revised version.", "Using the constraints for nested NER, we first evaluate our system on the nested named entity corpora: ACE 2004, ACE 2005 and GENIA.", "Table 2 shows the results.", "Both ACE 2004 and ACE 2005 contain 7 NER categories and have a relatively high ratio of nested entities (about 1/3 of the named entities are nested).", "Our results outperform the previous SoTA system by 2% (ACE 2004) and 1.1% (ACE 2005), respectively.", "GENIA differs from ACE 2004 and ACE 2005 and uses five medical categories, such as DNA or RNA.", "For the GENIA corpus, our system achieved an F1 score of 80.5% and improved the SoTA by 2.2% absolute.", "Our hypothesis is that for GENIA the high accuracy gain is due to our structured prediction approach, and that sequence-to-sequence models rely more on the language model embeddings, which are less informative for categories such as DNA and RNA.", "Table 3 (state-of-the-art comparison on the CONLL 2002, CONLL 2003 and ONTONOTES corpora for flat NER; P/R/F1): ONTONOTES: Chiu and Nichols (2016) 86.0/86.5/86.3; Strubell et al. (2017) F1 86.8; Clark et al. (2018) F1 88.8; Fisher and Vlachos (2019) F1 89.2; our model 91.1/91.5/91.3. CONLL 2003 English: Chiu and Nichols (2016) 91.4/91.9/91.6; Lample et al. (2016) F1 90.9; Strubell et al. (2017) F1 90.7; Devlin et al. (2019) F1 92.8; Strakova et al. (2019) F1 93.4; our model 93.7/93.3/93.5. CONLL 2003 German: Lample et al. (2016) F1 78.8; Strakova et al. (2019) F1 85.1; our model 88.3/84.6/86.4. CONLL 2003 German revised: Akbik et al. (2018) F1 88.3; our model 92.4/88.2/90.3. CONLL 2002 Spanish: Lample et al. (2016) F1 85.8; Strakova et al. (2019) F1 88.8; our model 90.6/90.0/90.3. CONLL 2002 Dutch: Lample et al. (2016) F1 81.7; Akbik et al. (2019) F1 90.4; Strakova et al. (2019) F1 92.7; our model 94.5/92.8/93.7.", "Our system achieved SoTA results on all three corpora for nested NER, demonstrating the advantages of structured prediction over a sequence labelling approach.", "We evaluate our system on five corpora for flat NER: CONLL 2002 (Dutch, Spanish), CONLL 2003 (English, German) and ONTONOTES.", "Unlike most systems, which treat flat NER as a sequence labelling task, our system predicts named entities by considering all possible spans and ranking them.", "The ONTONOTES corpus consists of documents from 7 different domains and is annotated with 18 fine-grained named entity categories.", "Table 4 (comparison between our full model and ablated models on the ONTONOTES development set; F1 / difference): our model 89.9; - biaffine 89.1 (-0.8); - BERT emb 87.5 (-2.4); - fastText emb 89.5 (-0.4); - char emb 89.8 (-0.1).", "Predicting named entities for this corpus is more difficult than for CONLL 2002 and CONLL 2003.", "These corpora use coarse-grained named entity categories (only 4 categories).", "The sequence-to-sequence models usually perform better on the CONLL 2003 English corpus (see Table 3), e.g., the systems of Chiu and Nichols (2016) and Strubell et al. (2017).",
"In contrast, our system is less sensitive to the domain and the granularity of the categories.", "As shown in Table 3, our system achieved an F1 score of 91.3% on the ONTONOTES corpus, which is very close to our system's performance on the CONLL 2003 corpus (93.5%).", "On the multilingual data, our system achieved F1 scores of 86.4% for German, 90.3% for Spanish and 93.7% for Dutch.", "Our system outperforms the previous SoTA results by a large margin of 2.1%, 1.5%, 1.3% and 1% on the ONTONOTES, Spanish, German and Dutch corpora respectively, and is slightly better than the SoTA on the English data set.", "In addition, we also tested our system on the revised version of the German data to compare with the model of Akbik et al. (2018); our system again achieved a substantial gain of 2% compared with their system.", "To evaluate the contribution of individual components of our system, we remove selected components and use ONTONOTES for evaluation (see Table 4).", "We choose ONTONOTES for our ablation study as it is the largest corpus.", "Biaffine Classifier We replace the biaffine mapping with a CRF layer and convert our system into a sequence labelling model.", "The CRF layer is frequently used in models for flat NER, e.g., Lample et al. (2016).", "When we replace the biaffine model of our system with a CRF layer, the performance drops by 0.8 percentage points (Table 4).", "This performance difference shows the benefit of adding a biaffine model and confirms our hypothesis that the dependency parsing framework is an important factor in the high accuracy of our system.", "Contextual Embeddings We ablate the BERT embeddings and, as expected, after removing them the system performance drops by a large margin of 2.4 percentage points (see Table 4).", "This shows that BERT embeddings are one of the most important factors for the accuracy.", "Context-Independent Embeddings We remove the context-independent fastText embeddings from our system.", "The context-independent embeddings contribute 0.4% towards the score of our full system (Table 4).", "This suggests that even with BERT embeddings enabled, context-independent embeddings can still make a quite noticeable improvement to a system.", "Character Embeddings Finally, we remove the character embeddings.", "As we can see from Table 4, the impact of character embeddings is quite small.", "One explanation would be that English is not a morphologically rich language and hence does not benefit much from character-level information, and that the BERT embeddings themselves are based on word pieces that already capture some character-level information.", "Overall, the biaffine mapping and the BERT embeddings together contributed most to the high accuracy of our system.", "In this paper, we reformulate NER as a structured prediction task and adopt a SoTA dependency parsing approach for nested and flat NER.", "Our system uses contextual embeddings as input to a multi-layer BiLSTM.", "We employ a biaffine model to assign scores to all spans in a sentence.", "Further constraints are used to predict nested or flat named entities.", "We evaluated our system on eight named entity corpora.", "The results show that our system achieves SoTA on all eight corpora.", "We demonstrate that advanced structured prediction techniques lead to substantial improvements for both nested and flat NER.", "This research was supported in part by the DALI project, ERC Grant 695662." ]
[ "abstain", "abstain", "method", "objective", "result", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "result", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "method", "other", "abstain", "other", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "other", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "method", "method", "abstain", "method", "result", "objective", "other" ]
[ "Neural coreference resolution models trained on one dataset may not transfer to new, low-resource domains.", "Active learning mitigates this problem by sampling a small subset of data for annotators to label.", "While active learning is well-defined for classification tasks, its application to coreference resolution is neither well-defined nor fully understood.", "This paper explores how to actively label coreference, examining sources of model uncertainty and document reading costs.", "We compare uncertainty sampling strategies and their advantages through thorough error analysis.", "In both synthetic and human experiments, labeling spans within the same document is more effective than annotating spans across documents.", "The findings contribute to a more realistic development of coreference resolution models.", "Linguistic expressions are coreferent if they refer to the same entity.", "The computational task of discovering coreferent mentions is coreference resolution ( CR ).", "Neural models (Lee et al., 2018; Joshi et al., 2020) are SOTA on ONTONOTES 5.0 (Pradhan et al., 2013) but cannot immediately generalize to other datasets.", "Generalization is difficult because domains differ in content, writing style, and annotation guidelines.", "To overcome these challenges, models need copiously labeled, in-domain data (Bamman et al., 2020).", "Despite expensive labeling costs, adapting CR is crucial for applications like uncovering information about proteins in biomedicine (Kim et al., 2012) and distinguishing entities in legal documents (Gupta et al., 2018).", "Ideally, we would like to quickly and cheaply adapt the model without repeatedly relying on an excessive amount of annotations to retrain the model.", "To reduce labeling cost, we investigate active learning (Settles, 2009) for CR .", "Active learning aims to reduce annotation costs by intelligently selecting examples to label.", "Prior approaches use active learning to improve the model within the same domain (Gasperin, 2009; Sachan et al., 2015) without considering adapting to new data distributions.", "For domain adaptation in CR , Zhao and Ng (2014) motivate the use of active learning to select out-of-distribution examples.", "A word like the bonds refers to municipal bonds in ONTONOTES but links to chemical bonds in another domain (Figure 1).", "If users annotate the antecedents of the bonds and other ambiguous entity mentions, then these labels help adapt a model trained on ONTONOTES to new domains.", "Active learning for CR adaptation is well-motivated, but the implementation is neither straightforward nor well-studied.", "First, CR is a span detection and clustering task, so selecting which spans to label is more complicated than choosing independent examples for text classification.", "Second, CR labeling involves closely reading the documents.", "Labeling more spans within the same context is more efficient.", "However, labeling more spans across different documents increases data diversity and may improve model transfer.", "How should we balance these competing objectives?", "Our paper extends prior work in active learning for CR to the problem of coreference model transfer (Xia and Van Durme, 2021): 1. We generalize the clustered entropy sampling strategy (Li et al., 2020) to include uncertainty in mention detection.", "We analyze the effect of each strategy on coreference model transfer.", "2. 
We investigate the trade-off between labeling and reading through simulations and a real-time user study.", "Limiting annotations to the same document increases labeling throughput and decreases volatility in model training.", "Taken together, these contributions offer a blueprint for faster creation of CR models across domains (code at https://github.com/forest-snow/incremental-coref).", "[Figure 1: CR models are trained on the source domain ONTONOTES, which contains data like news articles.]", "Lee et al. (2018) introduce C2F-COREF, a neural model that outperforms prior rule-based systems.", "It assigns an antecedent y to a mention span x.", "The set Y(x) of possible antecedent spans includes a dummy antecedent $\epsilon$ and all spans preceding x.", "If span x has no antecedent, then x should be assigned to $\epsilon$.", "Given entity mention x, the model learns a distribution over its candidate antecedents in Y(x): $P(Y=y) = \frac{\exp\{s(x,y)\}}{\sum_{y' \in Y(x)} \exp\{s(x,y')\}}$ (1).", "The scores s(x, y) are computed by the model's pairwise scorer (Appendix A.1).", "CR models like C2F-COREF are typically trained on ONTONOTES.", "Recent work in CR improves upon C2F-COREF and has SOTA results on ONTONOTES (Wu et al., 2020; Joshi et al., 2020).", "However, annotation guidelines and the underlying text differ across domains.", "As a result, these CR models cannot immediately transfer to other datasets.", "For different domains, spans could hold different meanings or link to different entities.", "Xia and Van Durme (2021) show the benefits of continued training, where a model trained on ONTONOTES is further trained on the target dataset.", "For several target domains, continued training from ONTONOTES is stronger than training the model from scratch, especially when the training dataset is small.", "Their experiments use an incremental variant of C2F-COREF called ICOREF (Xia et al., 2020).", "While C2F-COREF requires O(n) memory to simultaneously access all spans in the document and infer a span's antecedent, ICOREF only needs constant memory to predict a span's entity cluster.", "Despite using less space, ICOREF retains the same accuracy as C2F-COREF.", "Rather than assigning x to antecedent y, ICOREF assigns x to cluster c, where c is from a set of observed entity clusters C: $P(C=c) = \frac{\exp\{s(x,c)\}}{\sum_{c' \in C} \exp\{s(x,c')\}}$ (2).", "As the algorithm processes spans in the document, each span is either placed in a cluster from C or added to a new cluster.", "To learn the distribution over clusters (Equation 2), the algorithm first creates a cluster representation that aggregates the representations of the spans currently in the cluster.", "With cluster and span representations, individual spans and entity clusters are mapped into a shared space.", "Then, we can compute s(x, c) using the same pairwise scorer as before.",
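The incremental clustering loop just described can be sketched as follows; this toy version uses mean-pooled cluster representations and a fixed new-cluster threshold, both of which are stand-ins for the learned components of ICOREF.

```python
import numpy as np

def incremental_cluster(span_embs, score_fn, new_threshold=0.0):
    """Toy ICoref-style clustering: each span joins the best-scoring
    existing cluster or opens a new one. `score_fn(x, rep)` stands in
    for the pairwise scorer s(x, c) over the shared embedding space."""
    clusters = []       # list of [member_count, running-mean representation]
    assignments = []
    for x in span_embs:
        scores = [score_fn(x, rep) for _, rep in clusters]
        if scores and max(scores) > new_threshold:
            best = int(np.argmax(scores))
            n, rep = clusters[best]
            clusters[best] = [n + 1, (n * rep + x) / (n + 1)]  # update aggregate
            assignments.append(best)
        else:
            clusters.append([1, np.asarray(x, dtype=float)])   # new cluster
            assignments.append(len(clusters) - 1)
    return assignments
```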
"Continued training assumes that labeled data already exist in the target domain.", "However, model transfer is more critical when annotations are scarce.", "Thus, the question becomes: how can we adapt CR models without requiring a large, labeled dataset?", "Our paper investigates active learning as a potential solution.", "Through active learning, we reduce labeling costs by sampling and annotating a small subset of ambiguous spans.", "Neural models achieve high accuracy for ONTONOTES but cannot quickly adapt to new datasets because of shifts in domain or annotation standards (Poot and van Cranenburgh, 2020).", "To transfer to new domains, models need substantial in-domain, labeled data.", "In low-resource situations, CR is infeasible for real-time applications.", "To reduce the labeling burden, active learning may target the spans that most confuse the model.", "Active learning for domain adaptation (Rai et al., 2010) typically proceeds as follows: begin with a model trained on source data, sample and label k spans from documents in the target domain based on a strategy, and train the model on the labeled data.", "This labeling setup may appear straightforward to apply to CR, but there are some tricky details.", "The first complication is that, unlike text classification, CR is a clustering task.", "Early approaches in active learning for CR use pairwise annotations (Miller et al., 2012; Sachan et al., 2015).", "Pairs of spans are sampled and the annotator labels whether each pair is coreferent.", "The downside to pairwise annotation is that it requires many labels.", "To label the antecedent of entity mention x, x must be compared to every candidate span in the document.", "Li et al. (2020) propose a new scheme called discrete annotations.", "Instead of sampling pairs of spans, the active learning strategy samples individual spans.", "Then, the annotator only has to find and label the first antecedent of x in the document, which bypasses the multiple pairwise comparisons.", "Thus, we use discrete annotations to minimize labeling.", "To further improve active learning for CR, we consider the following issues.", "First, the CR model has different scores for mention detection and linking, but prior active learning methods only consider linking.", "Second, labeling CR requires time to read the document context.", "Therefore, we explore important aspects of active learning for adapting CR: model uncertainty (Section 3.1), and the balance between reading and labeling (Section 3.2).", "A well-known active learning strategy is uncertainty sampling.", "A common measure of uncertainty is the entropy of the distribution of the model's predictions for a given example (Lewis and Gale, 1994).", "Labeling uncertain examples improves accuracy for tasks like text classification (Settles, 2009).", "For CR, models have multiple components, and computing uncertainty is not as straightforward.", "Is uncertainty over where mentions are located more important than uncertainty over linking spans?", "Or the other way around?", "Thus, we investigate different sources of CR model uncertainty.", "To sample spans for learning CR, Li et al. (2020) propose a strategy called clustered entropy.", "This metric scores the uncertainty in the entity cluster assignment of a mention span x.", "If x has high clustered entropy, then it should be labeled to help the model learn its antecedents.", "Computing clustered entropy requires the probability that x is assigned to an entity cluster.", "Li et al. (2020) use C2F-COREF, which only gives the probability of x being assigned to antecedent y.", "So, they define P(C=c) as the sum of antecedent probabilities P(Y=y): $P(C=c) = \sum_{y \in C \cap Y(x)} P(Y=y)$ (4).", "The computation of clustered entropy in Equation 4 poses two issues.", "First, summing the probabilities may not accurately represent the model's probability of linking x to c.", "There are other ways to aggregate the probabilities (e.g., taking the maximum).",
"C2F-COREF never computes cluster probabilities to make predictions, so it is not obvious how P(C=c) should be computed for clustered entropy.", "Second, Equation 4 does not consider mention detection.", "For ONTONOTES, this is not an issue because singletons (clusters of size 1) are not annotated and the mention detection score is implicitly included in P(Y=y).", "For other datasets containing singletons, the model should disambiguate singleton clusters from non-mention spans.", "To resolve these issues, we make the following changes.", "First, we use ICOREF to obtain cluster probabilities.", "ICOREF is a mention clustering model, so it already has probabilities over entity clusters (Equation 2).", "Second, we explore other forms of maximum entropy sampling.", "Neural CR models have scorers for mention detection and clustering.", "Both scores should be considered to sample spans that confuse the model.", "Thus, we propose more strategies to target uncertainty in mention detection.", "To generalize entropy sampling, we first formalize mention detection and clustering.", "Given span x, assume X is the random variable encoding whether x is an entity mention (1) or not (0).", "In Section 2, we assume that the cluster distribution P(C) is independent of X: P(C) = P(C|X).", "In other words, Equation 2 is actually computing P(C=c|X=1).", "We sample the top-k spans with the following strategies.", "$H_{\mathrm{MENT}}(x) = H(X) = -\sum_{i=0}^{1} P(X=i) \log P(X=i)$ (5).", "The probability P(X) is computed from normalized mention scores $s_m$ (Equation 10).", "Ment-ent may sample spans that challenge mention detection (e.g., class-ambiguous words like 'park').", "The annotator can clarify whether spans are entity mentions to improve mention detection.", "$H_{\mathrm{CLUST}}(x) = H(C \mid X=1) = -\sum_{c \in C} P(C=c \mid X=1) \log P(C=c \mid X=1)$ (6).", "Clust-ent looks at clustering scores without explicitly addressing mention detection.", "Like in ONTONOTES, all spans are assumed to be entity mentions.", "The likelihood P(C=c|X=1) is given by ICOREF (Equation 2).", "$H_{\mathrm{COND}}(x) = H(C \mid X) = \sum_{i=0}^{1} P(X=i)\,H(C \mid X=i) = P(X=1)\,H(C \mid X=1)$ (7).", "We reach the last equation because there is no uncertainty in clustering x if x is not an entity mention, i.e. H(C|X=0) = 0.", "Cond-ent takes the uncertainty of mention detection into account.", "So, we may sample more pronouns because they are obviously mentions but difficult to cluster.", "$H_{\mathrm{JOINT}}(x) = H(C, X) = H(X) + H(C \mid X)$ (8).", "Joint-ent may sample spans that are difficult to detect as entity mentions and too confusing to cluster.", "This sampling strategy most closely aligns with the uncertainty of the training objective.", "It may also fix any imbalance between mention detection and linking (Wu and Gardner, 2021).",
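Putting the four strategies together, below is a minimal sketch of the acquisition scores for a single span; the joint-entropy line relies on the chain rule, an assumption consistent with the definitions above.

```python
import numpy as np

def entropy(p, eps=1e-12):
    p = np.asarray(p, dtype=float)
    return float(-(p * np.log(p + eps)).sum())

def acquisition_scores(p_mention, p_cluster):
    """p_mention: P(X=1) that the span is a mention; p_cluster: the
    distribution P(C|X=1) over entity clusters. cond-ent uses
    H(C|X=0) = 0 as in the derivation above."""
    h_ment = entropy([p_mention, 1.0 - p_mention])       # Eq. (5)
    h_clust = entropy(p_cluster)                          # Eq. (6)
    h_cond = p_mention * h_clust                          # Eq. (7)
    h_joint = h_ment + h_cond                             # Eq. (8)
    return {"ment-ent": h_ment, "clust-ent": h_clust,
            "cond-ent": h_cond, "joint-ent": h_joint}
```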
"For CR, the annotator reads the document context to label the antecedent of a mention span.", "Annotating and reading spans from different documents may slow down labeling, but restricting sampling to the same document may cause redundant labeling (Miller et al., 2012).", "To better understand this trade-off, we explore different configurations of k, the number of annotated spans, and m, the maximum number of documents being read.", "Given a source model h0 already fine-tuned on ONTONOTES, we adapt h0 to a target domain through active learning (Algorithm 1): Scoring To sample k spans from the unlabeled data U of the target domain, we score spans with an active learning strategy S.", "Assume S scores each span through an acquisition model (Lowell et al., 2019).", "For the acquisition model, we use h_{t-1}, the model fine-tuned from the last cycle.", "The acquisition score quantifies the span's importance given S and the acquisition model.", "Reading Typically, active learning samples the k spans with the highest acquisition scores.", "To constrain m, the number of documents read, we find the documents of the m spans with the highest acquisition scores and only sample spans from those documents.", "Then, the k sampled spans will belong to at most m documents.", "If m is set to unconstrained, then we simply sample the k highest-scoring spans, irrespective of document boundaries.", "Algorithm 1 Active Learning for Coreference. Require: source model h0, unlabeled data U, active learning strategy S, number of cycles T, number of labeled spans k, maximum number of read documents m. 1: labeled data L = {}; 2: for cycles t = 1, ..., T do; 3: a_x <- score span x in U by S(h_{t-1}, x); 4: Q <- sort x in U by scores a_x; 5: Q_m <- top-m spans in Q; 6: D <- {d_x | x in Q_m}, where d_x is the document of x; 7: Q <- filter Q s.t. spans belong to some d in D; 8: Q_k <- top-k spans in Q; 9: L_k <- label antecedents for Q_k; 10: L <- L union L_k; 11: h_t <- continue training h0 on L; return h_T.", "Miller et al. (2012) sample the span with the highest uncertainty and continue sampling from the same document until uncertainty falls below a threshold.", "Then, they sample the most uncertain span from a new document.", "We modify their method because the uncertainty threshold will vary for different datasets and models.", "Instead, we use the number of documents read to control context switching.", "Labeling An oracle (e.g., a human annotator or gold data) labels the antecedents of sampled spans with discrete annotations (Section 3).", "Continued Training We combine the data labeled in the current and past cycles.", "We train the source model h0 (which is already trained on ONTONOTES) on the labeled target data.", "We do not continue training a model from a past active learning cycle because it may be biased from training only on scarce target data (Ash and Adams, 2020).",
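A compact sketch of Algorithm 1; the strategy, oracle, and training functions are placeholders for the paper's components, and spans are assumed to expose a document id.

```python
def active_learning(h0, unlabeled, strategy, T, k, m, label_fn, train_fn):
    """Transcription of Algorithm 1. `strategy(model, span)` returns an
    acquisition score, `label_fn` queries the oracle for antecedents,
    and `train_fn(h0, labeled)` continues training the source model."""
    labeled, model = [], h0
    for t in range(T):
        ranked = sorted(unlabeled, key=lambda x: -strategy(model, x))
        docs = {x.doc_id for x in ranked[:m]}        # docs of the top-m spans
        in_docs = [x for x in ranked if x.doc_id in docs]
        q_k = in_docs[:k]                             # top-k spans, <= m docs
        labeled += label_fn(q_k)                      # discrete annotations
        unlabeled = [x for x in unlabeled if x not in q_k]
        model = train_fn(h0, labeled)                 # always retrain from h0
    return model
```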
"We run experiments to understand two important factors of active learning for CR: sources of model uncertainty (Section 3.1) and balancing reading against labeling (Section 3.2).", "First, we simulate active learning on PRECO to compare sampling strategies based on various forms of uncertainty (Section 4.1).", "Then, we set up a user study to investigate how humans perform when labeling spans from fewer or more documents from PRECO (Section 4.2).", "Specifically, we analyze their annotation time and throughput.", "Finally, we run large-scale simulations on PRECO and QBCOREF (Section 4.3).", "Models In all experiments, the source model is the best checkpoint of the ICOREF model trained on ONTONOTES (Xia et al., 2020) with a SPANBERT-LARGE-CASED (Joshi et al., 2020) encoder.", "For continued training on the target dataset, we optimize with a fixed parameter configuration (Appendix A.2).", "We evaluate models on AVG F1, the average of the F1 scores of MUC (Vilain et al., 1995), B^3 (Bagga and Baldwin, 1998), and CEAF_phi4 (Luo, 2005).", "For all synthetic experiments, we simulate active learning with gold data substituting for an annotator.", "However, gold mention boundaries are not used when sampling data.", "The model scores spans that are likely to be entity mentions for inference, so we limit the active learning candidates to this pool of high-scoring spans.", "For each active learning simulation, we repeat five runs with different random seed initializations.", "Baselines We compare the proposed sampling strategies (Section 3.1.2) along with li-clust-ent, which is the clustered entropy from Li et al. (2020) (Equation 4).", "Active learning is frustratingly less effective than random sampling in many settings (Lowell et al., 2019), so we include two random baselines in our simulations.", "Random samples from all spans in the documents.", "Random-ment, like the other strategies, samples only from the pool of likely (high-scoring) spans.", "Thus, random-ment should be a stronger baseline than random.", "Datasets ONTONOTES 5.0 is the most common dataset for training and evaluating CR (Pradhan et al., 2013).", "The dataset contains news articles and telephone conversations.", "Only non-singletons are annotated.", "Our experiments transfer a model trained on ONTONOTES to two target datasets: PRECO and QBCOREF.", "PRECO is a large corpus of grade-school reading comprehension texts (Chen et al., 2018).", "Unlike ONTONOTES, PRECO has annotated singletons.", "There are 37K training, 500 validation, and 500 test documents.", "Because the training set is so large, Chen et al. (2018) only analyze subsets of 2.5K documents.", "Likewise, we reduce the training set to a subset of 2.5K documents, comparable to the size of ONTONOTES.", "The QBCOREF dataset (Guha et al., 2015) contains trivia questions from Quizbowl tournaments that are densely packed with entities from academic topics.", "Like PRECO, singletons are annotated.",
multiple ones.", "Conflated entity happens when the model merges gold entity clusters.", "For each strategy, we analyze the errors of its final model from the simulation's last 7538 0 0.3 0.6 0.9 clust-entcond-entjoint-ent li-clust-entment-entrandom random-mentsource conflated entity 0 1 2 3 4 divided entity 0 3 6 9 12 clust-entcond-entjoint-ent li-clust-entment-entrandom random-mentsource extra entity 0 4 8 12 extra mention 0 10 20 clust-entcond-entjoint-ent li-clust-entment-entrandom random-mentsource missing entity 0 10 20 30 No.", "The source model makes many missing entity and missing mention errors.", "It does not detect several entity spans in PRECO , like locations (Long Island) or ones spanning multiple words (his kind acts of providing everything that I needed).", "These spans are detected by uncertainty sampling strategies and rand-ment .", "Ment-ent is most effective at reducing missing errors.", "It detects gold entity clusters like constant communication and the best educated guess about the storm.", "By training on spans that confuse the mention detector, the model adapts to the new domain by understanding what constitutes as an entity mention.", "Surprisingly, li-clust-ent makes at least twice as many extra entity and extra mention errors than any other strategy.", "For the sentence, Living in a large building with only 10 bedrooms, the gold data identifies two entities: a large building with only 10 bedrooms and 10 bedrooms.", "In both ONTONOTES and PRECO , the guidelines only allow the longest noun phrase to be annotated.", "Yet, the li-clust-ent model predicts additional mentions, a large building and only 10 bedrooms.", "We find that li-clust-ent tends to sample nested spans (Ta-ble 4).", "Due to the summed entropy computation, nested spans share similar values for clustered entropy as they share similar antecedent-linking probabilities.", "This causes the extra entity and extra mention errors because the model predicts there are additional entity mentions within a mention span.", "Finally, we see a stark difference between random-ment and random .", "Out of all the sampling strategies, random is least effective at preventing missing entity and missing mention errors.", "We are more likely to sample non-entities if we randomly sample from all spans in the document (Appendix A.7).", "By limiting the sampling pool to only spans that are likely to be entity mentions, we sample more spans that are useful to label for CR .", "Thus, the mention detector from neural models should be deployed during active learning.", "We hold a user study to observe the trade-off between reading and labeling.", "Three annotators, with minimal NLP knowledge, label spans sampled from PRECO .", "We use ment-ent to sample spans because the strategy shows highest AVGF 1 (Figure 2).", "First, the users read instructions (Appendix A.6) and practice labeling for ten minutes.", "Then, they complete two sessions: FewDocs and ManyDocs .", "In each session, they label as much as possible for at least twenty-five minutes.", "In FewDocs , they read fewer documents and label roughly seven spans per document.", "In ManyDocs , they read more documents and label about one span per document.", "For labeling coreference, we develop a user interface that is open-sourced (Figure 8).", "To label the antecedent of the highlighted span, the user clicks on a contiguous span of tokens.", "The interface suggests overlapping candidates based on the spans that are retained by the CR model.", "In the user study, 
"In the user study, participants label at least twice as much in FewDocs as in ManyDocs (Figure 5).", "By labeling more spans in FewDocs, the mean AVG F1 score is also slightly higher.", "Our findings show that the number of documents read should be constrained to increase labeling throughput.", "The difference in the number of labeled spans between FewDocs and ManyDocs is even more pronounced when two annotators volunteer to continue labeling after the required duration (Appendix A.6).", "We finally run simulations to explore both the sources of model uncertainty and the trade-off between reading and labeling.", "The earlier experiments looked at each aspect individually.", "Now, we analyze the interaction between both factors to understand which combination works best for adapting CR to new domains.", "We run simulations on PRECO and QBCOREF that trade off the number of documents read m against the number of annotated spans k (Figure 6).", "We vary m between one, five, and an unconstrained number of documents.", "For PRECO, we set k to twenty and fifty.", "For QBCOREF, we set k to twenty and forty.", "These results are also presented in numerical form (Appendix A.5).", "PRECO For PRECO, the test AVG F1 of ICOREF trained on the full training dataset is 0.860.", "When m is constrained to one or five, AVG F1 can reach around 0.707 from training the model on only 300 spans sampled by ment-ent.", "As m increases, fewer spans are sampled per document and all sampling strategies deteriorate.", "After training on sparsely annotated documents, the model tends to predict singletons rather than cluster coreferent spans.", "As in the user study, we see benefits when labeling more spans within a document.", "Interestingly, li-clust-ent performs better when document reading is not constrained to one document.", "The issue with li-clust-ent is that it samples nested mention spans (Section 4.1.2).", "Duplicate sampling is less severe if spans can be sampled across more documents.", "Another strategy that suffers from duplicate sampling is cond-ent, because it mainly samples pronouns.", "For some documents, the pronouns all link to the same entity cluster.", "As a result, the model trains on a less diverse set of entity mentions, and cond-ent drops in AVG F1 as the simulation continues.", "QBCOREF For QBCOREF, the test AVG F1 of ICOREF trained on the full training dataset is 0.795.", "When we constrain m to one or five, li-clust-ent, clust-ent, cond-ent, and joint-ent have high AVG F1.", "Clustering entity mentions in QBCOREF questions is difficult, so these strategies help target ambiguous mentions (Table 5).", "Ment-ent is less useful because demonstratives are abundant in QBCOREF and make mention detection easier.", "Li-clust-ent still samples nested entity mentions, but annotations for these spans help clarify interwoven entities in Quizbowl questions.", "Unlike in PRECO, li-clust-ent does not sample duplicate entities because the nested entity mentions belong to different clusters and need to be distinguished.", "Overall, the most helpful strategy depends on the domain.", "For domains like PRECO that contain long documents with many singletons, ment-ent is useful.", "For domains like QBCOREF where resolving coreference is difficult, we need to target linking uncertainty.", "Regardless of the dataset, random performs worst.", "Random-ment has much higher AVG F1, which shows the importance of the mention detector in active learning.", "Future work should determine the appropriate strategy for a given domain and annotation setup.",
"Gasperin (2009) presents the first work on active learning for CR yet observes negative results: active learning is not more effective than random sampling.", "Miller et al. (2012) explore different settings for labeling CR.", "First, they label the most uncertain pairs of spans in the corpus.", "Second, they label all pairs in the most uncertain documents.", "The first approach beats random sampling but requires the annotator to read an infeasible number of documents.", "The second approach is more realistic but loses to random sampling.", "Zhao and Ng (2014) argue that active learning helps domain adaptation of CR.", "Sachan et al. (2015) treat pairwise annotations as optimization constraints.", "Li et al. (2020) replace pairwise annotations with discrete annotations and experiment with active learning for neural models.", "Active learning has been exhaustively studied for text classification (Lewis and Gale, 1994; Zhu et al., 2008; Zhang et al., 2017).", "Text classification is a much simpler task, so researchers investigate strategies beyond uncertainty sampling.", "Yuan et al. (2020) use language model surprisal to cluster documents and then sample representative points for each cluster.", "Margatina et al. (2021) search for contrastive examples, which are documents that are similar in the feature space yet differ in predictive likelihood.", "Active learning is also applied to tasks like machine translation (Liu et al., 2018), visual question answering (Karamcheti et al., 2021), and entity alignment (Liu et al., 2021).", "Rather than solely running simulations, other papers have also run user studies or developed user-friendly interfaces.", "Wei et al. (2019) hold a user study for active learning to observe the time to annotate clinical named entities.", "Lee et al. (2020) develop active learning for language learning that adjusts labeling difficulty based on user skills.", "Klie et al.
(2020) create a human-in-the-loop pipeline to improve entity linking for low-resource domains.", "Neural CR models desperately depend on large, labeled data.", "We use active learning to transfer a model trained on ONTONOTES, the de facto dataset, to new domains.", "Active learning for CR is difficult because the problem is not only about sampling examples.", "We must consider different aspects, like the sources of model uncertainty and the cost of reading documents.", "Our work explores these factors through exhaustive simulations.", "Additionally, we develop a user interface and run a user study in which we observe human annotation time and throughput.", "In both the simulations and the user study, CR improves from continued training on spans sampled from the same document rather than from different contexts.", "Surprisingly, sampling by entropy in mention detection, rather than linking, is most helpful for domains like PRECO.", "This opposes the assumption that the uncertainty strategy must be directly tied to the training objective.", "Future work may extend our contributions to multilingual transfer or multi-component tasks, like open-domain QA.", "This paper involves a user study to observe the trade-off between reading and labeling costs for annotating coreference.", "The study has been approved by an IRB to collect data about human behavior.", "Any personal information will be anonymized prior to paper submission or publication.", "All participants are fully aware of the labeling task and the information that will be collected from them.", "They are appropriately compensated for their labeling efforts.", "We thank Ani Nenkova, Jonathan Kummerfeld, Matthew Shu, Chen Zhao, and the anonymous reviewers for their insightful feedback.", "We thank the user study participants for supporting this work through annotating data.", "Michelle Yuan and Jordan Boyd-Graber are supported in part by Adobe Inc.", "Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsors." ]
[ "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "objective", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "abstain", "method", "objective", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "While BERT is an effective method for learning monolingual sentence embeddings for semantic similarity and embedding based transfer learning (Reimers and Gurevych, 2019), BERT based cross-lingual sentence embeddings have yet to be explored.", "We systematically investigate methods for learning multilingual sentence embeddings by combining the best methods for learning monolingual and cross-lingual representations including: masked language modeling (MLM), translation language modeling (TLM) (Conneau and Lample, 2019), dual encoder translation ranking (Guo et al., 2018), and additive margin softmax (Yang et al., 2019a).", "We show that introducing a pre-trained multilingual language model dramatically reduces the amount of parallel training data required to achieve good performance by 80%.", "Composing the best of these methods produces a model that achieves 83.7% bi-text retrieval accuracy over 112 languages on Tatoeba, well above the 65.5% achieved by Artetxe and Schwenk (2019b), while still performing competitively on monolingual transfer learning benchmarks (Con-neau and Kiela, 2018).", "Parallel data mined from CommonCrawl using our best model is shown to train competitive NMT models for en-zh and en-de.", "We publicly release our best multilingual sentence embedding model for 109+ languages at https://tfhub.dev/ google/LaBSE .", "In this paper, we systematically explore using pretraining language models in combination with the best of existing methods for learning cross-lingual sentence embeddings.", "Such embeddings are useful for clustering, retrieval, and modular use of text representations for downstream tasks.", "While Equal contributions.", "existing cross-lingual sentence embedding models incorporate large transformer models, using large pretrained language models is not well explored.", "Rather in prior work, encoders are trained directly on translation pairs (Artetxe and Schwenk, 2019b; Guo et al., 2018; Yang et al., 2019a), or on translation pairs combined with monolingual input-response prediction (Chidambaram et al., 2019; Yang et al., 2019b).", "In our exploration, as illustrated in figure 1, we make use of dual-encoder models, which have been demonstrated as an effective approach for learning bilingual sentence embeddings (Guo et al., 2018; Yang et al., 2019a).", "However, diverging from prior work, rather than training encoders from scratch, we investigate using pre-trained encoders based on large language models.", "We contrast models with and without additive margin softmax (Yang et al., 2019a) 1 .", "Figure 2 illustrates where our work stands (shaded) in the field of LM pre-training and sentence embedding learning.", "Our massively multilingual models outperform the previous state-of-the-art on large bi-text retrieval tasks including the United Nations (UN) 1 We also investigate the impact of mining hard negatives (Guo et al., 2018), but found it doesn't provide additional gain on top of other approaches.", "See supplemental material for details.", "corpus (Ziemski et al., 2016) and BUCC (Zweigen-baum et al., 2018).", "Table 1 compares our best model with other recent multilingual work.", "Both the UN corpus and BUCC cover resource rich languages (fr, de, es, ru, and zh).", "We further evaluate our models on the Tatoeba retrieval task (Artetxe and Schwenk, 2019b) that covers 112 languages.", "Compare to LASER (Artetxe and Schwenk, 2019b), our models perform significantly better on low-resource languages, boosting the overall accuracy on 112 languages to 83.7%, 
from the 65.5% achieved by the previous state of the art.", "Surprisingly, we observe our models perform well on 30+ Tatoeba languages for which we have no explicit monolingual or bilingual training data.", "Finally, our embeddings perform competitively on the SentEval sentence embedding transfer learning benchmark (Conneau and Kiela, 2018).", "The contributions of this paper are: a novel combination of pre-training and dual-encoder finetuning to boost translation ranking performance, achieving a new state of the art on bi-text mining;", "a publicly released multilingual sentence embedding model spanning 109+ languages;", "and thorough experiments and ablation studies to understand the impact of pre-training, negative sampling strategies, vocabulary choice, data quality, and data quantity.", "Dual encoder models are an effective approach for learning cross-lingual embeddings (Guo et al., 2018; Yang et al., 2019a).", "Such models consist of paired encoding models that feed a scoring function.", "The source and target sentences are encoded separately.", "Sentence embeddings are extracted from each encoder.", "Cross-lingual embeddings are trained using a translation ranking task with in-batch negative sampling: $\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{\phi(x_i, y_i)}}{e^{\phi(x_i, y_i)} + \sum_{n=1, n \neq i}^{N} e^{\phi(x_i, y_n)}}$ (1).", "The embedding-space similarity of x and y is given by $\phi(x, y)$, typically $\phi(x, y) = x y^{\top}$.", "The loss attempts to rank $y_i$, the true translation of $x_i$, over all $N-1$ alternatives in the same batch.", "Notice that $\mathcal{L}$ is asymmetric and depends on whether the softmax is over the source or the target sentences.", "For bidirectional symmetry, the final loss can sum the source-to-target loss $\mathcal{L}$ and the target-to-source loss $\mathcal{L}'$ (Yang et al., 2019a): $\bar{\mathcal{L}} = \mathcal{L} + \mathcal{L}'$ (2).", "Dual encoder models trained using a translation ranking loss directly maximize the similarity of translation pairs in a shared embedding space.", "Additive margin softmax extends the scoring function by introducing a margin m around positive pairs (Yang et al., 2019a): $\phi'(x_i, y_j) = \phi(x_i, y_j) - m$ if $i = j$, and $\phi'(x_i, y_j) = \phi(x_i, y_j)$ otherwise (3).", "The margin, m, improves the separation between translations and nearby non-translations.", "Using $\phi'(x_i, y_j)$ with the bidirectional loss, we obtain the additive margin loss: $\mathcal{L}_{\mathrm{ams}} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{\phi(x_i, y_i) - m}}{e^{\phi(x_i, y_i) - m} + \sum_{n=1, n \neq i}^{N} e^{\phi(x_i, y_n)}}$ (4).",
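A minimal PyTorch sketch of the bidirectional additive margin loss (setting margin to 0 recovers Equation 2); applying the margin to the scaled similarity is a simplifying assumption, with the scale constant borrowed from the training-details note on normalized embeddings.

```python
import torch
import torch.nn.functional as F

def bidirectional_ams_loss(x_emb, y_emb, margin=0.3, scale=10.0):
    """In-batch translation ranking loss (Equations 1-4). x_emb, y_emb:
    (N, d) L2-normalized embeddings of N translation pairs; row i of one
    matrix is the translation of row i of the other."""
    sim = scale * (x_emb @ y_emb.t())            # phi(x_i, y_j) for all pairs
    sim = sim - margin * torch.eye(sim.size(0), device=sim.device)
    targets = torch.arange(sim.size(0), device=sim.device)
    # Row-wise softmax = source-to-target; column-wise = target-to-source.
    return F.cross_entropy(sim, targets) + F.cross_entropy(sim.t(), targets)
```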
"As shown in Hu et al. (2020), the performance of such models on bitext retrieval tasks is very weak; e.g., XLM-R Large achieves 57.3% accuracy on a selection of 37 languages from the Tatoeba dataset, compared to 84.4% using LASER (see the performance of more models in Table 5).",
"(The number of languages is counted from the official evaluation script, although the original paper says 33 languages.)",
"We contribute a detailed exploration that uses pre-trained language models to produce useful multilingual sentence embeddings.",
"Monolingual Data.",
"We collect monolingual data from CommonCrawl and Wikipedia.",
"We use the 2019-35 version of CommonCrawl, with heuristics from Raffel et al. (2019) to remove noisy text.",
"Additionally, we remove short lines of < 10 characters and long lines of > 5000 characters.",
"The Wikipedia data is extracted from the 05-21-2020 dump using WikiExtractor (https://github.com/attardi/wikiextractor).",
"An in-house tool splits the text into sentences.",
"The sentences are filtered using a sentence quality classifier.",
"(The quality classifier is trained using sentences from the main content of webpages as positives and text from other areas as negatives.)",
"After filtering, we obtain 17B monolingual sentences, about 50% of the unfiltered version.",
"The monolingual data is only used in custom pre-training.",
"Bilingual Translation Pairs.",
"The translation corpus is constructed from web pages using a bitext mining system similar to the approach described in Uszkoreit et al. (2010).",
"The extracted sentence pairs are filtered by a pre-trained contrastive-data-selection (CDS) scoring model (Wang et al., 2018).",
"Human annotators manually evaluate sentence pairs from a small subset of the harvested pairs and mark the pairs as either GOOD or BAD translations.",
"The data-selection scoring model threshold is chosen such that 80% of the retained pairs from the manual evaluation are rated as GOOD.",
"We limit the maximum number of sentence pairs to 100 million for each language to balance the data distribution.",
"Many languages still have far fewer than 100M sentences.",
"The final corpus contains 6B translation pairs.",
"The translation corpus is used for both dual encoder training and custom pre-training.",
"In this section, we describe the training details for the dual encoder model.",
"A transformer encoder is used in all experiments (Vaswani et al., 2017).",
"We train two versions of the model: one uses the public multilingual BERT cased vocabulary with a vocabulary size of 119,547, and a second incorporates a customized vocabulary extracted from our training data.",
"For the customized vocabulary, we employ a wordpiece tokenizer (Sennrich et al., 2016), with a cased vocabulary extracted from the training set using TF Text.",
"The language smoothing exponent for the vocabulary generation tool is set to 0.3 to counter imbalances in the amount of data available per language.",
"The final vocabulary size is 501,153.",
"The encoder architecture follows the BERT Base model, with 12 transformer blocks, 12 attention heads, and 768 per-position hidden units.",
"The encoder parameters are shared across all languages.",
"Sentence embeddings are extracted as the l2-normalized [CLS] token representation from the last transformer block.",
"(During training, the sentence embeddings after normalization are multiplied by a scaling factor; following Chidambaram et al. (2018), we set the scaling factor to 10, and we observe that it is important for training a dual encoder model with normalized embeddings.)",
"Our models are trained on Cloud TPU V3 with 32 cores, using a global batch size of 4096 with a maximum sequence length of 128, and using the AdamW optimizer (Loshchilov and Hutter, 2019) with an initial learning rate of 1e-3 and linear weight decay.",
"We train for 50k steps for pre-trained models, and 500k steps for models without pre-training.",
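A rough sketch of the embedding extraction just described, assuming a HuggingFace-style encoder and tokenizer (both interfaces are our assumption; the paper's models are TensorFlow-based):

```python
import torch.nn.functional as F

def encode(model, tokenizer, sentences):
    """L2-normalized [CLS] representation from the last transformer block."""
    batch = tokenizer(sentences, padding=True, truncation=True,
                      max_length=128, return_tensors="pt")
    hidden = model(**batch).last_hidden_state     # (B, T, 768)
    cls = hidden[:, 0]                            # [CLS] is the first token
    # During training the normalized embedding is additionally multiplied by
    # the scaling factor of 10 before the softmax (see the note above).
    return F.normalize(cls, p=2, dim=-1)
```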
"We observe that additional training did not change the performance significantly.",
"The default margin value for additive margin softmax is set to 0.3.",
"Hyperparameters are tuned on a held-out development set.",
"Cross-lingual embedding models trained with in-batch negative samples benefit from large training batch sizes (Guo et al., 2018).",
"Resource-intensive models like BERT are limited to small batch sizes due to memory constraints.",
"While data parallelism does allow us to increase the global batch size by using multiple accelerators, the batch size on individual cores remains small.",
"For example, a 4096 batch run across 32 cores results in a local batch size of 128, with each example then only receiving 127 negatives.",
"We address this with cross-accelerator negative sampling, which is illustrated in Figure 3 and sketched in code below.",
"(While our experiments use TPU accelerators, the same strategy can also be applied to models trained on GPUs.)",
"Under this strategy, each core encodes its assigned sentences, and then the encoded sentence representations from all cores are broadcast as negatives to the other cores.",
"This allows us to fully realize the benefits of larger batch sizes while still distributing the computationally intensive encoding work across multiple cores.",
"Note that the dot-product scoring function makes it efficient to compute the pairwise scores within a batch using matrix multiplication.",
"In Figure 3, the values in the grid indicate the ground-truth labels, with all positive labels located on the diagonal.",
"A softmax function is applied to each row.",
"The encoder is pre-trained with Masked Language Model (MLM) (Devlin et al., 2019) and Translation Language Model (TLM) (Conneau and Lample, 2019) training on the monolingual data and the bilingual translation pairs, respectively.",
"For an L-layer transformer encoder, we train using a 3-stage progressive stacking algorithm (Gong et al., 2019), where we first learn an L/4-layer model, then L/2 layers, and finally all L layers.",
"The parameters of the models learned in the earlier stages are copied to the models of the subsequent stages.",
"Pre-training uses TPUv3 with 512 cores and a batch size of 8192.",
"The maximum sequence length is set to 512, and 20% of tokens (or 80 tokens at most) per sequence are masked for MLM and TLM predictions.",
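A minimal sketch of the cross-accelerator negative sampling strategy described above, written with torch.distributed for concreteness (the paper runs on TPUs; note also that a real implementation must let gradients flow through the gathered negatives, which a plain all_gather does not provide):

```python
import torch
import torch.distributed as dist
import torch.nn.functional as F

def cross_accelerator_loss(local_src, local_tgt, margin=0.3, scale=10.0):
    """Each core encodes its local shard; target encodings from all cores
    are gathered so every source sees (global_batch - 1) negatives."""
    world = dist.get_world_size()
    gathered = [torch.zeros_like(local_tgt) for _ in range(world)]
    dist.all_gather(gathered, local_tgt)               # broadcast encodings
    all_tgt = torch.cat(gathered, dim=0)               # (world * B, d)

    phi = local_src @ all_tgt.T                        # (B, world * B)
    offset = dist.get_rank() * local_src.size(0)       # this core's diagonal block
    labels = offset + torch.arange(local_src.size(0), device=phi.device)
    phi[torch.arange(phi.size(0)), labels] -= margin   # additive margin
    return F.cross_entropy(scale * phi, labels)
```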
2020).", "BUCC is a parallel sentence mining shared task (Zweigenbaum et al., 2018).", "We use the 2018 shared task data, containing four language pairs: fren, de-en, ru-en and zh-en.", "For each pair, the task provides monolingual corpora and gold true translation pairs.", "The task is to extract translation pairs from the monolingual data, which are evaluated against the ground truth using F1.", "Since the ground truth for the BUCC test data is not released, we follow prior work using the BUCC training set for evaluation rather than training (Yang et al., 2019b; Hu et al., 2020).", "Sentence embedding cosine similarity is used to identify the translation pairs.", "15 4.2 Downstream Classification We also evaluate the transfer performance of multilingual sentence embeddings on downstream classification tasks from the SentEval benchmark (Con-neau and Kiela, 2018).", "We evaluate on select tasks from SentEval including: ( MR ) movie reviews (Pang and Lee, 2005)), ( SST ) sentiment 14 About 9.5 million after de-duping.", "15 Reranking models can further improve performance (e.g. margin based scorers (Artetxe and Schwenk, 2019a) and BERT based classifiers (Yang et al., 2019a)).", "However, this is tangential to assessing the raw embedding retrieval performance.", "analysis (Socher et al., 2013), ( TREC ) question-type (Voorhees and Tice, 2000), ( CR ) product reviews (Hu and Liu, 2004), ( SUBJ ) subjectiv-ity/objectivity (Pang and Lee, 2004), ( MPQA ) opinion polarity (Wiebe et al., 2005), and ( MRPC ) paraphrasing detection (Dolan et al., 2004).", "While SentEval is English only, we make use of this benchmark in order to directly compare to prior work on sentence embedding models.", "Table 2 shows the performance on the UN and Tatoeba bitext retrieval tasks and compares against the prior state-of-the-art bilingual models Yang et al. 
"Table 2 shows the performance on the UN and Tatoeba bitext retrieval tasks and compares against the prior state-of-the-art bilingual models of Yang et al. (2019a), LASER (Artetxe and Schwenk, 2019b), and the multilingual universal sentence encoder (m-USE; universal-sentence-encoder-multilingual-large/3) (Yang et al., 2019b).",
"Rows 1-3 show the performance of the baseline models, as reported in the original papers.",
"Rows 4-7 show the performance of models that use the public mBERT vocabulary.",
"The baseline model shows reasonable performance on UN, ranging from 57%-71% P@1.",
"It also performs well on Tatoeba, with 92.8% and 79.1% accuracy for the 36-language group and all languages, respectively.",
"Adding pre-training both helps models converge faster (see details in Section 6.2) and improves performance on the UN retrieval task using both vocabularies.",
"Pre-training also helps on Tatoeba, but only with the customized vocabulary.",
"(The coverage of the public mBERT vocabulary on the tail languages is poor, with many [UNK] tokens for such languages; e.g., the [UNK] token rate is 71% for the language si, which could be the reason pre-training doesn't help on the Tatoeba task.)",
"Additive margin softmax significantly improves the performance of all model variations.",
"The last two rows contain models using the customized vocabulary.",
"Both of them are trained with additive margin softmax, given the strong evidence from the experiments above.",
"Both models outperform the mBERT-vocabulary-based models, and the pre-trained model performs best of all.",
"The top model (Base w/ Customized Vocab + AMS + PT) achieves a new state-of-the-art on 3 of the 4 languages, with P@1 of 91.1, 88.3, and 90.8 for en-es, en-fr, and en-ru, respectively.",
"It reaches 87.7 on zh-en, only 0.2 lower than the best bilingual en-zh model and nearly 9 points better than the previous best multilingual model.",
"On Tatoeba, the best model also outperforms the baseline model by a large margin, with +10.6 accuracy on the 36-language group from XTREME and +18.2 on all languages.",
"It is worth noting that all our models perform similarly on Tatoeba but not on UN.",
"This suggests it is necessary to evaluate on large-scale bitext retrieval tasks to better discern differences between competing models.",
"For the rest of the paper we refer to the best-performing model here, Base w/ Customized Vocab + AMS + PT, as LaBSE, unless otherwise specified.",
"Table 3 provides LaBSE's retrieval performance on BUCC, comparing against strong baselines from Artetxe and Schwenk (2019a) and Yang et al. (2019a).",
"Following prior work, we perform both forward and backward retrieval.",
"Forward retrieval treats en as the target and the other language as the source, and backward retrieval is vice versa.",
"LaBSE not only systematically outperforms prior work but also covers all languages within a single model.",
"The previous state-of-the-art required four separate bilingual models (Yang et al., 2019a).",
"Table 4 gives the transfer performance achieved by LaBSE on the SentEval benchmark (Conneau and Kiela, 2018), comparing against other state-of-the-art sentence embedding models.",
"Despite its massive language coverage in a single model, LaBSE still obtains competitive transfer performance compared with monolingual English sentence embedding models and the 16-language m-USE model.",
"The above experiments show that additive margin softmax is a critical factor in learning good cross-lingual embeddings, which is aligned with the findings of Yang et al. (2019a).",
(2019a).", "We further investi-0.1 0.0 0.1 0.2 0.3 0.4 0.5 Margin value 50 60 70 80 90 100 P @ 1 ( % ) UN P@1 (Averaged) with different margin value Base w/ mBERT vocab + AMS Base w/ mBERT vovab + AMS + PT Base w/ Customized vocab + AMS + PT (LaBSE) Figure 4: Average P@1 (%) on UN retrieval task of models trained with different margin values.", "gate the effect of margin size on our three model variations, as shown in figure", "4. The model with an additive margin value 0 performs poorly on the UN task with 60 average P@1 across all three model variations.", "With a small margin value of 0.1, the model improves significantly compare to no margin with mid 70 to mid 80 average P@1.", "Consistently across models, increasing the margin value improves performance until it reaches 0.3.", "To better understand the effect of MLM/TLM pretraining on the final LaBSE model, we explore training a variant of this model using our customized vocab but without pre-training.", "The results are shown in figure", "5. We experiment with varying the number of training steps for both models, including: 50k, 100K, 200K, and 500K steps.", "A model with pre-trained encoders achieves excellent performance when trained for only 50K steps and further training doesn't increase the performance significantly.", "However, the model without pre-training performs poorly when only trained 50k steps.", "Its performance increases with additional steps and approaches the model with pre-training at 500k steps.", "The overall performance is, how-883 Models fr-en de-en ru-en zh-en P R F P R F P R F P R FF o r w a r d Artetxe and Schwenk (2019a) 82.1 74.2 78.0 78.9 75.1 77.0 ---Yang et al. (2019a) 86.7 85.6 86.1 90.3 88.0 89.2 84.6 91.1 87.7 86.7 90.9 88.8 LaBSE 86.6 90.9 88.7 92.3 92.7 92.5 86.1 91.9 88.9 88.2 89.7 88.9 B ac k w a r d Artetxe and Schwenk (2019a) 77.2 72.7 74.7 79.0 73.1 75.9 ---Yang et al. 
"Moreover, further training past 500k steps doesn't increase the performance significantly.",
"Pre-training thus both improves performance and dramatically reduces the amount of parallel data required.",
"Critically, the model sees 1B examples at 500k steps, while the 50k model only sees 200M examples.",
"(We note that it is relatively easy to get 200M parallel examples for many languages from public sources like ParaCrawl and TED58, while obtaining 1B examples is generally much more challenging.)",
"Low-Resource Languages and Languages without Explicit Training Data.",
"We evaluate performance through further experiments on Tatoeba, both for comparison to prior work and to identify broader trends.",
"Besides the 36-language group and the all-languages group, two more groups of 14 languages (selected from the languages covered by m-USE) and 82 languages (covered by the LASER training data) are evaluated.",
"Table 5 provides the macro-average accuracy achieved by LaBSE on the four language groupings drawn from Tatoeba, comparing against LASER and m-USE.",
"[Table 5: Macro-average accuracy (%) on Tatoeba for the 14-, 36-, 82-, and all-languages groups, comparing m-USE, LASER, and LaBSE.]",
"All three models perform well on the 14 major languages supported by m-USE, with each model achieving an average accuracy > 93%.",
"Both LaBSE and LASER perform moderately better than m-USE, with an accuracy of 95.3%.",
"As more languages are included, the averaged accuracy of both LaBSE and LASER decreases, but with a notably more rapid decline for LASER.",
"LaBSE systematically outperforms LASER on the groups of 36 languages (+10.6%), 82 languages (+11.4%), and 112 languages (+18.2%).",
"Figure 6 provides the Tatoeba accuracies for languages where we don't have any explicit training data.",
"There are a total of 30+ such languages.",
"The performance is surprisingly good for most of these languages, with an average accuracy around 60%.",
"Nearly one third of them have accuracy greater than 75%, and only 7 of them have accuracy lower than 25%.",
"One possible reason is that language mapping is done manually, and some languages are close to languages with training data but are treated as distinct according to ISO-639 standards.",
"Additionally, since automatic language detection is used, some limited amount of data for the missing languages might be included during training.",
"We suspect that the well-performing zero-shot languages are close to some language(s) that we have in the training data.",
"For example, yue and wuu are related to zh (Chinese), and fo has similarities to is (Icelandic).",
"Multilingual generalization across so many languages is only possible due to the massively multilingual nature of the model.",
"The Semantic Textual Similarity (STS) benchmark (Cer et al., 2017) measures the ability of models to replicate fine-grained human judgements of pairwise English sentence similarity.",
"Models are scored according to their Pearson correlation, r, with gold labels ranging from 0, unrelated meaning, to 5, semantically equivalent, with intermediate values capturing carefully defined degrees of meaning overlap.",
"STS is used to evaluate the quality of sentence-level embeddings by assessing the degree to which similarity between pairs of sentence embeddings aligns with human perception of sentence meaning similarity.",
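Scoring a sentence pair via arccosine distance can be done as below; mapping the angle into a similarity in [0, 1] is our assumption about the exact form, as the paper only names the distance:

```python
import math
import torch

def arccos_similarity(a, b):
    """Angular similarity of two unit-norm embeddings: 1 - arccos(cos)/pi."""
    cos = (a * b).sum(dim=-1).clamp(-1.0, 1.0)
    return 1.0 - torch.acos(cos) / math.pi
```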
"Table 6 reports performance on the STS benchmark for LaBSE versus existing sentence embedding models.",
"Following prior work, the semantic similarity of a sentence pair according to LaBSE is computed from the arccosine distance between the pair's sentence embeddings.",
"(Within prior work, m-USE, USE and ConvEmbed use arccos distance to measure embedding-space semantic similarity, while InferSent and SentenceBERT use cosine similarity.)",
"For comparison, we include numbers for SentenceBERT when it is fine-tuned on the STS task, as well as for ConvEmbed when an additional affine transform is trained to fit the embeddings to STS.",
"We observe that LaBSE performs worse on pairwise English semantic similarity than other sentence embedding models.",
"We suspect that training LaBSE on translation pairs biases the model to excel at detecting meaning equivalence, but not at distinguishing between fine-grained degrees of meaning overlap.",
"Recently, Reimers and Gurevych (2020) demonstrated that an English sentence embedding model can be distilled into a multilingual student model using a language alignment loss.",
"The distilled model performs well on multilingual STS benchmarks, but underperforms on bitext retrieval tasks when compared to state-of-the-art models.",
"Our approach is complementary and can be combined with their method to distill better student models.",
"We use the LaBSE model to mine parallel text from CommonCrawl, a large-scale multilingual web corpus, and then train NMT models on the mined data.",
"We experiment with two language pairs: English-to-Chinese (en-zh) and English-to-German (en-de).",
"We mine translations from the monolingual CommonCrawl data processed as described above for self-supervised MLM pre-training.",
"After processing, there are 1.17B, 0.6B, and 7.73B sentences for Chinese (zh), German (de), and English (en), respectively.",
"LaBSE embeddings are used to pair each non-English sentence with its nearest English neighbor, dropping pairs with a similarity score < 0.6.",
"(The threshold 0.6 was selected by manually inspecting sampled data; we found pairs at or above this threshold are likely to be translations or partial translations of each other. This results in 715M and 302M sentence pairs for en-zh and en-de, respectively. Note that the pairs may still be noisy, which is why we perform additional filtering before training the NMT models (Wang et al., 2018).)",
"For en-de and en-zh, we train a Transformer-Big model (Vaswani et al., 2017) in the following way: first we train the model on the mined data as-is for 120k steps with batch size 10k; then we select the best 20% using Wang et al. (2018)'s data selection method and train for another 80k steps.",
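Pairing each non-English sentence with its nearest English neighbour at this scale requires an indexed search; the sketch below uses FAISS with an exact inner-product index purely for illustration (the paper does not name its search infrastructure, and billions of sentences would in practice call for an approximate index):

```python
import faiss
import numpy as np

def mine_nearest_english(non_en_emb, en_emb, threshold=0.6):
    """Nearest English neighbour by inner product (= cosine for unit-norm
    embeddings); pairs scoring below the 0.6 threshold are dropped."""
    index = faiss.IndexFlatIP(en_emb.shape[1])
    index.add(en_emb.astype(np.float32))
    scores, ids = index.search(non_en_emb.astype(np.float32), 1)
    keep = scores[:, 0] >= threshold
    return np.flatnonzero(keep), ids[keep, 0], scores[keep, 0]
```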
"Results in Table 7 show the effectiveness of the mined training data.",
"By referencing previous results (Edunov et al., 2018), we see that the model using the mined en-de data is only 2.8 BLEU away from the best system that made use of the official WMT17 en-de parallel data.",
"Compared to prior en-zh results (Sennrich et al., 2017), we see that our model using the mined en-zh training data is as good as a WMT17 NMT model trained on the official WMT en-zh parallel data.",
"The table also gives BLEU performance on the TED test set (Qi et al., 2018), with models trained on our mined data performing comparably to models trained using CCMatrix (Schwenk et al., 2019).",
"(CCMatrix is another dataset containing billions of parallel sentences mined from CommonCrawl using an embedding-based mining approach, with an additional cleaning step.)",
"Conclusion.",
"This paper presents a language-agnostic BERT sentence embedding (LaBSE) model supporting 109 languages.",
"The model achieves state-of-the-art performance on various bitext retrieval/mining tasks compared to the previous state-of-the-art, while also providing increased language coverage.",
"We show the model performs strongly even on languages for which LaBSE has no explicit training data, likely due to language similarity and the massively multilingual nature of the model.",
"Extensive experiments show that additive margin softmax is a key factor in training the model, and that parallel data quantity matters, but the effect of increased amounts of parallel data diminishes when a pre-trained language model is used.",
"The pre-trained model is released at https://tfhub.dev/google/LaBSE .",
"We thank our teammates from Descartes, Translate and other Google groups for their feedback and suggestions.",
"Special thanks goes to Sidharth Mudgal and Jax Law for help with data processing, as well as Jialu Liu, Tianqi Liu, Chen Chen, and Anosh Raj for help with BERT pretraining." ]
[ "abstain", "abstain", "result", "abstain", "result", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "result", "result", "method", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "other" ]
[ "Parallel cross-lingual summarization data is scarce, requiring models to better use the limited available cross-lingual resources.", "Existing methods to do so often adopt sequence-to-sequence networks with multi-task frameworks.", "Such approaches apply multiple decoders, each of which is utilized for a specific task.", "However, these independent decoders share no parameters, hence fail to capture the relationships between the discrete phrases of summaries in different languages, breaking the connections in order to transfer the knowledge of the high-resource languages to low-resource languages.", "To bridge these connections, we propose a novel Multi-Task framework for Cross-Lingual Abstractive Summarization (MCLAS) in a low-resource setting.", "Employing one unified decoder to generate the sequential concatenation of monolingual and cross-lingual summaries, MCLAS makes the monolingual summarization task a prerequisite of the cross-lingual summarization (CLS) task.", "In this way, the shared decoder learns interactions involving alignments and summary patterns across languages, which encourages attaining knowledge transfer.", "Experiments on two CLS datasets demonstrate that our model significantly outperforms three baseline models in both low-resource and full-dataset scenarios.", "Moreover, in-depth analysis on the generated summaries and attention heads ver-ifies that interactions are learned well using MCLAS, which benefits the CLS task under limited parallel resources.", "Cross-lingual summarization (CLS) helps people efficiently grasp salient information from articles in a foreign language.", "Neural approaches to CLS require large scale datasets containing millions of cross-lingual document-summary pairs (Zhu et al., Corresponding author. Figure 1: An example of the alignments across summaries in different languages. Each color represents phrases with one specific meaning. 
"However, two challenges arise with these approaches: 1) most languages are low-resource, thereby lacking document-summary paired data; 2) large parallel datasets across different languages for neural-based CLS are rare and expensive, especially under the current trend of neural networks.",
"Therefore, a low-resource setting is a more realistic, and challenging, one for cross-lingual summarization.",
"To the best of our knowledge, cross-lingual summarization under low-resource settings has not been well investigated and explored.",
"Therefore, in this paper, we develop a new model for cross-lingual abstractive summarization under limited supervision.",
"For low-resource settings, multi-task learning has been shown to be an effective method, since it can borrow useful knowledge from other relevant tasks for the target task (Yan et al., 2015; Wang et al., 2020; Motiian et al., 2017).",
"Cross-lingual summarization can be viewed as the combination of two tasks, i.e., monolingual summarization (MS) and cross-lingual translation (Zhu et al., 2019).",
"A wealth of relationships exists across the target summaries of the MS and CLS tasks, such as translation alignments and summarization patterns.",
"As illustrated in Figure 1, the Chinese phrase meaning Syria is mapped to Syria, and similar mappings hold for the other aligned phrases.",
"Obviously, leveraging these relationships is crucial for transferring summarization knowledge from high-resource languages to low-resource languages.",
"Unfortunately, existing multi-task frameworks simply utilize independent decoders to conduct the MS and CLS tasks separately (Zhu et al., 2019; Cao et al., 2020), which leads to failure in capturing these relationships.",
"To solve this problem, we establish reliant connections between the MS and CLS tasks, making the monolingual task a prerequisite for the cross-lingual task.",
"Specifically, one decoder is shared by both the MS and CLS tasks; this is done by setting the generation target as the sequential concatenation of a monolingual summary and the corresponding cross-lingual summary.",
"Sequentially generating monolingual and cross-lingual summaries, the decoder also conducts the translation task between them, which enhances the interactions between different languages.",
"These interactions implicitly involve translation alignments, similarity in semantic units, and summary patterns across different lingual summaries.",
"To demonstrate these decoder interactions, we further visualize them by probing Transformer attention heads in the model.",
"Based on this process, the new structure with these advanced interactions enhances low-resource scenarios, which require the model to be capable of transferring summary knowledge from high-resource languages to a low-resource language.",
"We name our model Multi-task Cross-Lingual Abstractive Summarization (MCLAS) under limited resources.",
"In terms of a training strategy under limited resources, we first pre-train MCLAS on large-scale monolingual document-summary parallel datasets to well-equip the decoder with general summary capability.",
"Given a small amount of parallel cross-lingual summary samples, the model is then fine-tuned and is able to transfer the learned summary capability to the low-resource language, leveraging the interactions uncovered by the shared decoder.",
"Experiments on Zh2EnSum (Zhu et al., 2019) and a newly developed En2DeSum dataset demonstrate that MCLAS offers significant improvements when compared with state-of-the-art cross-lingual summarization models in both low-resource scenarios and the full-dataset scenario.",
"At the same time, we also achieved competitive performance on the En2ZhSum dataset (Zhu et al., 2019).",
"Human evaluation results show that MCLAS produces more fluent, concise and informative summaries than the baseline models under limited parallel resources.",
"In addition, we analyzed the length of the generated summaries and the success of monolingual generation to verify the advantages offered by identifying interactions between languages.",
"We further investigate the explainability of the proposed multi-task structure by probing the attention heads in the unified decoder, showing that MCLAS learns the alignments and interactions between two languages, and that this facilitates translation and summarization in the decoder stage.",
"Our analysis provides a clear explanation of why MCLAS is capable of supporting CLS under limited resources.",
"Our implementation and data are available at https://github.com/WoodenWhite/MCLAS .",
"Recently, cross-lingual summarization has received attention in research due to the increasing demand to produce cross-lingual information.",
"Traditional CLS systems are based on a pipeline paradigm (Wan et al., 2010; Wan, 2011; Zhang et al., 2016).",
"These pipeline systems first translate the document and then summarize it, or vice versa.",
"Shen et al. (2018) propose the use of pseudo summaries to train a cross-lingual abstractive summarization model.",
"In contrast, Duan et al. (2019a) and Ouyang et al. (2019) generate pseudo sources to construct cross-lingual summarization datasets.",
"The first large-scale cross-lingual summarization datasets were acquired through a round-trip translation strategy (Zhu et al., 2019).",
"Additionally, Zhu et al. (2019) propose a multi-task framework to improve their cross-lingual summarization system.",
"Following Zhu et al. (2019), more methods have been proposed to improve the CLS task.",
"Zhu et al. (2020) use a pointer-generator network to exploit the translation patterns in cross-lingual summarization.",
"Cao et al. (2020) utilize two encoders and two decoders to jointly learn to align and summarize.",
"In contrast to previous methods, MCLAS generates the concatenation of monolingual and cross-lingual summaries, thereby modeling the relationships between them.",
"Natural language generation (NLG) for low-resource languages or domains has attracted a lot of attention.",
"Gu et al. (2018) leverage meta-learning to improve low-resource neural machine translation.",
"Meanwhile, many pretrained NLG models have been proposed and adapted to low-resource scenarios (Song et al., 2019; Chi et al., 2020; Radford et al., 2019; Zhang et al., 2019a).",
"However, these models require large-scale pretraining.",
"Our work does not require any large pretrained generation or translation models, enabling a vital decrease in training cost.",
"Given a source document $D^A = \{x_1^A, x_2^A, \ldots, x_m^A\}$ in language $A$, a monolingual summarization system converts the source into a summary $S^A = \{y_1^A, y_2^A, \ldots, y_n^A\}$, where $m$ and $n$ are the lengths of $D^A$ and $S^A$, respectively.",
"A cross-lingual summarization system produces a summary $S^B = \{y_1^B, y_2^B, \ldots, y_{n'}^B\}$ consisting of tokens $y^B$ in target language $B$, where $n'$ is the length of $S^B$.",
"Note that the mentioned $x^A$, $y^A$, and $y^B$ are all tokens.",
"Zhu et al. (2019) propose using the Transformer (Vaswani et al., 2017) to conduct cross-lingual summarization tasks.",
"The Transformer is composed of stacked encoder and decoder layers.",
"The encoder layer is comprised of a self-attention layer and a feed-forward layer.",
"The decoder layer shares the same architecture as the encoder except for an extra encoder-decoder attention layer, which performs multi-head attention over the output of the stacked encoder layers.",
"The whole Transformer model is trained to maximize the conditional probability of the target sequence $S^B$ as follows: $\mathcal{L}_{NCLS} = \sum_{t=1}^{n'} \log P(y_t^B \mid y_{<t}^B, D^A)$ (1)",
"Improving NCLS with Multi-Task Frameworks.",
"Considering the relationship between CLS and MS, in which they share the same goal of summarizing the important information in a document, Zhu et al. (2019) proposed employing a one-to-many multi-task framework to enhance the basic Transformer model.",
"In this framework, one encoder is employed to encode the source document $D^A$.",
"Two separate decoders simultaneously generate a monolingual summary $S^A$ and a cross-lingual summary $S^B$, leading to the following loss: $\mathcal{L}_{NCLS+MS} = \sum_{t=1}^{n} \log P(y_t^A \mid y_{<t}^A, D^A) + \sum_{t=1}^{n'} \log P(y_t^B \mid y_{<t}^B, D^A)$ (2)",
"[Figure 2: An overview of our proposed MCLAS.]",
"This multi-task framework shares the encoder representation to enhance cross-lingual summarization.",
"However, the independent decoders in this model are incapable of establishing alignments and connections between cross-lingual summaries.",
"To strengthen the connections mentioned, we propose making the monolingual task a prerequisite for the cross-lingual task through modeling interactions.",
"According to previous work (Wan et al., 2010; Yao et al., 2015; Zhang et al., 2016), interactions between cross-lingual summaries (important phrase alignments, sentence lengths, summary patterns, etc.) are crucial for the final summary's quality.",
"We leverage these interactions to further transfer rich-resource language knowledge.",
"Detailed descriptions of this step are presented in the following sections.",
"To model interactions between languages, we need to share the decoder's parameters.",
"Inspired by Dong et al. (2019), we propose sharing the whole decoder to carry out both the translation and the summarization tasks.",
"Specifically, we substitute the generation target $S^A$ with the sequential concatenation of $S^A$ and $S^B$: $S^{AB} = \{[\mathrm{BOS}], y_1^A, y_2^A, \ldots, y_n^A, [\mathrm{LSEP}], y_1^B, y_2^B, \ldots, y_{n'}^B, [\mathrm{EOS}]\}$ (3)",
"where $[\mathrm{BOS}]$ and $[\mathrm{EOS}]$ are the beginning and end tokens of the output summaries, respectively, and $[\mathrm{LSEP}]$ is the special token used as the separator of $S^A$ and $S^B$.",
"With the new generation target, the decoder learns to first generate $S^A$, and then generate $S^B$ conditioned on $S^A$ and $D^A$.",
"The whole generation process is illustrated in Figure 2.",
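Constructing the target of Equation (3) is a plain concatenation; a minimal sketch (the special-token spellings follow the paper, the token strings in the usage example are invented):

```python
def build_mclas_target(mono_summary, cross_summary,
                       bos="[BOS]", lsep="[LSEP]", eos="[EOS]"):
    """Concatenated target S^{AB} = [BOS] S^A [LSEP] S^B [EOS]."""
    return [bos] + list(mono_summary) + [lsep] + list(cross_summary) + [eos]

# e.g. build_mclas_target(["two", "cars", "collided"], ["zwei", "Autos", "kollidierten"])
```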
"Formally, we maximize the joint probability of monolingual and cross-lingual summarization: $\mathcal{L}_{MCLAS} = \sum_{t=1}^{n} \log P(y_t^A \mid y_{<t}^A, D^A) + \sum_{t=1}^{n'} \log P(y_t^B \mid y_{<t}^B, S^A, D^A)$ (4)",
"The loss function can be divided into two terms.",
"When generating $S^A$, the decoder conducts the MS task based on $D^A$, corresponding to the first term in Equation (4).",
"When generating $S^B$, the decoder already knows the information of the corresponding monolingual summary.",
"In this way, it performs the translation task (for $S^A$) and the CLS task (for $D^A$), achieved by optimizing the second term in Equation (4).",
"With the modification of the target, our model can easily capture the interactions between cross-lingual summaries.",
"The trained model shows effectiveness in aligning the summaries.",
"Not only the output tokens, but also the attention distributions are aligned.",
"The model we designed leverages this phenomenon to enable monolingual knowledge to be transferred under low-resource scenarios.",
"A detailed investigation is presented in Section 6.",
"We adopt the Transformer as our base model.",
"In addition, we use multilingual BERT (Devlin et al., 2019) to initialize the encoder, improving its ability to produce multilingual representations.",
"Additionally, having tried many different position embedding and language segmentation embedding methods, we find that $[\mathrm{LSEP}]$ is enough for the model to distinguish whether it is generating $S^B$.",
"Hence, keeping the original position embedding (Vaswani et al., 2017) and employing no segmentation embedding is best for performance and efficiency.",
"Since our proposed framework enforces interactions between multilingual summaries, it brings further benefits in the low-resource scenario, where only a few training summary samples are available in the cross-language.",
"Yet simply training from scratch cannot make the best of our proposed model in low-resource scenarios.",
"Hence, we use a pre-training and fine-tuning paradigm to transfer rich-resource language knowledge.",
"First, we train the model on a monolingual summarization dataset.",
"In this step, the model learns how to produce a monolingual summary for a given document.",
"Then, we jointly learn MS and CLS with the few available training samples, optimizing Equation (4).",
"We adopt similar initialization to existing CLS methods, which is introduced in Section 5.3.",
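A single cross-entropy over the concatenated target already optimizes Equation (4); the sketch below additionally splits the per-token loss at [LSEP] so the monolingual and cross-lingual terms can be inspected separately (tensor shapes and the pad/[LSEP] ids are assumptions, not the paper's code):

```python
import torch
import torch.nn.functional as F

def mclas_loss(logits, target, lsep_id, pad_id):
    """logits: (B, T, V) decoder outputs; target: (B, T) concatenated S^{AB}."""
    vocab = logits.size(-1)
    nll = F.cross_entropy(logits.view(-1, vocab), target.view(-1),
                          ignore_index=pad_id, reduction="none")
    nll = nll.view_as(target.float())
    # Tokens at and after [LSEP] belong to the cross-lingual term.
    cross_mask = (target == lsep_id).cumsum(dim=-1).bool()
    mono_loss = nll[~cross_mask].sum()     # first term of Equation (4)
    cross_loss = nll[cross_mask].sum()     # second term of Equation (4)
    return mono_loss + cross_loss
```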
(2019).", "We use WMT'19 English-German winner 2 as our translation model to process the English Gigaword dataset.", "3 We set the threshold T 1 = 0 .", "6 and T 2 = 0 .", "2 .", "The final En2DeSum contains 429,393 training samples, 4,305 validation samples, and 4,099 testing samples.", "All the training samples contain a source document, a monolingual summary, and a cross-lingual summary.", "For the full-dataset scenario, we train the model with the whole dataset.", "For low-resource scenarios, we randomly select 3 different amounts 1 www.nlpr.ia.ac.cn/cip/dataset.htm 2 https://github.com/pytorch/fairseq/ tree/master/examples/translation 3 LDC2011T07 (minimum, medium, and maximum) of training samples for all datasets to evaluate our model's performance under low-resource scenarios.", "Detailed numbers are presented in Table", "1. 5.2 Training and Inference We use multilingual BERT (mBERT) (Devlin et al., 2019) to initialize our Transformer encoder.", "The decoder is a Transformer decoder with 6 layers.", "Each attention module has 8 different attention heads.", "The hidden size of the decoder's self-attention is 768 and that of the feed-forward network is 2048.", "The final model contains 296,046,231 parameters.", "Because the encoder is pretrained when the decoder is randomly initialized, we use two separate optimizers for the encoder and the decoder (Liu and Lapata, 2019).", "The encoder's learning rate e is set as 0.005, while the decoder's learning rate d is 0.2.", "Warmup-steps for the encoder are 10,000 and 5,000 for the decoder.", "We train the model on two TITAN RTX GPUs for one day with gradient accumulation every 5 steps.", "Dropout with a probability 0.1 is applied before all the linear layers.", "We find that the target vocabulary type doesn't have much influence on the final result.", "Therefore, we directly use mBERT's subwords vocabulary as our target vocabulary.", "Nevertheless, in case tokens would be produced in the wrong language, we constructe a target token vocabulary for each target language.", "In the inference period, we only generate tokens from the corresponding vocabulary.", "During the decoding stage, we use beam search (size 5) and trigram block to avoid repetition.", "Length penalty is set between 0.6 and", "1. All the hyperparameters are manually tuned using PPL and accuracy metric on the validation set.", "NCLS CLS model proposed by Zhu et al. (2019).", "In low-resource scenarios, we initialize our model with the pretrained MS model and then use a few samples to optimize Equation (1).", "NCLS+MS Multi-task framework proposed by Zhu et al. (2019).", "We find that NCLS+MS fails to converge when it is partly initialized by the pretrained MS model (the CLS decoder is randomly initialized).", "Hence, we fully initialize the multitask model using the pretrained MS model.", "Specifically, the two separate decoders are both initialized by the pretrained monolingual decoder.", "TLTran Transformer-based Late Translation is a pipeline method.", "First, a monolingual summarization model summarizes the source document.", "A translation model is then applied to translate the summary.", "The summarization model is trained with monolingual document-summary pairs in three datasets.", "Specifically, we continue using WMT'19 English-German winner as the translation model for En2DeSum.", "Some recent proposed models improve the performance of CLS task.", "Methods NCLS+MT , TETran (Zhu et al., 2019), and the system proposed by Ouyang et al. 
"NCLS: the CLS model proposed by Zhu et al. (2019).",
"In low-resource scenarios, we initialize this model with the pretrained MS model and then use the few available samples to optimize Equation (1).",
"NCLS+MS: the multi-task framework proposed by Zhu et al. (2019).",
"We find that NCLS+MS fails to converge when it is only partly initialized by the pretrained MS model (i.e., with the CLS decoder randomly initialized).",
"Hence, we fully initialize the multi-task model using the pretrained MS model.",
"Specifically, the two separate decoders are both initialized by the pretrained monolingual decoder.",
"TLTran: Transformer-based Late Translation, a pipeline method.",
"First, a monolingual summarization model summarizes the source document.",
"A translation model is then applied to translate the summary.",
"The summarization model is trained with the monolingual document-summary pairs in the three datasets.",
"Specifically, we continue using the WMT'19 English-German winner as the translation model for En2DeSum.",
"Some recently proposed models improve the performance of the CLS task.",
"The methods NCLS+MT and TETran (Zhu et al., 2019), and the system proposed by Ouyang et al. (2019), require external long-document machine translation (MT) corpora.",
"The method proposed by Cao et al. (2020) requires not only parallel summaries but also document pairs translated by MT systems.",
"Another method, proposed by Zhu et al. (2020), requires bilingual lexicons extracted from large parallel MT datasets (2.08M sentence pairs from eight LDC corpora).",
"We choose not to use these models as baselines, since comparing MCLAS with them would be unfair.",
"The overall results under the low-resource scenarios and the full-dataset scenario are shown in Table 2.",
"We reimplement a variety of models and evaluate them using F1 scores of the standard ROUGE metric (Lin, 2004) (ROUGE-1, ROUGE-2, and ROUGE-L) and BERTScore (Zhang et al., 2019b; https://github.com/Tiiiger/bert_score).",
"The following analysis is based on our observations.",
"On the Zh2EnSum and En2DeSum datasets, MCLAS achieves significant improvements over the baselines in all the low-resource scenarios.",
"It is worth noting that combining NCLS+MS in our experiments does not bring much improvement over the NCLS model.",
"We consider that this is because mBERT already provides multilingual encoding for our models.",
"However, we find that in the En2ZhSum dataset, MCLAS did not perform as well as in the other two datasets.",
"We speculate that this is due to the imbalance between the English and Chinese references.",
"The average lengths of $S^A$ and $S^B$ in En2ZhSum are 55.21 and 95.96, respectively (Zhu et al., 2019).",
"This condition largely breaks the alignment between languages, leading to MCLAS performing slightly weaker.",

Table 2: F1 scores of ROUGE (R-1 / R-2 / R-L) and BERTScore on Zh2EnSum, En2DeSum, and En2ZhSum.

| Scenario | Model | Zh2EnSum R-1/R-2/R-L/BERTScore | En2DeSum R-1/R-2/R-L/BERTScore | En2ZhSum R-1/R-2/R-L/BERTScore |
|---|---|---|---|---|
| Minimum low-resource | NCLS | 20.93 / 5.88 / 17.58 / 0.5041 | 17.59 / 5.01 / 16.58 / 0.7202 | 34.14 / 12.45 / 21.20 / 0.7096 |
| | NCLS+MS | 20.50 / 5.45 / 17.25 / 0.5025 | 17.52 / 5.27 / 16.57 / 0.7198 | 33.96 / 12.38 / 21.07 / 0.7102 |
| | MCLAS | 21.03 / 6.03 / 18.16 / 0.5023 | 19.19 / 5.91 / 18.43 / 0.7282 | 32.03 / 13.17 / 21.17 / 0.6529 |
| Medium low-resource | NCLS | 26.42 / 8.90 / 22.05 / 0.5373 | 23.55 / 8.09 / 22.13 / 0.7400 | 35.98 / 15.88 / 23.79 / 0.7298 |
| | NCLS+MS | 26.86 / 9.06 / 22.47 / 0.5377 | 23.60 / 8.35 / 22.14 / 0.7431 | 38.95 / 18.09 / 25.39 / 0.7172 |
| | MCLAS | 27.84 / 10.41 / 24.12 / 0.5464 | 27.22 / 10.09 / 26.00 / 0.7575 | 37.28 / 18.10 / 25.26 / 0.6839 |
| Maximum low-resource | NCLS | 29.05 / 10.88 / 24.32 / 0.5492 | 25.84 / 9.78 / 24.25 / 0.7483 | 40.18 / 19.86 / 26.52 / 0.7435 |
| | NCLS+MS | 28.63 / 10.63 / 24.00 / 0.5485 | 25.59 / 9.58 / 23.96 / 0.7484 | 39.86 / 19.87 / 26.64 / 0.7445 |
| | MCLAS | 30.73 / 12.26 / 26.51 / 0.5633 | 30.31 / 12.32 / 28.88 / 0.7682 | 38.35 / 19.75 / 26.41 / 0.6921 |
| Full dataset | TLTran | 33.64 / 15.58 / 29.74 / - | 28.57 / 13.31 / 26.34 / - | 30.20 / 12.20 / 27.02 / - |
| | NCLS | 35.60 / 16.78 / 30.27 / 0.5835 | 31.61 / 14.24 / 29.63 / 0.7680 | 44.16 / 24.28 / 30.23 / 0.7407 |
| | NCLS+MS | 34.84 / 16.05 / 29.47 / 0.5807 | 31.33 / 13.86 / 29.31 / 0.7675 | 42.68 / 23.51 / 29.24 / 0.7361 |
| | MCLAS | 35.65 / 16.97 / 31.14 / 0.5770 | 36.48 / 17.21 / 34.86 / 0.7897 | 42.27 / 24.60 / 30.09 / 0.7069 |

"Despite this, the results on En2DeSum and Zh2EnSum demonstrate that our proposed MCLAS model is effective for CLS under limited resources.",
"Finally, our proposed model also has superior performance compared to the baseline models given the full training dataset, achieving the best ROUGE scores on the En2DeSum and Zh2EnSum datasets.",
"In addition to the automatic evaluation, we conduct a human evaluation to verify our model's performance.",
"We randomly chose 60 examples (20 for each low-resource scenario) from the Zh2EnSum test dataset.",
"Seven graduate students with high levels of fluency in English and Chinese are asked to assess the generated summaries and gold summaries from independent perspectives: informativeness, fluency, and conciseness.",
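The Best-Worst Scaling scores described next reduce to simple counting; a sketch, under the assumption that every system's output appears in every judged example:

```python
from collections import Counter

def bws_scores(judgments):
    """judgments: list of (best_system, worst_system) annotations.
    Score = (#times best - #times worst) / #judgments, giving [-1, 1]."""
    judgments = list(judgments)
    best, worst = Counter(), Counter()
    for b, w in judgments:
        best[b] += 1
        worst[w] += 1
    n = len(judgments)
    return {s: (best[s] - worst[s]) / n for s in set(best) | set(worst)}
```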
"We follow the Best-Worst Scaling method (Kiritchenko and Mohammad, 2017).",
"Participants are asked to indicate the best and worst items from each perspective.",
"The result scores are calculated as the percentage of times each system is selected as best minus the percentage of times it is selected as worst.",
"Hence, final scores range from -1 (worst) to 1 (best).",
"Results are shown in Table 3.",
"As the data size increases, all the models achieve better results.",
"Our proposed MCLAS outperformed NCLS and NCLS+MS on all the metrics.",
"We notice that MCLAS is especially strong in conciseness.",
"This phenomenon is analyzed in Section 5.7.",
"We show the Fleiss' Kappa scores of our human evaluation in Table 4, which demonstrate good inter-annotator agreement among the participants.",
"We use a monolingual summarization model to initialize our model.",
"However, whether this initialization method works is still in question.",
"Therefore, we compare our models with non-initialized models, shown in Figure 3.",
"Across the three datasets, the initialization method brings a huge improvement to all of the models.",
"One of the goals of automatic summarization is to produce brief text.",
"Yet many neural auto-regressive models tend to produce longer summaries to improve the recall metric.",

Table 5: Target summary length generated by various models (difference from Gold in parentheses).

| Scenario | Model | En2DeSum | Zh2EnSum |
|---|---|---|---|
| Minimum low-resource | NCLS | 13.48 (+4.69) | 18.49 (+3.51) |
| | NCLS+MS | 12.83 (+4.04) | 18.68 (+3.70) |
| | MCLAS | 7.80 (-0.90) | 13.16 (-1.82) |
| Medium low-resource | NCLS | 13.13 (+4.34) | 18.60 (+3.62) |
| | NCLS+MS | 12.90 (+4.11) | 18.57 (+3.59) |
| | MCLAS | 8.65 (-0.14) | 13.10 (-1.88) |
| Maximum low-resource | NCLS | 13.37 (+4.58) | 18.44 (+3.46) |
| | NCLS+MS | 13.37 (+4.58) | 18.75 (+3.77) |
| | MCLAS | 8.46 (-0.33) | 12.83 (-2.15) |
| | Gold | 8.79 | 14.98 |

"Results in Table 5 show that the interactions enable MCLAS to generate shorter summaries than the other models, which more closely resemble human summaries.",
"We can safely conclude that MCLAS keeps summaries at a fairly appropriate length, leading to concise generated summaries.",
"We speculate that this is due to its ability to capture interactions between languages.",
"[Figure 4: An example of generated cross-lingual summaries.]",
"Modeling interactions between languages brings many advantages.",
"Specifically, we find that MCLAS can preserve more monolingual summarization knowledge than the NCLS+MS model during low-resource fine-tuning, or even promote its performance.",
"We generate monolingual summaries with the models trained in the maximum low-resource scenario.",
"In Table 6, we can clearly see that MCLAS retains more monolingual summarization knowledge on the Zh2EnSum dataset.",
"On the En2DeSum dataset, monolingual summarization performance is even significantly improved.",
"We speculate that this is due to MCLAS's ability to provide the interactions between languages.",
"We focus specifically on the results on En2DeSum, evaluating detailed ROUGE and average summary length, presented in Table 7.",
"We find that the ROUGE improvement mainly results from precision, while recall barely decreases.",
"This, together with the average length metric, shows that MCLAS produces more precise summaries while retaining most of the important information, leading to the increase in ROUGE.",
"Figure 4 compares, on the Zh2EnSum dataset, the reference summary and the outputs of the models trained in the maximum low-resource scenario.",
"Clearly, the NCLS model loses the information two cars and generates the wrong information No.2 factory.",
"The NCLS+MS model is not accurate when describing the number of injured people, dropping the important information more than.",
"Additionally, the NCLS+MS model also has fluency and repetition issues: in zhengzhou appears twice in its generated summary.",
"In contrast, MCLAS captures all of this information in both its Chinese and English output, and the English summary is well aligned with the Chinese summary.",
"Finally, all of the models ignore the information foxconn printed on the body of the car.",
"See Appendix A for more examples.",
"We have observed a successful alignment between $S^A$ and $S^B$ produced by our model in Section 5.9.",
"In this section, we dig into this and analyze how the model learns these relationships.",
"For a CLS task from document $D^A$ to $S^B$, our hypotheses are: (1) the unified decoder is implicitly undertaking translation from $S^A$ to $S^B$; (2) the unified decoder also conducts both monolingual and cross-lingual summarization.",
"To verify these hypotheses, we visualize the attention distributions of the Transformer decoders trained on En2ZhSum.",
"[Figure 6: Different types of encoder-decoder attention heads in MCLAS's decoder.]",
"Neural models can be explicitly explained by probing their attention heads (Michel et al., 2019; Voita et al., 2019).",
"We follow previous work and visualize the function of all attention heads in the decoder to verify the relationships of the concatenated cross-lingual summaries (i.e., translation) and the cross-lingual document-summary pairs (i.e., summarization).",
"We assume that the decoder translates only if the source summary $S^A$ and the target summary $S^B$ align well.",
"This means that MCLAS is transferring knowledge from $S^A$ to $S^B$.",
"We visualize and probe all 48 self-attention heads in the unified decoder.",
"We find 23 (47.9%) translation heads, defined as heads attending from $y_j^B$ to the corresponding words in language $A$.",
"These heads undertake a translation function.",
"19 (39.6%) heads are local heads, attending to a few words before them and modeling context information.",
"12 (25%) heads are self heads, which only attend to themselves to retain the primary information.",
"Some heads fall into two of these categories, which is why the percentages sum to more than 100%.",
"Note that all of the heads behave similarly across different samples.",
"We find that most of the heads are translation heads, indicating that our unified decoder is translating $S^A$ into $S^B$.",
"We sample some representative heads in Figure 5 to show their functionalities.",
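The head categories above are assigned by visual inspection in the paper; a rough automatic proxy for profiling one decoder self-attention head could look like this, where the attention-mass patterns and the local window size are entirely our invention:

```python
import torch

def head_profile(attn, lsep_pos, window=3):
    """attn: (T, T) attention matrix of one head over the concatenated
    target, averaged over examples; lsep_pos: index of the [LSEP] token
    (assumed to satisfy 0 < lsep_pos < T - 1). Returns the mean attention
    mass behind the 'self', 'local', and 'translation' patterns."""
    T = attn.size(0)
    idx = torch.arange(T)
    dist = idx.view(-1, 1) - idx.view(1, -1)        # query pos minus key pos
    return {
        "self": attn.diagonal().mean().item(),
        "local": attn[(dist > 0) & (dist <= window)].mean().item(),
        # cross-lingual positions attending back into the monolingual summary
        "translation": attn[lsep_pos + 1:, :lsep_pos].mean().item(),
    }
```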
"Analysis on Summarization.",
"To analyze whether the decoder for $S^B$ is simply translating from $S^A$ or whether it also summarizes the source document, we visualize the distribution of the 48 encoder-decoder attention heads.",
"We find 28 (58.3%) summarization heads, which attend to the document's important parts when generating both the monolingual summary and the cross-lingual summary.",
"We also find 20 (41.7%) translation heads, which focus on the source document when generating $S^A$ while focusing on nothing when generating $S^B$.",
"We speculate that the summarization heads are responsible for the summarization function and that the translation heads cut down the relation between $S^B$ and the source document $D^A$, leaving space for translation.",
"Again, all the heads behave similarly across different samples.",
"We select two representative samples in Figure 6.",
"The existence of both summarization and translation heads in the encoder-decoder attention components supports our views: the unified decoder simultaneously conducts translation and summarization.",
"Therefore, our model enhances the interactions between different languages and is able to facilitate cross-lingual summarization under low-resource scenarios.",
"See Appendix B for detailed visualization results.",
"An ideal low-resource experiment should be conducted with real low-resource languages.",
"Although possible, it takes much effort to acquire such datasets.",
"Hence, it is the second-best choice that we simulate our low-resource scenarios by artificially limiting the amount of available data.",
"Some may question the feasibility of our method for real low-resource languages, since the machine translation systems used to generate document-summary pairs would be of lower quality for truly low-resource languages.",
"Regarding this concern, we consider it still possible to acquire thousands of high-quality, human-translated parallel summaries, as Duan et al. (2019b) do for their test set, in order to apply our method.",
"In this paper, we propose a novel multi-task learning framework, MCLAS, to achieve cross-lingual abstractive summarization with limited parallel resources.",
"Our model shares a unified decoder that sequentially generates both monolingual and cross-lingual summaries.",
"Experiments on two cross-lingual summarization datasets demonstrate that our framework outperforms all the baseline models in low-resource and full-dataset scenarios.",
"This work is supported by the Joint Funds of the National Natural Science Foundation of China (Grant No. U19B2020) and the Funds of the Integrated Application Software Project.",
"We appreciate the helpful discussions with Sanxing Chen, Jia-Ao Zhan, Xuyang Lu, Xiao Liu, and Yuxiang Zhou.",
"We also thank all the anonymous reviewers for their insightful suggestions." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "result", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "objective", "method", "objective", "other", "other", "other" ]
[ "In this paper, we introduce a novel methodology to efficiently construct a corpus for question answering over structured data.", "For this, we introduce an intermediate representation that is based on the logical query plan in a database called Operation Trees (OT).", "This representation allows us to invert the annotation process without losing flexibility in the types of queries that we generate.", "Furthermore, it allows for fine-grained alignment of query tokens to OT operations.", "In our method, we randomly generate OTs from a context-free grammar.", "Afterwards, annotators have to write the appropriate natural language question that is represented by the OT.", "Finally, the annotators assign the tokens to the OT operations.", "We apply the method to create a new corpus OTTA (Operation Trees and Token Assignment), a large semantic parsing corpus for evaluating natural language interfaces to databases.", "We compare OTTA to Spider and LC-QuaD 2.0 and show that our methodology more than triples the annotation speed while maintaining the complexity of the queries.", "Finally, we train a state-of-the-art semantic parsing model on our data and show that our corpus is a challenging dataset and that the token alignment can be leveraged to increase the performance significantly.", "Question Answering (QA) over structured data, also called Natural Language Interfaces to Databases (NLI2DB) or Text-to-SQL, is a key task in natural language processing and the semantic web.", "It is usually approached by mapping a natural language question (NL question) into executable queries in formal representations such as logical forms, SPARQL or SQL.", "The state-of-the-art in this problem uses machine learning techniques to learn the mapping.", "Unfortunately, the construction of labeled corpora to train and evaluate NLI2DB systems is timeand cost-intensive, which is slowing down progress in this area.", "In particular, it usually requires recruiting SQL or SPARQL experts to write queries for natural language questions.", "For instance, in Spider (Yu et al., 2018), the authors recruited students to write SQL queries.", "They worked 500 person-hours to generate 5,600 queries, which corresponds to more than 5 minutes per question.", "As a more cost-effective alternative to writing formal queries manually, some authors propose to use templates to generate them automatically.", "For instance, LC-QUAD 2.0 (Dubey et al., 2019) used 22 templates based on the structure of the target knowledge graph.", "Constructing templates is also time-consuming, and the expressiveness of the automatically produced queries is limited.", "Apart from the high cost of generating queries, the natural language questions in current datasets do not necessarily cover the whole range of data present in the database.", "In Spider, the coverage is limited by the creativity of the students, and in LC-QUAD 2.0 by the templates.", "In this paper, we propose a new procedure to increase the speed of the annotation process.", "For this, we first introduce an intermediate representation of the structured queries, which we call Operation Trees (OTs, see Figure 1).", "Our OTs follow a context-free grammar and are based on logical query plans that can easily be mapped to SPARQL or SQL, making our system more versatile.", "In addition, it has been shown that working on abstract tree representations instead of sequences yields better results (Guo et al., 2019).", "Recent work by (Cheng et al., 2019) shows the successful use of tree-like 
abstractions as an intermediate representation to parse text into semantic representations, reinforcing our choice of operation trees as the main representation language.", "Our annotation process works as follows.", "First, we use the context-free grammar to sample random OTs for a given database.", "We then let annotators in a first round write the corresponding NL questions for the sampled OTs.", "In a second, optional, round the annotators perform an assignment of tokens from the NL question to the operations in the OT.", "This additional annotation enriches the information in the dataset, and, as we will show below, allows for performance gains, especially in low data regimes.", "Our approach to producing datasets has the following advantages with respect to the methodology used in previous work: 1) It reduces the time needed for an annotation (less than 2 minutes, compared to more than 5 in Spider).", "2) It allows us to cover the whole range of data present in the database structure rather than focusing only on the most prominent examples.", "3) Our annotation procedure provides alignments between operations in the formal language and words in the question, which are an additional source of supervision when training.", "We applied our approach to five datasets, yielding a large corpus called OTTA, which consists of 3,792 complex NL questions plus their corresponding OTs, as well as the token assignment for one of our domains (the annotation tool is available at https://github.zhaw.ch/semql/annotation_tool and the corpus at https://github.zhaw.ch/semql/semql-data/tree/master/annotated_tree_files/single_files).", "Besides, we have adapted a state-of-the-art system (Yin and Neubig, 2017) to work on operation trees, and included a mechanism to profit from token alignment annotations when training.", "The system yields better results, with up to a 7-point increase, when trained on aligned OTs.", "In this section, we first review the related work in the area of Natural Language Interfaces to Databases (NLI2DB).", "Afterwards, we focus on the data resources that are currently available to evaluate these systems.", "Natural Language Interfaces to Databases.", "There is a vast amount of literature on NLI2DB.", "A recent survey on methods and technologies is provided by (Affolter et al., 2019).", "Early systems use a keyword-based approach with inverted indexes to query the databases (Simitsis et al., 2008; Blunschi et al., 2012; Bast and Haussmann, 2015).", "Pattern-based approaches are able to handle more complex NL questions (Damljanovic et al., 2010; Zheng et al., 2017).", "Parsing-based approaches use a natural language parser to analyze and reason about the grammatical structure of a query (Li and Jagadish, 2014; Saha et al., 2016).", "Grammar-based approaches only allow the user to formulate queries according to certain pre-defined rules, and thus focus primarily on increasing the precision of answers (Song et al., 2015; Ferre, 2017).", "More recent systems use a neural machine translation approach similar to translating between natural languages, say, from French to English (Iyer et al., 2017a; Basik et al., 2018; Cheng et al., 2019; Liu et al., 2019; Guo et al., 2019).", "Data Resources.", "We will now review the major data resources that have recently been used for evaluating NLI2DB systems.", "These resources are mainly created following two approaches: (1) both NL and structured queries are manually created, and (2) structured queries are automatically
generated, and then humans create the corresponding NL questions.", "Regarding fully manually created resources, (Yu et al., 2018) provided Spider, a dataset with 5,600 SQL queries over 200 databases and 10,181 NL questions annotated by 11 students, where some questions were manually paraphrased to increase the variability.", "(Finegan-Dollak et al., 2018) released Advising, with 4.5k questions about university course advising and SQL queries.", "(Dahl et al., 1994) created ATIS, a dataset with 5k user questions about flight booking manually annotated with SQL queries and modified by (Iyer et al., 2017b) to reduce nesting.", "(Zelle and Mooney, 1996) created GeoQuery, with 877 questions about US geography annotated with Prolog and converted to SQL by (Popescu et al., 2003) and (Giordani and Moschitti, 2012).", "There are also smaller datasets about restaurants with 378 questions (Tang and Mooney, 2000), the Yelp website with 196 questions and IMDB with 131 questions (Yaghmazadeh et al., 2017).", "Resources using an automatic step usually rely on generating structured queries using templates created by experts.", "(Zhong et al., 2017) created WikiSQL, a collection of 80k pairs of SQL queries and NL questions made using Wikipedia.", "However, its SQL queries are relatively simple because each of the databases consists of only a single table without foreign keys.", "Hence, the queries do not contain joins.", "(Dubey et al., 2019) developed LC-QuaD 2.0, with 30,000 complex NL questions and SPARQL queries over DBpedia and Wikidata.", "They used templates to generate SPARQL queries for seed entities and relations, which are lexicalized automatically using other templates.", "The NL questions of both datasets were created by crowdsourcing workers.", "All the resources mentioned above required a large amount of effort.", "In each case, the annotators need an in-depth knowledge of SQL or a similarly structured language.", "Our approach simplifies the process of generating question-answering corpora while ensuring a large coverage of the underlying database without forfeiting any complexity in the queries.", "On the other hand, (Wang et al., 2015) developed a method similar to ours.", "They begin with a lexicon linking natural utterances with predicates in the database.", "Then, they use a domain-specific grammar to create several canonical phrases associated with queries.", "Finally, crowdsourcing workers rewrite the canonical phrases and create natural utterances used for training a semantic parser.", "Similar to our approach, they combine an automatic method with crowdsourcing workers.", "However, they have to create the lexicon and the grammar for each database, while our method can be applied to any database without creating new resources.", "In our setting, the goal is to generate an Operation Tree (OT) that finds the correct answer for a given question in natural language.", "An OT is a binary tree that is closely related to a logical query plan in SQL database engines.", "An OT is composed of a sequence of operations that can be mapped to a database query language such as SQL or SPARQL to retrieve the proper result.", "Example.", "Assume that we have a database about movies that we want to query in natural language.", "In Figure 1, an example of an OT is depicted for the question Who starred in 'The Notebook'?", "In order to answer this question, the tables person and movie are selected, then the table movie is filtered by the movie title The Notebook.", "In the next step, the tables are joined via
the bridge-table cast.", "Finally, the person.name column is extracted.", "[Figure 1: the OT for Who starred in 'The Notebook'? — TableScan(person), TableScan(cast), TableScan(movie), Select(movie.title = The Notebook), Join(person.id, cast.person_id), Join(movie.id, cast.movie_id), Projection(person.name)]", "We enhance these OTs by associating a reasonable subset of tokens from the NL question to each operation in the tree.", "For instance, the token starred could be associated to the Join operation, as this operation implies that an actor starred in a movie, whereas the tokens How many could be associated to the Count operation.", "This mapping between tokens and operations will later help to train machine learning algorithms that generate OTs from natural language questions with better quality.", "Definition.", "More formally, the OTs follow a predefined context-free grammar.", "In the current state, the set of operations includes the major operations from relational algebra with specific extensions.", "The full grammar is shown in Figure 2: S ::= done(R) | isEmpty(R) | sum(T, A) | average(T, A) | count(R); R ::= projection(T, A); T ::= tableScan(TN) | selection(T, A, OP, V) | min(T, A) | max(T, A) | distinct(T) | join(T, T, A, A) | union(T, T, A, A) | intersection(T, T, A, A) | difference(T, T, A, A) | averageBy(T, A) | sumBy(T, A) | countBy(T, A); TN ::= table name; A ::= attributes; OP ::= < | > | <= | >= | == | !=; V ::= values.", "Figure 2: The set of production rules for the context-free grammar of the operation trees, where table name denotes the set of all entity types in the database, attributes denotes the set of all attributes of entity types, and values denotes the set of all entries in the database.", "The OTs can be used to represent queries for any entity-relationship data paradigm.", "For instance, in SQL databases the entity types are the tables, the attributes are the columns, and the relationships are represented as tables as well.", "A similar mapping is possible for other paradigms.", "Properties.", "The OTs have several features: Question Types : There are different types of questions that can be asked.", "For instance, 1) yes/no questions ( IsEmpty ), 2) questions about a list of items ( Projection followed by Done ), 3) questions about the cardinality of a result set ( Count ), and 4) questions about an aggregation ( Sum, Avg, etc.).", "Result Types : The type of results is defined by the entity types in the result set.", "For instance, a question can ask about the list of directors that satisfy certain constraints (e.g., all directors that were born in France).", "In this case, the result type would be the person type.", "Constraints : The constraints represent the filters that are applied to the attributes of the entities.", "For instance, All directors born in France sets a constraint on the birth place attribute.", "Entity Types : They define which entity types are involved in the query.", "The selected entity types are combined, usually via a Join operation.", "For instance, in Figure 1 the entity types are movie and person , which are combined with the table cast .", "Aggregation Types : They define reduction operations, which are applied to the data.", "This includes Min/Max operations on an attribute, Set operations on two sets of relations, and Group By operations.",
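To make the example concrete, here is a minimal sketch (hypothetical class and helper names, not the authors' implementation) of how the Figure 1 tree could be represented in code and lowered to SQL; the rendering handles only this flat scan-filter-join-project pattern, not the full grammar above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OTNode:
    op: str                                   # "tableScan", "selection", "join", "projection"
    args: List[str] = field(default_factory=list)
    children: List["OTNode"] = field(default_factory=list)

# Figure 1, bottom-up: scan person/cast/movie, filter movie by title,
# join via the bridge table cast, then project person.name.
tree = OTNode("projection", ["person.name"], [
    OTNode("join", ["movie.id", "cast.movie_id"], [
        OTNode("join", ["person.id", "cast.person_id"], [
            OTNode("tableScan", ["person"]),
            OTNode("tableScan", ["cast"]),
        ]),
        OTNode("selection", ["movie.title", "==", "The Notebook"], [
            OTNode("tableScan", ["movie"]),
        ]),
    ]),
])

def tables(node: OTNode) -> List[str]:
    """Collect all scanned tables below a node."""
    if node.op == "tableScan":
        return [node.args[0]]
    return [t for c in node.children for t in tables(c)]

def joins(node: OTNode) -> List[str]:
    """Collect join conditions as SQL predicates."""
    out = ([f"{node.args[0]} = {node.args[1]}"] if node.op == "join" else [])
    for c in node.children:
        out.extend(joins(c))
    return out

def selections(node: OTNode) -> List[str]:
    """Collect selection filters as SQL predicates."""
    out = []
    if node.op == "selection":
        col, op, val = node.args
        out.append(f"{col} {'=' if op == '==' else op} '{val}'")
    for c in node.children:
        out.extend(selections(c))
    return out

def to_sql(root: OTNode) -> str:
    assert root.op == "projection"
    preds = joins(root) + selections(root)
    sql = f"SELECT {', '.join(root.args)} FROM {', '.join(sorted(set(tables(root))))}"
    return sql + (f" WHERE {' AND '.join(preds)}" if preds else "")

print(to_sql(tree))
# SELECT person.name FROM cast, movie, person
#   WHERE movie.id = cast.movie_id AND person.id = cast.person_id
#   AND movie.title = 'The Notebook'
```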
"Complexity.", "In order to categorize the OTs, we define a complexity score similar to (Yu et al., 2018), which is based on the number of components in the tree.", "The more Join, Group By, Aggregation, or Filter operations a query contains, the higher the score.", "Like (Yu et al., 2018), we define four categories: Easy , Medium , Hard , and Extra Hard .", "The obvious way to construct a corpus with NL questions and their corresponding OT queries would consist of two main parts: first, collect a set of NL questions, and then create the corresponding OT queries for these questions.", "However, this approach is very time-consuming and has a major issue.", "In essence, questions tend to be very narrow in scope, i.e., they do not necessarily cover the whole range of entity types, attributes and relationships that are present in the database.", "Moreover, writing the corresponding OT queries for the NL questions requires sufficient SQL skills as well as a mechanism to verify that the OT statements actually correspond to the question.", "Thus, we decided to invert the process.", "That is, we first randomly sample an OT using the above-defined context-free grammar, and then annotators write a corresponding question in natural language.", "In the last step, annotators manually map tokens of the question to the operations.", "There are several advantages to this procedure: 1) It allows for controlling the characteristics of the OTs, i.e., we can control the question type, the response type, the constraints, and the entity type.", "2) It allows us to create more complex questions that better cover the variety of the underlying data.", "3) The annotation process is less time-consuming, as the annotators do not have to build the trees or write queries.", "Rather, they can focus on writing the question and assigning tokens.", "We now describe the process of automatic sampling and manual annotation in more detail.", "The tree sampling procedure is composed of the following steps: Question Type : This can be sampled at random or be manually set if a certain type is desired.", "Result Type : First, an entity type is randomly sampled.", "Then a specific set of attributes is sampled from the chosen entity type.", "Alternatively, the result type can be manually set.", "Entity Types : The entity types are sampled based on the graph structure of the entities and relationships in the database schema.", "For this, we sample from all the possible join paths that contain the table of the result type.", "This is also controllable, as we can specify the length of the paths we want to consider.", "Constraints : In the constraints, the filter arguments are sampled.", "First, the entity types on which the constraints are to be applied are randomly selected.", "Then we sample an operation and a value at random for each entity type and each attribute.", "We can limit the number of overall constraints and the maximum number of constraints for each entity type.", "Group By : The Group By operations ( AvgBy, SumBy, CountBy ) are chosen at random.", "For a Group By operation, two attributes need to be selected: a group-attribute, which defines on which attribute to group, and an aggregation-attribute, which defines on which column to apply the aggregation.", "For instance, we could group by genre and aggregate over the movie budget.", "Tree structure : The tree structure is sampled as follows.", "First, the Join operations are applied on the sampled entity types.", "Second, the set operations ( Union, Intersect, Diff ) are inserted.", "Third, the Selection operations are inserted.", "Next, the aggregation operations are inserted, i.e., Group By, Min, Max operations.", "Finally, the operations for the question type are sampled.", "For instance, if the question type is a list of entities, then we use the Projection operation, but if it is a cardinality question, we use the Count operation.",
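The following is a simplified sketch of such a sampler over the Figure 2 grammar. It is not the authors' sampler: the real procedure conditions on the database schema (join paths, attribute types, actual cell values) as described above, whereas here the terminal sets are stubbed with a few movie-domain examples and a depth limit forces termination.

```python
import random

RULES = {
    "S": [["done", "R"], ["isEmpty", "R"], ["sum", "T", "A"],
          ["average", "T", "A"], ["count", "R"]],
    "R": [["projection", "T", "A"]],
    "T": [["tableScan", "TN"], ["selection", "T", "A", "OP", "V"],
          ["min", "T", "A"], ["max", "T", "A"], ["distinct", "T"],
          ["join", "T", "T", "A", "A"], ["union", "T", "T", "A", "A"],
          ["intersection", "T", "T", "A", "A"], ["difference", "T", "T", "A", "A"],
          ["averageBy", "T", "A"], ["sumBy", "T", "A"], ["countBy", "T", "A"]],
}
# Stub terminal sets standing in for schema-derived tables, attributes and values.
TERMINALS = {"TN": ["movie", "person", "cast"],
             "A": ["movie.title", "person.name", "movie.year"],
             "OP": ["<", ">", "<=", ">=", "==", "!="],
             "V": ["The Notebook", "2004"]}

def sample(symbol: str = "S", depth: int = 0, max_depth: int = 4):
    """Expand a symbol into a nested (operation, children...) tuple."""
    if symbol in TERMINALS:
        return random.choice(TERMINALS[symbol])
    rules = RULES[symbol]
    if symbol == "T" and depth >= max_depth:      # force termination when deep
        rules = [["tableScan", "TN"]]
    op, *rhs = random.choice(rules)
    return (op, *[sample(s, depth + 1, max_depth) for s in rhs])

random.seed(0)
print(sample())   # a random nested (op, ...) tuple, e.g. ('count', ('projection', ...))
```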
"For instance, if the question type is a list of entities, then we use the Projection operation, but if it is a cardinality question, we use the Count operation.", "This procedure may create trees that make no sense semantically.", "We handle those trees during the annotation phase, which we describe below.", "Furthermore, we make sure that the trees are executable.", "For this, we translate the trees into SQL and run them on the database.", "We also omit trees that return an empty result, as they can lead to confusions during the evaluation, as two different queries that both return an empty result would be counted as being equal.", "The annotation process, i.e., writing natural language questions and assigning query tokens to operations in the OT, is performed in two phases.", "For each phase, we developed a graphical user interface to facilitate the annotation process (for more details, see Appendix D).", "Phase", "1. In the first phase, the annotator is presented with an OT, which is automatically sampled as described in the previous section.", "The task of the annotator is to formulate an appropriate NL question for the sampled OT.", "In some cases, the sampled tree has contradicting or nonsensical constraints (e.g., compute the average year).", "For these cases, the annotators can either skip or adapt the OT by changing the constraints.", "Phase", "2. In the second phase, the annotators perform the token assignment as well as quality control.", "The annotators are presented with an OT and the NL question, which was written by a different annotator in phase", "1. First, they check and correct the NL question, then they assign the tokens to the operations.", "In order to achieve consistent annotation results, we set up a guideline on how the tokens are to be assigned (more information in the Appendix).", "We applied our corpus construction procedure to a set of five databases and produced a new corpus with NL questions and corresponding OTs, called OTTA.", "In order to compare our results with previous work, we used four databases from the Spider corpus (CHINOOK, COLLEGE, DRIVING SCHOOL, and FORMULA I), which we extended with a dump from IMDB 3 that we refer to as MOVIEDATA.", "For the annotations, we employed 22 engineers with basic knowledge in SQL-databases.", "Table 1 summarizes the dataset.", "The number of tables per database ranges from 6 to 18, and the number of attributes ranges from 45 to 93 columns per database.", "For CHINOOK and MOVIEDATA, our corpus has more than 1000 annotated OTs, while it has around 500 annotated OTs for the other three databases.", "For MOVIEDATA, we also performed the token annotation procedure.", "For each database, we computed the average complexity score.", "Except for MOVIEDATA, which is Hard , all other databases have a Medium average query complexity.", "The average time per question annotation ranges from 77 to 104 seconds (average 97.7 seconds).", "The token assignment and question correction, on the other hand, took on average 101 seconds per OT.", "In order to examine our corpus, we compare its characteristics to the Spider corpus and to the LC-QuAD 2.0 corpus.", "We compare the coverage of the queried data, the complexity of the natural language questions and the complexity of the corresponding SPARQL/SQL queries.", "Coverage.", "Table 2 shows the major characteristics of the three corpora.", "We compare the coverage of the databases in terms of the ratio of tables and attributes which appear in the queries.", "The average attribute 
"The average attribute coverage of Spider over all databases equals 62.1%.", "However, more than half of the databases in Spider contain 5 tables or fewer.", "[Table 1 – corpus statistics per database. MOVIEDATA: 18 tables, 64 attributes, 1148 queries, 104 sec per annotation; CHINOOK: 11 tables, 63 attributes, 1067 queries, 104 sec; COLLEGE: 11 tables, 45 attributes, 462 queries, 77 sec; DRIVING SCHOOL: 6 tables, 39 attributes, 547 queries, 78 sec; FORMULA 1: 13 tables, 93 attributes, 568 queries, 104 sec.]", "Thus, we also report the coverage of attributes considering only the databases which have more than 5 tables, where Spider covers only 49.6% of attributes.", "Our corpus OTTA, in contrast, covers 54.4% of all attributes.", "Furthermore, the divide becomes more apparent when we consider databases with larger numbers of tables.", "For instance, for the FORMULA 1 database, our corpus covers 44.2% of all attributes, in contrast to Spider, where only 22.1% of attributes are covered.", "LC-QuaD 2.0 covers 1,310 out of 7,005 properties (i.e., attributes in SQL), which corresponds to 18.7%.", "This is an extensive coverage, considering the high number of properties.", "The table coverage shows a similar picture: our approach covers 94.9% of all tables in the databases, whereas Spider covers 91.7%.", "This number drops down to 87% when considering only databases with more than 5 tables.", "Again, this effect is most pronounced for the FORMULA 1 database, where we cover 92% of the tables, whereas Spider only covers 69.2%.", "This shows that our method scales better to larger databases, which is relevant for real-world applications, where databases with a vast number of tables exist.", "LC-QuaD 2.0 covers around 1.9% of approx. 160k classes, which makes comparison hard, as it is impossible to cover this vast number of classes with 30k queries (for the number of classes and properties in Wikidata, we consulted https://tools.wmflabs.org/sqid).", "Query Complexity.", "In order to compare the complexity of the queries, we examine the number of occurrences of different components in the queries (see Table 3).", "We first observe that our corpus OTTA does not contain any queries with Order By operators or nested queries; however, these could easily be added to the grammar to fill this gap.", "Furthermore, Spider contains more aggregation operations (in particular Min, Max, Count, Average , and Sum ).", "Again, this could easily be adapted in our corpus by sampling more trees that contain these aggregations.", "On the other hand, our corpus stands out in the number of joins per query: on average OTTA has 1.19 join operations per query, in contrast to Spider, which has 0.537 joins per query.", "In fact, about 40% of the queries in Spider contain joins, whereas 54% of the queries in OTTA contain at least one join operation.", "Furthermore, around 37% of our queries contain two joins, in contrast to 9% in Spider.", "On the other hand, LC-QuaD 2.0 contains an average of 2 hops (equivalent to two joins in relational databases) per query, which lies in the nature of graph database queries that are optimized for handling queries that range over multiple triple patterns.", "However, LC-QuaD 2.0 lacks complexity when considering more complex components (e.g., Group By, Set-Operation, etc.).", "In addition to the operations in relational algebra, the OTs also support Boolean questions (i.e., yes/no questions), which make up roughly 16% of our corpus compared to 8.9% in LC-QuaD 2.0.", "Question Complexity.", "The lexical complexity of the NL questions is measured in terms of the mean-segmental token-type ratio (MSTTR) (Covington and McFall, 2010), which computes the number of different token types in relation to all tokens in a corpus.", "The MSTTR is computed over text segments of equal length, in order to avoid biases due to different lengths within the corpora.",
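A minimal sketch of the MSTTR computation (standard formulation, not the authors' script): the token stream is cut into segments of equal length and the type/token ratio is averaged over segments.

```python
def msttr(tokens, segment_len=100):
    """Mean-segmental type-token ratio; assumes len(tokens) >= segment_len."""
    segments = [tokens[i:i + segment_len]
                for i in range(0, len(tokens) - segment_len + 1, segment_len)]
    return sum(len(set(seg)) / segment_len for seg in segments) / len(segments)

toks = ("what is the average unit price of all the tracks "
        "what are the unit prices of tracks composed by alfred ellis").split()
print(round(msttr(toks, segment_len=11), 2))  # 0.82 on this toy input
```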
"First, note that the average length of the questions in all three corpora is approximately the same, between 10.6 and 13.6 tokens on average.", "Table 2 shows that the questions in our corpus have a much higher lexical complexity than those in Spider (0.67 instead of 0.52).", "Thus, our approach seems to avoid trivial or monotonous questions, which also matches our impression from manual inspection.", "On the other hand, the lexical complexity is higher in LC-QuaD 2.0, which is due to the open-domain nature of the dataset.", "Examples.", "In Table 4, we show examples of questions from OTTA compared to questions from Spider.", "The examples show that the quality of the questions is similar.", "The easy questions in both datasets are often only simple filtering questions on one table.", "Medium complexity questions include join operations and filters.", "Hard questions in both datasets include join operations and aggregation operations such as finding the maximum or computing the average.", "The biggest difference is in the Extra complexity.", "There, Spider focuses more on subqueries in the where clause.", "OTTA, on the other hand, focuses more on larger join paths, which are typical for real-world database queries, as well as on group-by operations and aggregations.", "Baseline model.", "As baseline model for generating OTs from NL questions, we follow the Syntactic Neural Model for Code Generation by (Yin and Neubig, 2017), which we refer to as Grammar-RNN (the IR-Net (Guo et al., 2019) is also based on the Grammar-RNN).", "This model is based on an encoder-decoder architecture that learns to generate a sequence of production rules of an arbitrary grammar, which in turn produces the query for a given question.", "For a more detailed discussion of this architecture, we refer the reader to (Yin and Neubig, 2017).", "In our case, it learns to generate the rules defined in Figure 2 for a given question in natural language.", "Based on the generated list of rules, an OT is created.", "We train the model in two phases: a pre-training phase and a supervised phase .", "In the pre-training phase, we train a grammar-autoencoder on large amounts of randomly sampled OTs.", "In the supervised phase, we replace the grammar-encoder by a text encoder and train on the labelled dataset, i.e., the samples with NL question and corresponding OT.", "Encoder.", "For the NL question, we use a standard Gated Recurrent Unit (GRU) (Chung et al., 2014) to encode the question.", "If w_i denotes the representation of the i-th token in the question, then the encoder produces a corresponding hidden state h_i^E.", "Let H^E \in R^{N x h} denote the concatenation of all hidden states produced by the GRU for one question, where N is the number of tokens and h the size of the hidden state.", "Decoder.", "The decoder learns to generate a sequence of production rules with which a tree y is generated for a given encoding x of the NL question.", "The generation process is formalized as: p(y | x) = \prod_{t=1}^{T} p(a_t | x, a_{<t}, a_{p_t}) (1), where a_t is the action taken at time t, a_{<t} are the actions taken before time t, a_{p_t} is the parent action, and x is the encoded input question.",
"There are two different types of rules that the model applies during decoding: 1) If the current rule generates a non-terminal symbol, then ApplyRule[r] is executed, which applies a production rule to the current tree.", "2) If the next symbol is a terminal, then GenToken[v] is applied, which selects the token from a vocabulary.", "In our case, we have different types of tokens to be generated: table names, attribute names and filter operations.", "Similar to Grammar-RNN , we implement the decoder using a recurrent neural network, where the internal state is given by: h_t = GRU([a_{t-1} : c_t : a_{p_t} : n_{f_t}], \tilde{h}_{t-1}) (2), where n_{f_t} is the embedding of the current node type (e.g., average, union, ...), c_t is a context vector that is computed by applying soft attention over the input hidden states H^E, and \tilde{h}_{t-1} is the attentional hidden vector of the previous step.", "In contrast to (Yin and Neubig, 2017), we apply attention based on (Luong et al., 2015), where \tilde{h}_{t-1} = tanh(W_c [h_{t-1} : c_t]).", "[Table 4: Example questions from OTTA and Spider, grouped by hardness score, for the Chinook domain (an online music store database). Easy — Spider: Find the number of albums. / OTTA: Where were the invoices with the total sum of 1.99 or smaller issued?; Medium — Spider: Count the number of tracks that are part of the rock genre. / OTTA: What is the average length of the tracks in the Grunge playlist?; Hard — Spider: What are the names of artists who have not released any albums? / OTTA: What is the album title having the track with the lowest length in milliseconds in the genre name Sci Fi & Fantasy?; Extra — Spider: Count the number of artists who have not released an album. / OTTA: How many different genres do the tracks have, which were bought by customers who live in France?]",
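The decoder step of Eq. (2) together with the Luong-style attention can be sketched compactly in PyTorch; shapes are simplified to a single unbatched example, and all names are illustrative rather than the authors' code.

```python
import torch
import torch.nn.functional as F

h = 64
gru = torch.nn.GRUCell(4 * h, h)     # input: [a_prev : c_t : a_parent : n_type]
W_c = torch.nn.Linear(2 * h, h)      # Luong attentional combination

def decoder_step(a_prev, a_parent, n_type, h_prev, H_E):
    """a_prev/a_parent/n_type: (h,) embeddings; h_prev: (h,); H_E: (N, h)."""
    scores = H_E @ h_prev                               # (N,) alignment scores
    alpha = F.softmax(scores, dim=0)                    # attention weights over tokens
    c_t = alpha @ H_E                                   # (h,) context vector
    h_tilde = torch.tanh(W_c(torch.cat([h_prev, c_t])))  # tilde-h = tanh(W_c [h : c])
    x_t = torch.cat([a_prev, c_t, a_parent, n_type])    # Eq. (2) input
    h_t = gru(x_t.unsqueeze(0), h_tilde.unsqueeze(0)).squeeze(0)
    return h_t, alpha

h_t, alpha = decoder_step(torch.randn(h), torch.randn(h), torch.randn(h),
                          torch.randn(h), torch.randn(10, h))
print(h_t.shape, alpha.shape)  # torch.Size([64]) torch.Size([10])
```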
"For the selection of the terms, we have four output matrices W_R, W_T, W_A, W_C, where W_R encodes the grammar rules (i.e., for the nonterminal symbols), and W_T, W_A, W_C encode the table names, attributes and comparison operations, respectively.", "Depending on the current frontier node, the next output is computed by: a_t = argmax(softmax(W_R h_t)) (3).", "Grammar Encoder.", "The tree encoder, which we use for the pre-training, is based on the same GRU architecture as the decoder.", "The hidden states for each rule are computed by: h_t = GRU([a_{t-1} : a_{p_t} : n_{f_t}], h_{t-1}) (4).", "In contrast to the decoder, there is no context vector c_t.", "Moreover, h_{t-1} is simply the previous hidden state computed by the GRU.", "The output of the encoder is the sequence of all states: H^R \in R^{R x h}, where R denotes the number of rules in the encoded tree.", "Token Attention.", "A straightforward method to include the explicit token alignment, which is created in the second annotation phase, is to force the attention mechanism to learn the alignment.", "For this, we add an extra loss function, which computes the binary cross-entropy for each attention weight.", "More formally, let \alpha_t = softmax(h_{t-1} H^E) \in R^N be the attention weights computed for timestep t (during the pre-training phase H^E is replaced by H^R).", "Then let \alpha_t^{(i)} be the attention weight for the i-th token.", "For each token we add the loss -[g_i log(\alpha_t^{(i)}) + (1 - g_i) log(1 - \alpha_t^{(i)})] (5), where g_i \in [0, 1] denotes whether the token is assigned to the current node or not.",
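The auxiliary loss of Eq. (5) amounts to a per-token binary cross-entropy on the attention weights; a short sketch (not the authors' code) under the same simplified shapes as above:

```python
import torch
import torch.nn.functional as F

def token_attention_loss(h_prev, H_E, gold):
    """h_prev: (h,) decoder state; H_E: (N, h) encoder states;
    gold: (N,) 0/1 gold token-to-node assignment g_i."""
    alpha = torch.softmax(H_E @ h_prev, dim=0)   # attention weights, (N,)
    # Eq. (5): -[g*log(a) + (1-g)*log(1-a)], averaged over tokens.
    return F.binary_cross_entropy(alpha, gold)

loss = token_attention_loss(torch.randn(64), torch.randn(10, 64),
                            (torch.rand(10) > 0.7).float())
print(loss.item())
```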
"We now report the results of our model.", "The details of the experimental setup can be found in Appendix A.", "Each experiment is repeated five times with different random seeds.", "Table 5 shows the precision of the Grammar-RNN on the 5 datasets of OTTA.", "The precision is defined as exact result-set matching between the gold standard query and the generated query.", "Furthermore, the table shows the average precision for each query complexity category.", "The column Weighted Avg. refers to the mean average precision over all queries irrespective of the query complexity category.", "Precision.", "For all the databases except FORMULA 1, the model achieves a precision between 45.1% and 47.5%.", "For FORMULA 1 the model only achieves a score of 26.3%.", "This could be explained by the fact that the FORMULA 1 database contains 93 different attributes, and our data only covers 42 of these attributes.", "Furthermore, each attribute appears in only 17.1 queries on average.", "In contrast, for the COLLEGE database the attributes appear in 56 queries on average.", "Thus, it is harder for the model to learn attributes which do not appear often in the training set.", "For most of the databases, the model cannot handle the extra hard questions, which often contain multiple joins, aggregations, and/or group by operators.", "Note that without the pre-training phase, the scores drop by a large margin.", "For instance, the scores for MOVIEDATA drop below 30% precision.", "Benefit from Token Assignments.", "We now evaluate whether the token assignments can help to train better models.", "Figure 3 displays the learning curves for the MOVIEDATA database with and without the token assignment.", "The model is trained with 20%, 40%, 60%, 80%, and 100% of the data.", "The results show that using the token assignment increases the scores by around 2%.", "In the case of 20% training data, the gain is even as high as 7%, thus showing that the model can benefit from the additional information that is provided in the token assignments.", "In this paper, we introduced a fast annotation procedure to create NL questions and corresponding database queries (in our case, Operation Trees).", "Our procedure more than triples the speed of annotation in comparison to previous methods, while ensuring a larger variety of different types of queries and covering a larger part of the underlying databases.", "Furthermore, our procedure allows a fine-grained alignment of tokens to operations.", "We then used our new method to generate OTTA , a novel corpus for semantic parsing based on operation trees in combination with token assignments.", "Generating this corpus was more time- and cost-efficient than with previous approaches.", "Our statistical analysis showed that the corpus yields a higher coverage of attributes in the databases and more complex natural language questions than other existing methods.", "Furthermore, we implemented a baseline system for automatically generating OTs from NL queries.", "This baseline achieves scores of up to 48% precision, which is already reasonable while also leaving large potential for improvement in future research.", "Finally, we showed that the inclusion of the token alignment results in an increase in precision of up to 7%.", "Based on these results, we will explore ways to leverage the token assignment for domain adaptation and few-shot learning.", "We also plan to enhance the annotation process by automatically generating proposals for the NL questions and token assignments and letting the annotators only perform corrections.", "We hope that this will increase annotation efficiency even more.", "This work has been partially funded by the LIHLITH project supported by the EU ERA-Net CHIST-ERA; the Swiss National Science Foundation [20CH21 174237]; the Agencia Estatal de Investigación (AEI, Spain) projects PCIN-2017-118 and PCIN-2017-085; and the INODE project supported by the European Union's Horizon 2020 research and innovation program under grant
agreement No 863410." ]
[ "objective", "abstain", "method", "abstain", "method", "abstain", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "result", "method", "objective", "objective", "abstain", "result", "method", "method", "method", "method", "abstain", "abstain", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "method", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "result", "method", "abstain", "result", "objective", "method", "abstain", "other" ]
[ "In a context of offensive content mediation on social media now regulated by European laws, it is important not only to be able to automatically detect sexist content but also to identify if a message with a sexist content is really sexist or is a story of sexism experienced by a woman.", "We propose: (1) a new characterization of sexist content inspired by speech acts theory and discourse analysis studies, (2) the first French dataset annotated for sexism detection, and (3) a set of deep learning experiments trained on top of a combination of several tweet's vectorial representations (word embeddings, linguistic features, and various generalization strategies).", "Our results are encouraging and constitute a first step towards offensive content moderation.", "Sexism is prejudice or discrimination based on a person's gender.", "It is based on the belief that one sex or gender is superior to another.", "It can take several forms from sexist remarks, gestures, behaviours, practices, insults to rape or murder.", "Sexist hate speech is a message of inferiority usually directed against women at least in part because they are women, some authors refer to it as: words that wound (Matsuda et al., 1993; Waldron, 2012; Delgado et al., 2015).", "As defined by the Council of Europe, The aim of sexist hate speech is to humiliate or objectify women, to undervalue their skills and opinions, to destroy their reputation, to make them feel vulnerable and fearful, and to control and punish them for not following a certain behaviour 1 .", "Its psychological, emotional and/or physical impacts can be severe.", "In several countries, sexist behaviours are now prohibited.", "See for example the French law of 27 January 2017 related to equality and citizenship, where penalties due to 1 https://rm.coe.int/1680651592 discrimination are doubled (sexism is now considered as an aggravating factor), law that extends to the internet and social media.", "Although overall misogyny and sexism share the common purpose of maintaining or restoring a patriarchal social order, Manne (2017) illustrates the contrast between the two ideologies.", "A sexist ideology (which often consists of assumptions, beliefs, theories, stereotypes and broader cultural narratives that represent men and women) will tend to discriminate between men and women and has the role of justifying these norms via an ideology that involves believing in men's superiority in highly prestigious domains (i.e., represents the justificatory branch of a patriarchal order).", "A misogynistic ideology does not necessarily rely on people's beliefs, values, and theories, and can be seen as a mechanism that has the role of upholding the social norms of patriarchies (i.e., represents the law enforcement branch of a patriarchal order) by differentiating between good women and bad women and punishing those who take (or attempt to take) a man's place in society.", "Considering these definitions, misogyny is a type of sexism.", "In this paper, as we target French sexist messages detection, we consider sexism in its common French usage, i.e. 
discrimination or hate speech against women.", "Social media and web platforms have offered a large space to sexist hate speech (in France, 10% of sexist abuses come from social media (Bousquet et al., 2019)), but they also allow women to share stories of the sexism they experienced (see The Everyday Sexism Project (https://everydaysexism.com/), available in many languages, Paye ta shnek (https://payetashnek.tumblr.com/) in French, or hashtags such as #metoo or #balancetonporc).", "In this context, it is important to automatically detect sexist messages on social platforms and possibly to prevent the wide spreading of gender stereotypes, especially towards young people, which is a first step towards offensive content moderation (see the recommendations of the European Commission (COM, 2017)).", "However, we believe that it is important not only to be able to automatically detect messages with sexist content but also to distinguish between real sexist messages that are addressed to a woman or describe a woman or women in general (e.g., The goalkeeper has no merit in stopping this pregnant woman shooting ), and messages which relate sexism experiences (e.g., He said who's gonna take care of your children when you are at ACL? ).", "Indeed, whereas messages could be reported and moderated in the first case, as recommended by European laws, messages relating sexism experiences should not be moderated.", "As far as we are aware, the distinction between reports/denunciations of sexism experiences and real sexist messages has not been addressed.", "Previous work considers sexism either as a type of hate speech, along with racism, homophobia, or hate speech against immigrants (Waseem and Hovy, 2016; Golbeck et al., 2017; Davidson et al., 2017; Basile et al., 2019; Schrading et al., 2015), or studies it as such.", "In this latter case, detection is cast as a binary classification problem (sexist vs. non-sexist) or as a multi-label classification identifying the type of sexist behaviour (Jha and Mamidi, 2017; Sharifirad et al., 2018; Fersini et al., 2018b; Karlekar and Bansal, 2018; Parikh et al., 2019).", "English is dominant, although Italian and Spanish have already been studied (see the IberEval 2018 (Fersini et al., 2018b), EvalIta 2018 (Fersini et al., 2018a) and HatEval 2019 (Basile et al., 2019) shared tasks).", "This paper proposes the first approach to detect different types of reports/denunciations of sexism experiences in French tweets, based on their impact on the target.", "Our contributions are: (1) a novel characterization of the relation between sexist content and force, inspired by speech act theory (Austin, 1962) and discourse studies in gender (Lazar, 2007; Mills, 2008).", "We distinguish different types of sexist content depending on the impact on the addressee (called 'perlocutionary force'): sexist hate speech directly addressed to a target, sexist descriptive assertions not addressed to the target, or reported assertions that relate a story of sexism experienced by a woman.", "This is presented in Section 3.
Our guiding hypothesis is that indirect acts establish a distancing effect with the reported content and are thus less committal on behalf of the addressee (Giannakidou and Mari, 2021).", "Our take on the issue is language-driven: reported speech is indirect, and it does not discursively involve a call on the addressee to endorse the content of the act.", "(2) The first French dataset of about 12,000 tweets annotated for sexism detection according to this new characterization (available at https://github.com/patriChiril/An-Annotated-Corpus-for-Sexism-Detection-in-French-Tweets).", "Data and manual annotation are described in Section 4.", "(3) A set of experiments to detect sexist content in three configurations: binary classification (sexist content vs. non-sexist), three classes (reporting content vs. non-reporting vs. non-sexist), and a cascade classifier (first sexist content, then reporting).", "We rely on deep learning architectures trained on top of a combination of several vectorial representations of tweets: word embeddings built from different sources, linguistic features, and various generalization strategies to account for sexist stereotypes and the way sexist contents are linguistically expressed (see Section 5).", "Our results, presented in Section 6, are encouraging and constitute a first step towards automatic sexist content moderation.", "Gender in discourse analysis.", "Discourse analysis studies have shown that sexism may be expressed at different levels of linguistic granularity, from lexical to discursive (Cameron, 1992): e.g., women are often designated through their relationship with men or motherhood (e.g., A man killed in shooting vs. Mother of 2 killed in crash ) or by physical characteristics (e.g., The journalist who presents the news vs. The blonde who presents the news ).", "Sexism can also be hostile (e.g., The world would be a better place without women ) or benevolent, where messages are subjectively positive and sexism is expressed in the form of a compliment (e.g., Many women have a quality of purity that few men have ) (Glick and Fiske, 1996).", "In communication studies, the analysis of political discourse (Bonnafous, 2003; Coulomb-Gully, 2012) and of sexist abuse or media discourse (Dai and Xu, 2014; Biscarrat et al., 2016) shows that the presentation of women politicians is stereotyped: use of physical or clothing characteristics, reference to private life, etc.", "From a sociological perspective, studies focus on social media content (tweets) or SMS in order to analyze public opinion on gender-based violence (Purohit et al., 2016) or violence and sexist behaviours (Barak, 2005; Megarry, 2014).", "Gender bias in word embeddings.", "Bolukbasi et al. (2016) have shown that word embeddings trained on news articles exhibit female/male gender stereotypes.", "Several algorithms have since been proposed to attenuate this bias (Dev and Phillips, 2019) or to make embeddings gender-neutral (Zhao et al., 2018), although Gonen and Goldberg (2019) consider that bias removal techniques are insufficient.", "Debiased embeddings were used by Park et al. (2018), who observed a decrease in sexism detection performance compared to the non-debiased model.", "To overcome this limitation, Badjatiya et al. (2019) propose neural methods for stereotypical bias removal for hate speech detection (i.e., hateful vs.
non-hateful).", "They first identify a set of bias sensitive words, then mitigate their impact by replacing them with their POS, NER tags, K-nearest neighbours and hypernyms obtained via WordNet.", "Automatic sexism detection.", "To our knowledge, the automatic detection of sexist messages currently deals only with English, Italian and Spanish.", "For example in the Automatic Misogyny Identification (AMI) shared task at IberEval and EvalIta 2018, the tasks consisted in detecting sexist tweets and then identifying the type of sexist behaviour according to a taxonomy defined by (An-zovino et al., 2018): discredit, stereotype, objec-tification, sexual harassment, threat of violence, dominance and derailing.", "Most participants used SVM models and ensemble of classifiers for both tasks with features such as n-grams and opinions (Fersini et al., 2018b).", "These datasets have also been used in the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter shared task at SemEval 2019.", "Best results were obtained with an SVM model using sentence embeddings as features (Indurthi et al., 2019).", "There are also a few notable neural network techniques.", "Jha and Mamidi (2017) employ an LSTM model to classify messages as: benevolent, hostile and non-sexist.", "Zhang and Luo (2018) implement two deep neural network models (CNN + Gated Recurrent Unit layer and CNN + modified CNN layers for feature extraction) in order to classify social media texts as racist, sexist, or non-hateful.", "Karlekar and Bansal (2018) use a single-label CNN-LSTM model with character-level embeddings to classify three forms of sexual harassment: commenting, ogling/staring, and touching/groping.", "Sharifirad et al. (2018) focus on diverse forms of sexist harassment (indirect, information threat, sexual, physical) using LSTM and CNN on augmented dataset obtained via ConceptNet is-a relationships and Wiki-data.", "Finally, (Parikh et al., 2019) consider messages of sexism experienced by women in the Ev-eryday Sexism Project web site and classify them according to 23 non mutually exclusive categories using LSTM, CNN, CNN-LSTM and BERT models trained on top of several distributional representations (character, subwords, words and sentence) along with additional linguistic features.", "In this paper, we propose different deep learning architectures to detect reporting of sexist acts and, more importantly, distinguishing them from real sexist messages.", "We explore BERT contextual-ized word embeddings trained from several sources (tweets, Wikipedia) complemented with both linguistic features and generalization strategies.", "These strategies are designed to force the classifier to learn from generalized concepts rather than words, which may be rare in the corpus.", "We, therefore, adopt several replacement combinations based on a taxonomy of stereotyped gendered words coupled with additional sexist vocabularies extending Badjatiya et al. 
(2017) approach, designed for hate speech detection, to sexist content detection.", "Propositional content can be introduced in discourse by acts of varying force (Austin, 1962): it can be asserted (e.g., Paul is cleaning up his room ), questioned (e.g., Is Paul cleaning up his room? ), or asked to be performed, as with imperatives (e.g., Paul, clean up your room! ).", "In philosophy of language on the one hand, and feminist philosophy on the other, speech acts have already been invoked in a variety of manners.", "Most accounts, however, either focus on the type of act (assault-like, propaganda, authoritative, etc.) that derogatory language performs (Langton, 2012; Bianchi, 2014) or concentrate on the analytical level at which the derogatory content is interpreted, whether it provides meaning at the level of the presupposition (or, more largely, non-at-issue content (Potts, 2005)) or of the assertion (Cepollaro, 2015).", "We have chosen to distinguish cases where the addressee is directly addressed from those in which she is not, as done in hate speech analysis.", "For example, Waseem et al. (2017) and ElSherief et al. (2018) consider that directed hate speech is explicitly directed at a person while generalized hate speech targets a group.", "For (Ousidhoum et al., 2019), a hateful tweet is direct when the target is explicitly named, or indirect when less easily discernible.", "Unlike these approaches and the definitions of target used in (Basile et al., 2019; Fersini et al., 2018a), we do not consider the number of targets of a sexist message (it can indifferently be a woman, a group of women or all women) but rather distinguish the target from the addressee.", "Our use of the notions of directness and indirectness is also transverse to the ones used in (Lazar, 2007; Chew and Kelley-Chew, 2007) or (Mills, 2008), who resort to the label indirectness for subtle forms of sexism that perpetuate gender stereotypes through humor, presuppositions, metaphors, etc.", "We newly consider three different stages on the scale of 'directedness' of an assertion: assertions directed to the addressee, descriptive assertions not directed to a particular addressee, and reported assertions.", "All these three types of acts can contain subtle and non-subtle sexist content.", "The main goal of our classification is thus to focus on the impact of the content by resorting to the force of the act and not only to its content.", "Sexist content in directed assertions is explicitly addressed to a target, but contrary to the other approaches cited above, the target can be a woman, a group of women or all women.", "Across the different classifications of speech acts (Portner, 2018), 'direct' speech acts such as imperatives are addressee-oriented and require that the addressee perform an action (responding (with questions) or acting (with imperatives)).", "Indirect speech acts are not addressee-oriented.", "Assertions themselves can be direct or indirect.", "They are direct when they are in the second person ('you'), as shown in (1) and (2) (linguistic clues are underlined; note that the translations may not feel natural: we kept the same words in English as in French in order to better illustrate the type/semantics of the words used, keeping in mind that tweets are often not well written in French as well as in English).", "They require that the addressee be committed to the truthfulness of their content.", "Since a direct sexist assertion is a type of speech act that immediately involves the addressee and triggers a request of commitment,
direct assertions of sexism have been ranked as the most prominent expressions of sexism, with a greater impact on the victim.", "Most prominently, with assertions, directedness is the trigger of perlocutionary content, rendering the assertion an 'insult'.", "(1) T'es une femme je serai jamais d'accord avec toi pour du foot (You're a woman, I'll never agree with you about football)", "(2) les femmes qui sont en plus Dijonnaise ne parlez pas de foot sivouplai c'est comme si un aveugle manchot parler de passer le permis (women who are also from Dijon please don't talk about football, it's as if a one-handed blind person was thinking about getting a driving license)", "Descriptive assertions are not directed to an addressee: the target can be a woman, a group of women, or all women; the target can be named but is not the addressee.", "Descriptive assertions are in the third person and thus may have a lower impact on the receiver in comparison with second-person assertions.", "They do not commit the addressee to the truth of the content by soliciting a response.", "They report generic content (Mari et al., 2012).", "Linguistic clues can be the presence of a named entity as the target or the use of generalizing terms, as shown in (3) and (4).", "(3) Anne Hidalgo est une femme. Les femmes aiment faire le ménage. Anne Hidalgo devrait donc nettoyer elle-même les rues de Paris (Anne Hidalgo is a woman. Women love cleaning the house. Anne Hidalgo should therefore clean the streets of Paris herself)", "(4) une femme a besoin d'amour de remplir son frigo, si l'homme peut le lui apporter en contrepartie de ses services (ménages, cuisine, etc) j'vois pas elle aurait besoin de quoi d'autre (A woman needs love, to fill the fridge; if a man can give this to her in return for her services (housework, cooking, etc), I don't see what else she needs)", "Finally, in reported assertions , the sexist content is a report of an experience or a denunciation of a sexist behaviour.", "They may elicit an even lower commitment on behalf of the addressee.", "The speaker is not committed to the truth of a reported content (as in I heard that you were coming too ).", "However, when reporting sexist content, the speaker is still conveying a lack of commitment, and a general sense of disapproval or dismissal may emerge.", "In these messages, we observe the presence of reporting verbs, quotations, locations (as reports often mention public spaces where the experience happened) or specific hashtags, as shown in (5), (6) and (7).", "(5) je m'assoupis dans le métro, je rouvre les yeux en sentant quelque chose de bizarre : la main de l'homme assis à côté de moi sur ma cuisse. #balancetonporc (I doze in the subway, I open my eyes feeling something weird: the hand of the man sitting next to me on my leg #SquealOnYourPig)", "(6) Mon patron m'a demandé : qui va cuisiner pour ton mari quand tu seras pas là ? (My boss asked me: who's going to cook for your husband when you're away?)", "(7) Je ne suis pas une grande fan de Theresa May mais pourquoi parler de ses escarpins et ses cuissardes vernies et la traiter d'allumeuse ? #vincenthervouet #sexisme http://eur1.fr/nADYIMw (I am not a big fan of Theresa May but why talk about her shoes and varnished boots and call her a tease?
"As it appears, all three types of assertions have sexist content, but only the first two are sexist in the strict sense.", "Indeed, direct and descriptive assertions are first-hand information, whereas reported ones are second-hand information.", "As such, they may trigger a different reaction from the receiver: in the first two cases, a female receiver can be immediately involved as the target of the sexist dismissal; in the third case, she is the witness of a sexist report.", "Our corpus is new and contains French tweets collected between October 2017 and May 2018.", "In order to collect sexist and non-sexist tweets, we followed the approach of Anzovino et al. (2018), using:", "(i) a set of representative keywords: femme, fille (woman, girl), enceinte (pregnant), some activities (cuisine (cooking), football, ...), insults, etc.,", "(ii) the names of women/men who are potential victims of, or guilty of, sexism (mainly politicians),", "(iii) specific hashtags to collect stories of sexism experiences (footnote 6): #balancetonporc, #sexisme, #sexiste, #SexismeOrdinaire, #EnsembleContreLeSexisme, #payetashnek, #payetontaf, etc.", "Footnote 6: The distribution of these hashtags is very similar in both non-sexist and sexist tweets, which considerably reduces the bias while collecting the data.", "The tweets collected with these hashtags may contain reported sexist acts towards both men and women.", "We thus collected around 205,000 tweets, among which about 70,000 contain the specific hashtags.", "Given a tweet, annotation consists in assigning it one of the following five categories: direct, descriptive, reporting (as defined in the previous section), non-sexist, and no decision.", "A tweet is non-sexist when it has no sexist content (it may contain a specific hashtag, but the content is not sexist), as in (8).", "No decision refers to cases where the tweet lacks context, or where the sexist content is not in the text but only in a photo, video, or URL (because we cannot process them).", "(8) La créatrice du #balancetonporc attaquée en justice pour diffamation (France's #MeToo creator on trial for defamation)", "300 tweets were used for the training of 5 annotators (master's degree students in Communication and Gender, 3 female and 2 male) and then removed from the corpus.", "Then, 1,000 tweets were annotated by all annotators so that inter-annotator agreement could be computed.", "Although the perception of sexism is often considered subjective, the average Cohen's kappa is 0.72 for the sexist content/non-sexist/no decision categories and 0.71 for the direct/descriptive/reporting/non-sexist/no decision categories, which indicates strong agreement.", "We noticed that the kappa scores between female annotators are very close to those between male annotators.", "For these 1,000 tweets, the final labels were assigned according to a majority vote.", "Finally, a total of 11,834 tweets were annotated according to the guidelines, after removing 1,053 tweets annotated as no decision.", "Among them, 65.80% are non-sexist and 34.20% contain sexist content (of which 79.61% are reporting, 1.12% direct, and 19.27% descriptive).", "We then divided the corpus into train and test sets (footnote 7; cf. Table 1).", "Footnote 7: All the hyperparameters were tuned on the validation set (20% of the training dataset), such that the best validation error was obtained.", "To identify reported assertions, we performed three classification tasks: (BIN) sexist content vs. non-sexist; (3-CLASS) sexist tweets (i.e., direct and descriptive) vs. reporting tweets vs. non-sexist; and (CASC) a cascade classification with sexist content vs. non-sexist in the first stage, followed by reporting vs. non-reporting in the second stage.", "To this end, we experiment with several deep learning models (footnote 8), including the best-performing state-of-the-art models for sexism detection.", "Footnote 8: We also experimented with standard feature-based models, but the results were lower.", "CNN.", "This model has already been used by Karlekar and Bansal (2018).", "It uses FastText French word vectors pre-trained on Wikipedia and Common Crawl, and three 1D convolutional layers, each using 100 filters and a stride of 1 but different window sizes (2, 3, and 4, respectively), with a ReLU activation function.", "We further downsample the output of these layers with a 1D max pooling layer (with a pool size of 4), and feed its output to the final softmax layer (a minimal sketch of this architecture is given below).", "CNN-LSTM.", "This model is similar to Karlekar and Bansal (2018) and Parikh et al. (2019), except that we used word-level embeddings instead of character/sentence-level ones, as the latter gave lower results.", "It extends the previous CNN model with an LSTM layer (capable of capturing the order of a sequence) that takes its input from the max pooling layer (footnote 9).", "Footnote 9: We also experimented with a GRU, following Zhang and Luo (2018), but the results were not conclusive.", "Next, a global max pooling layer feeds the highest value in each timestep dimension to a final softmax layer.", "BiLSTM with attention.", "This model, also used by Parikh et al. (2019), relies on a bidirectional LSTM with an attention mechanism that attends over all hidden states and generates attention coefficients.", "The hidden states are then averaged using the attention coefficients to generate the final state, which is fed to a one-layer feed-forward network to obtain the final label prediction.", "We experimented with different hidden state vector sizes, dropout values, and attention vector sizes.", "The results reported in this paper were obtained using 300 hidden units, a 150-dimensional attention vector, a dropout of 50%, and the Adam optimizer with a learning rate of 10^-3.", "BERT_base.", "This model uses the pre-trained multilingual BERT model (BERT-Base, Multilingual Cased) (Devlin et al., 2019), on top of which we added an untrained layer of neurons.", "We used the HuggingFace PyTorch implementation of BERT (Wolf et al., 2019), which we trained for 3 epochs.", "BERT_R.", "We observed that about 47% of the tweets embed at least one URL.", "Given the short length of a tweet, this is useful for amplifying the message while minimizing the time it takes to compose it.", "In order to feed more information to the classifier, instead of removing the URLs or replacing them with placeholder tokens, as is usually done in hate speech detection, we propose to substitute them with the title found at the given URL (footnote 10).", "Footnote 10: In case a particular web page is no longer available, the URL is removed from the tweet.", "In addition, based on the assumption that word embeddings capture the meaning of words better than emoji embeddings capture the meaning of emojis, we followed the strategy proposed by Singh et al. (2019) and replaced all emojis with their detailed descriptions (footnote 11).", "Footnote 11: We relied on a manually built emoji lexicon that contains 1,644 emojis along with their polarity and detailed description.", "Replacing URLs and emojis improved the results for all the models we tested, so we report only the results obtained after these replacements.",
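For concreteness, here is a minimal Keras sketch of the CNN baseline described above. The vocabulary size, sequence length, number of classes, and the handling of the pre-trained embedding matrix are assumptions; only the convolutional hyperparameters (filters, windows, pooling) are taken from the text.

```python
# Sketch of the CNN baseline (Karlekar and Bansal, 2018) described above:
# three parallel 1D convolutions (100 filters, stride 1, windows 2/3/4, ReLU)
# over frozen FastText embeddings, 1D max pooling (pool size 4), then softmax.
# MAX_LEN, VOCAB, EMB_DIM and N_CLASSES are illustrative placeholders.
import tensorflow as tf
from tensorflow.keras import layers

MAX_LEN, VOCAB, EMB_DIM, N_CLASSES = 64, 50_000, 300, 2

tokens = layers.Input(shape=(MAX_LEN,), dtype="int32")
# Passing weights=[fasttext_matrix] here would load the pre-trained
# French FastText vectors; we leave the embedding randomly initialized
# so the sketch stays self-contained.
emb = layers.Embedding(VOCAB, EMB_DIM, trainable=False)(tokens)

branches = []
for width in (2, 3, 4):  # the three window sizes used in the paper
    conv = layers.Conv1D(100, width, strides=1, activation="relu")(emb)
    pooled = layers.MaxPooling1D(pool_size=4)(conv)
    branches.append(layers.Flatten()(pooled))

merged = layers.Concatenate()(branches)
probs = layers.Dense(N_CLASSES, activation="softmax")(merged)
model = tf.keras.Model(tokens, probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```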
replacements.", "BERTR own emb + base .", "Following (Parikh et al., 2019), we also experiment stacking multiple embeddings.", "We tailored a pre-trained BERT model 12 for which we used the whole non annotated dataset (i.e., 205 , 000 tweets).", "The original BERT model uses a WordPiece tokenizer, which is not available in OpenSource.", "Instead, we used a SentencePiece 13 tokenizer in unigram mode.", "Training the model using the Google Cloud infrastructure with the default parameters for 1 million steps took approximately 3 days.", "BERTR features .", "We relied on state of the art features that have shown to be useful for the task of hate speech detection: Surface features (tweet length in words, the presence of personal 10 In case a particular web page is not available anymore, the URL is removed from the tweet.", "We relied on a manually built emoji lexicon that contains 1,644 emojis along with their polarity and detailed description.", "12 We experimented with different configurations by incorporating different French pre-trained embeddings available: Glove (Pennington et al., 2014), FastText (Grave et al., 2018), Flair (Akbik et al., 2018) and CamemBERT (Martin et al., 2019) but none of the configurations were able to achieve results better than BERT base .", "13 https://github.com/google/ sentencepiece pronoun and third-person pronoun, punctuation marks, URLs, images, hashtags, @userMentions and the number of words written in capital), Emoji features 11 (number of positive and negative emo-jis), Opinion features (number of positive, negative and neutral words in each tweet relying on opinion (Benamara et al., 2014), emotion (Piolat and Bannour, 2009) and slang French lexicons.", "We also account for hedges (negation and modality), reporting verbs, imperative verbs, and verbs used for giving advice.", "BERTR gen .", "Sexism is often expressed by using gender stereotypes, i.e., ideas whereby women and men are arbitrarily assigned characteristics and roles determined and limited by their gender.", "In order to force the classifier to learn from generalized concept rather than words which may be rare in the corpus, we adopt several replacement combinations extending (Badjatiya et al., 2017)'s approach consisting in replacing some words/expressions that trigger sexist content by their generalized term.", "However, instead of using a flat list composed of most frequent words that appear in a particular class and then replace them by similarity relationships, we rather rely on manually built lists of words 14 often used in sexist language (hereafter <SexistVocabulary> ): designations (around 10 words such as femme (woman), fille (girl), nana (doll), ... ), insults (around 400 words/expressions extracted from GLAWI (Hathout and Sajous, 2016), a machine-readable French Dictionary); and 130 gender stereotyped words grouped according to the following taxonomy as usually defined in gender studies (see Section 2): physical characteristics (e.g. petite (little), bouche (mouth), robe (dress), ... for women; petit (little), gros (fat), ... for men), behavioural characteristics (e.g. bavarde (gossipy), jalouse (jealous), tendre (loving), ... for women; macho, viril (virile) , ... for men), and type of activities (e.g. m ` ere (mother), cuisine (cooking), in-firmi ` ere (nurse), ... for women; football, m edecin (doctor), ... 
for men).", "Only 1% of all these words have been used as keywords to collect the corpus.", "In addition, we also built two other lists: names (952/832 female/male firstnames to detect named entities) and around 170 words/expressions for places as they are mainly useful for detection of reporting messages since they represent public spaces 14 Following (Badjatiya et al., 2017), we also experiment with automatic word lists but the results were not conclusive as frequent words were too generic and not representative of the problem we want to solve.", "rue (street), bureau (office), ... ).", "We experimented with distinct generalization strategies: hypernym replacement gen(Hypernym) (e.g., little is replaced by <PhysicalCharacteristics> ), gendered hypernym replacement gen(Hypernym gendered) (e.g., dress is replaced by <femalePhysicalCharacteristics> ) as well as generic replacement gen(SexistVocabulary) (e.g., both little and doll are replaced by the same tag <SexistVocabulary> ), etc., where X in BERTR features+X indicates the adopted replacement strategy.", "Table 2 presents the results for the best state of the art models for the task of sexism detection (CNN, BiLSTM with attention, CNN-LSTM) applied on the BIN task in terms of accuracy (A), macro-averaged F-score (F), precision (P) and recall (R) with the best results in bold.", "None of these models were able to achieve results better than BERT base .", "For this reason, we chose BERT base as our baseline and trained it on top of several vectorial representations, as explained in Section 5. CLASSIFIERA F P R CNN 0.684 0.601 0.635 0.571 CNN+LSTM 0.676 0.640 0.623 0.657 BiLSTM attention 0.695 0.527 0.501 0.554 BERT base 0.773 0.723 0.726 0.721 Table 2: Results for BIN classification.", "As shown in Table 3, we observe that training BERT with stacked embeddings did not improve over BERT base .", "Replacing URLs and emojis with respectively the words within the title link and emoji description boosts the results by 1 .", "7 % and 1 .", "2 % in terms of accuracy while adding linguistic features to the embeddings increases the results for both the BIN and 3-C LASS configurations.", "We, therefore, keep BERTR features as basis for the rest of the models.", "Concerning the generalization strategies, all replacements were productive and outperformed all the previous models, observing that gendered replacements are better.", "This shows that forcing the classifier to learn from general concepts is a good strategy for sexism content detection.", "In particular, we observe that the best replacement depends on the task: For BIN , it is place and gen-C LASSIFIERBIN 3-C LASSA F P R A F P R BERT base 0.773 0.723 0.726 0.721 0.714 0.540 0.572 0.515 BERTR 0.790 0.762 0.767 0.759 0.726 0.567 0.609 0.531 BERTR own emb+base 0.768 0.751 0.712 0.795 0.708 0.526 0.605 0.513 BERTR features 0.795 0.787 0.819 0.761 0.754 0.588 0.625 0.556 BERTR features+gen(Hypernym) 0.806 0.804 0.835 0.776 0.763 0.614 0.649 0.598 BERTR features+gen(Hypernym gendered) 0.809 0.807 0.840 0.777 0.767 0.635 0.663 0.620 BERTR features+gen(Name) 0.790 0.796 0.830 0.766 0.755 0.620 0.656 0.606 BERTR features+gen(Name gendered) 0.815 0.806 0.841 0.775 0.760 0.643 0.665 0.630 BERTR features+gen(SexistVocabulary gendered) 0.801 0.807 0.836 0.781 0.764 0.635 0.654 0.627 BERTR features+gen(Place) 0.826 0.813 0.848 0.782 0.769 0.655 0.673 0.646 BERTR features+gen(Place+Hypernym) 0.803 0.799 0.836 0.766 0.758 0.622 0.654 0.610 BERTR features+gen(Place+Hypernym gendered) 0.819 0.811 0.846 0.779 
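A minimal sketch of the generalization strategies just described. The lexicons below are illustrative stand-ins for the manually built lists (designations, insults, stereotyped words, first names, places); the real lists are far larger.

```python
# Sketch of gen(.) replacement: trigger words are mapped to their
# generalized tag before the tweet is fed to the classifier.
# The lexicon contents are illustrative assumptions.
LEXICONS = {
    "<femalePhysicalCharacteristics>": {"petite", "bouche", "robe"},
    "<malePhysicalCharacteristics>": {"petit", "gros"},
    "<location>": {"rue", "bureau", "métro"},
    "<femaleName>": {"anne", "ségolène"},
}

def generalize(tokens, strategies):
    """Replace trigger words by their generalized tag, e.g. gen(Place)."""
    out = []
    for tok in tokens:
        for tag in strategies:          # e.g. ["<location>", "<femaleName>"]
            if tok.lower() in LEXICONS.get(tag, ()):
                out.append(tag)
                break
        else:
            out.append(tok)
    return out

# gen(Place + Name gendered), the best BIN configuration in Table 3:
print(generalize("Anne marchait dans la rue".split(),
                 ["<location>", "<femaleName>"]))
# ['<femaleName>', 'marchait', 'dans', 'la', '<location>']
```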
"Table 2 presents the results of the best state-of-the-art models for the task of sexism detection (CNN, BiLSTM with attention, CNN-LSTM) applied to the BIN task, in terms of accuracy (A), macro-averaged F-score (F), precision (P) and recall (R), with the best results in bold.", "None of these models achieved better results than BERT_base.", "For this reason, we chose BERT_base as our baseline and trained it on top of several vectorial representations, as explained in Section 5.", "Table 2: Results for BIN classification.
Classifier        A     F     P     R
CNN               0.684 0.601 0.635 0.571
CNN+LSTM          0.676 0.640 0.623 0.657
BiLSTM attention  0.695 0.527 0.501 0.554
BERT_base         0.773 0.723 0.726 0.721", "As shown in Table 3, training BERT with stacked embeddings did not improve over BERT_base.", "Replacing URLs and emojis with, respectively, the words of the linked page title and the emoji description boosts the results by 1.7% and 1.2% in terms of accuracy, while adding linguistic features to the embeddings improves the results for both the BIN and 3-CLASS configurations.", "We therefore keep BERT_R features as the basis for the rest of the models.", "Concerning the generalization strategies, all replacements were productive and outperformed all the previous models, and we observe that gendered replacements are better.", "This shows that forcing the classifier to learn from general concepts is a good strategy for sexist content detection.", "In particular, we observe that the best replacement depends on the task: for BIN, it is place and gendered names, whereas for 3-CLASS, it is place and gendered hypernyms.", "Table 3: Results for the most productive models for BIN and 3-CLASS classification (per block: A, F, P, R).
Classifier                                                  BIN: A F P R                | 3-CLASS: A F P R
BERT_base                                                   0.773 0.723 0.726 0.721     | 0.714 0.540 0.572 0.515
BERT_R                                                      0.790 0.762 0.767 0.759     | 0.726 0.567 0.609 0.531
BERT_R own-emb+base                                         0.768 0.751 0.712 0.795     | 0.708 0.526 0.605 0.513
BERT_R features                                             0.795 0.787 0.819 0.761     | 0.754 0.588 0.625 0.556
BERT_R features+gen(Hypernym)                               0.806 0.804 0.835 0.776     | 0.763 0.614 0.649 0.598
BERT_R features+gen(Hypernym gendered)                      0.809 0.807 0.840 0.777     | 0.767 0.635 0.663 0.620
BERT_R features+gen(Name)                                   0.790 0.796 0.830 0.766     | 0.755 0.620 0.656 0.606
BERT_R features+gen(Name gendered)                          0.815 0.806 0.841 0.775     | 0.760 0.643 0.665 0.630
BERT_R features+gen(SexistVocabulary gendered)              0.801 0.807 0.836 0.781     | 0.764 0.635 0.654 0.627
BERT_R features+gen(Place)                                  0.826 0.813 0.848 0.782     | 0.769 0.655 0.673 0.646
BERT_R features+gen(Place+Hypernym)                         0.803 0.799 0.836 0.766     | 0.758 0.622 0.654 0.610
BERT_R features+gen(Place+Hypernym gendered)                0.819 0.811 0.846 0.779     | 0.771 0.652 0.689 0.630
BERT_R features+gen(Place+Name gendered)                    0.837 0.824 0.865 0.787     | 0.769 0.629 0.657 0.615
BERT_R features+gen(Place+Hypernym gendered+Name gendered)  0.819 0.818 0.857 0.783     | 0.764 0.634 0.662 0.618", "In both cases, replacing only public spaces with the generic <location> was one of the best strategies, with 0.826 and 0.769 accuracy for BIN and 3-CLASS, respectively.", "Multiple replacements (cf. the last line in the table) were, however, less productive.", "Table 4 further details the results per class for the best-performing systems for each task (i.e., those in bold in Table 3).", "For 3-CLASS, we observe that the results are lower for the sexist content (direct and descriptive) class, but this might also be a consequence of the low number of instances annotated as such (footnote 15).", "Footnote 15: We tried augmenting the number of instances in these classes by replacing the words/phrases that belong to the sexist vocabulary and stereotyped word lists (cf. Section 5) with their top-10 word2vec neighbours (i.e., for each instance we obtain 10 more), but the results were not conclusive; more accurate data augmentation techniques can be investigated.", "Cascading models are known to be very accurate and can be used in the context of moderation, where we cannot afford to take action against users who are following the guidelines and policies.", "In the first stage, we used the best-performing model for sexist content vs. non-sexist classification (i.e., BERT_R gen(Place+Name gendered)); a compact sketch of the cascade is given at the end of this section.", "The instances classified as containing sexist content by the first model were then used as the test set for the second model (the best-performing model for the 3-CLASS classification task in terms of F-score, i.e., BERT_R gen(Place)).", "In Table 4, the results corresponding to the non-sexist class of the CASC classifier show the improvement brought by the second-stage classifier, i.e., it was able to correct (predict as non-sexist) instances that were misclassified during the first stage.", "The last line of Table 4 presents the overall results obtained after the two stages of classification.", "The results show an improvement over the best 3-CLASS system, demonstrating the usefulness of a cascading approach with increasing system complexity.", "A manual error analysis shows that misclassification cases are due to several factors, among them humor and satire (as in (9)) or the use of stereotypes (as in (10)), mainly because they are not expressed by a single word or expression but by metaphors.", "In the examples below, the underlined words highlight the leading cause of misclassification.", "(9) Ma femme est hystorique. C'est comme hystérique, sauf que lorsqu'elle pète un câble elle me sort des vieux dossiers. (My wife is hystorical. That's like hysterical, except that when she's angry she pulls out old files)", "(10) je demande pas ce qu'elle a fait sous le bureau pour arriver à ce plateau (I'm not asking what she did under the desk to be on this TV set)", "In particular, for reporting tweets, we found many misclassified messages without any reporting verb or quotes, as in (11), but also messages denouncing sexism using situational irony, as in (12).", "(11) Royal les rendrait-elle tous fous ? Alain Destrem (UMP) : Ségolène Royal en boubou bleu, ça me rappelle ma femme de ménage ! (Does Royal make them all crazy? Alain Destrem (UMP): Ségolène Royal wearing a blue boubou, it reminds me of my cleaning woman!)", "(12) Continuons à communier... Notre héros national avait des comptes en Suisse et n'était pas loin du #balancetonporc... Mais bon communions, rassemblons-nous... (Let's keep on being united... Our national hero had bank accounts in Switzerland and was not far from #SquealOnYourPig... But OK, let's be united, let's get together...)", "7 Conclusion", "In this paper, we have presented the first approach to distinguishing reports/denunciations of sexism from genuinely sexist content that is directly addressed to a target or describes a target.", "We proposed a new dataset of about 12,000 French tweets annotated according to a new characterization of sexist content inspired by both speech act theory and discourse studies in gender.", "We then experimented with several deep learning models in binary, three-class, and cascade classifier configurations, showing that BERT trained on word embeddings, linguistic features, and generalization strategies (i.e., place and hypernym replacements) achieved the best results in all configurations, and that cascade classification successfully corrects misclassified non-sexist messages.", "These results are encouraging and demonstrate that detecting reporting assertions of sexism is possible, which is a first step towards automatic offensive content moderation.", "In the future, we plan to develop more complex models to be added in the next stages of the cascade classifier, as well as to automatically identify irony, gender stereotypes, and sexist vocabulary.", "This work is funded by the Institut Carnot Cognition under the project SESAME." ]
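A compact sketch of the CASC configuration described above. The two classifiers are placeholders for the fine-tuned BERT_R models (gen(Place+Name gendered) for stage 1, gen(Place) for stage 2) and are assumed to expose a predict method returning a label string.

```python
# Sketch of the two-stage cascade: stage 1 separates sexist from non-sexist
# content; only tweets flagged as sexist are passed to stage 2.
def cascade_predict(tweets, stage1, stage2):
    labels = []
    for tweet in tweets:
        if stage1.predict(tweet) == "non-sexist":
            labels.append("non-sexist")
        else:
            # Stage 2 may still output "non-sexist", which is how the
            # cascade corrects stage-1 false positives (cf. Table 4).
            labels.append(stage2.predict(tweet))
    return labels
```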
[ "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "other", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other" ]
[ "We introduce Span-ConveRT , a light-weight model for dialog slot-filling which frames the task as a turn-based span extraction task.", "This formulation allows for a simple integration of conversational knowledge coded in large pretrained conversational models such as ConveRT (Henderson et al., 2019a).", "We show that leveraging such knowledge in Span-ConveRT is especially useful for few-shot learning scenarios: we report consistent gains over 1) a span extractor that trains representations from scratch in the target domain, and 2) a BERT-based span extractor.", "In order to inspire more work on span extraction for the slot-filling task, we also release RESTAURANTS -8 K , a new challenging data set of 8,198 utterances, compiled from actual conversations in the restaurant booking domain.", "Conversational agents are finding success in a wide range of well-defined tasks such as customer support, restaurant, train or flight bookings (Hemphill et al., 1990; Williams, 2012; El Asri et al., 2017; Budzianowski et al., 2018), language learning (Raux et al., 2003; Chen et al., 2017), and also in domains such as healthcare (Laranjo et al., 2018) or entertainment (Fraser et al., 2018).", "Scaling conversational agents to support new domains and tasks, and particular system behaviors is a highly challenging and resource-intensive task: it critically relies on expert knowledge and domain-specific labeled data (Williams, 2014; Wen et al., 2017b,a; Liu et al., 2018; Zhao et al., 2019).", "Slot-filling is a crucial component of any task-oriented dialog system (Young, 2002, 2010; Belle-garda, 2014).", "For instance, a conversational agent for restaurant bookings must fill all the slots date , Both authors contributed equally to the work.", "time and number of guests with correct values given by the user (e.g. tomorrow , 8pm , 3 people ) in order to proceed with a booking.", "A particular challenge is to deploy slot-filling systems in low-data regimes (i.e., few-shot learning setups), which is needed to enable quick and wide portability of conversational agents.", "Scarcity of in-domain data has typically been addressed using domain adaption from resource-rich domains, e.g. through multitask learning (Jaech et al., 2016; Goyal et al., 2018) or ensembling (Jha et al., 2018; Kim et al., 2019).", "In this work, we approach slot-filling as a turn-based span extraction problem similar to Rastogi et al. 
(2019): in our Span-ConveRT model we do not restrict values to fixed categories, and simultaneously allow the model to be entirely independent of other components in the dialog system.", "In order to facilitate slot-filling in resource-lean settings, our main proposal is the effective use of knowledge coded in representations transferred from large general-purpose conversational pretraining models, e.g., the ConveRT model trained on a large Reddit data set (Henderson et al., 2019a).", "To help guide other work on span extraction-based slot-filling, we also present a new data set of 8,198 user utterances from a commercial restaurant booking system: RESTAURANTS -8 K .", "The data set spans 5 slots ( date , time , people , first name , last name ) and consists of actual user utterances collected in the wild.", "This comes with a broad range of natural and colloquial expressions, 1 as illustrated in Figure 1, which makes it both a natural and challenging benchmark.", "Each training example is a dialog turn annotated with the slots requested by the system and character-based span indexing for all occurring values.", "pre-1 For instance, a value for the slot people can either be a number like 7 , or can be expressed fully in natural language, e.g., me and my husband .", "training is instrumental to span extraction performance in few-shot setups.", "By using subword representations transferred from ConveRT (Hen-derson et al., 2019a), we demonstrate that: 1) our ConveRT-backed span extraction model outperforms the model based on transferred BERT representations, and 2) it also yields consistent gains over a span extraction model trained from scratch in the target domains, with large gains reported in few-shot scenarios.", "We verify both findings on the new RESTAURANTS -8 K data set, as well as on four DSTC 8-based data sets (Ras-togi et al., 2019).", "All of the data sets used in this work are available online at: https://github.", "com/PolyAI-LDN/task-specific-datasets .", "Before we delve into describing the core methodology, we note that in this work we are not concerned with the task of normalizing extracted spans to their actual values: this can be solved effectively with rule-based systems after the span extraction step for cases such as times, dates, and party sizes.", "There exist hierarchical rule-based parsing engines (e.g., Duckling) that allow for parsing times and dates such as the day after next Tuesday .", "Further, phrases such as Me and my wife and 2 kids can be parsed using singular noun and number counts in the span with high precision.", "Span Extraction for Dialog.", "We have recently witnessed increasing interest in intent-restricted approaches (Coucke et al., 2018; Goo et al., 2018; Chen et al., 2019) for slot-filling.", "In this line of work, slot-filling is treated as a span extraction problem where slots are defined to occur only with certain intents.", "This solves the issue of complex categorical modeling but makes slot-filling dependent on an intent detector.", "Therefore, we propose a framework that treats slot-filling as a fully intent-agnostic span extraction problem.", "Instead of using rules to constrain the co-occurrence of slots and intents, we identify a slot as either a single span of text or entirely absent.", "This makes our approach more flexible than prior work; it is fully independent of other system components.", "Regardless, we can explicitly capture turn-by-turn context by adding an input feature denoting whether a slot was requested for this dialog 
turn (see Figure 1).", "Pretrained Representations.", "Large-scale pretrained models have shown compelling benefits in a plethora of NLP applications (Devlin et al., 2019; Liu et al., 2019): such models drastically lessen the amount of required task/domain-specific training data with in-domain fine-tuning.", "This is typically achieved by adding a task-specific output layer to a large pretrained encoder and then fine-tuning the entire model (Xie et al., 2019).", "However, this process requires a fine-tuned model for each slot or domain, rather than a single model shared across all slots and domains.", "This adds a large memory and computational overhead and makes the approach impractical in real-life applications.", "Therefore, we propose to keep the pretrained encoder models fixed in order to emulate a production system where a single encoder model is used.", "2 Underlying Representation Model: ConveRT.", "ConverRT (Henderson et al., 2019a) is a lightweight sentence encoder implemented as a dual-encoder network that models the interaction between inputs/contexts and relevant (follow-up) responses.", "In other words, it performs conversational pretraining based on response selection on the Reddit corpus (Henderson et al., 2019a,b).", "It utilizes subword-level tokenization and is very compact and resource-efficient (i.e. it is 59MB in size and can be trained in less than 1 day on 12 GPUs) while achieving state-of-the-art performance on conversational tasks (Casanueva et al., 2020; Bunk et al., 2020).", "Through pretrained ConveRT representa-2 In other words, we do not fine-tune the parameters of the pretrained encoders which would require running a separate encoder for each slot.", "This would mean, for example, we would need 100 fine-tuned encoders running in production to support 100 different slots.", "As the encoder models have both high memory and runtime requirements, this would drastically increase the running costs of a conversational system.", "Span ConveRT: Final Model.", "We now describe our model architecture, illustrated in Figure", "2. 
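As an aside, the rule-based normalization of extracted spans discussed earlier in this section (e.g., for party sizes) can be approximated in a few lines. This is a minimal sketch under stated assumptions: the word lists are illustrative, not the production rules.

```python
# Sketch of rule-based normalization for the `people` slot: count singular
# person nouns and literal numbers inside an extracted span.
# PERSON_WORDS and PLURALS are illustrative assumptions.
import re

PERSON_WORDS = {"me", "i", "wife", "husband", "friend", "kid", "adult"}
PLURALS = {"kids": "kid", "adults": "adult", "friends": "friend"}

def party_size(span: str) -> int:
    count, pending_number = 0, 1
    for raw in re.findall(r"[a-z0-9]+", span.lower()):
        word = PLURALS.get(raw, raw)
        if raw.isdigit():
            pending_number = int(raw)   # applies to the next person noun
        elif word in PERSON_WORDS:
            count += pending_number
            pending_number = 1
    return count

print(party_size("Me and my wife and 2 kids"))  # -> 4
```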
"Span-ConveRT: Final Model.", "We now describe our model architecture, illustrated in Figure 2.", "Our approach builds on established sequence tagging models using Conditional Random Fields (CRFs) (Ma and Hovy, 2016; Lample et al., 2016).", "We propose to replace the LSTM part of the model with fixed ConveRT embeddings (footnotes 3 and 4).", "Footnote 3: As we show later in Section 4, we can also leverage BERT-based representations in the same span extraction framework, but our ConveRT-based span extractors result in higher performance.", "Footnote 4: LSTMs are known to be computationally expensive and to require large amounts of resources to obtain any notable success (Pascanu et al., 2013); by utilizing ConveRT instead, we arrive at a much more lightweight and efficient model.", "We take contextualized subword embeddings from ConveRT, giving a sequence of the same length as the subword-tokenized sentence.", "For sequence tagging, we train a CNN and a CRF on top of these fixed subword representations.", "We concatenate three binary features to the subword representations to emphasize important textual characteristics: (1) whether the token is alphanumeric, (2) whether it is numeric, and (3) whether it starts a new word.", "In addition, we concatenate the character length of the token as another integer feature.", "To incorporate the requested-slots feature, we concatenate a binary feature representing whether the slot is requested to each embedding in the sequence.", "To contextualize the modified embeddings, we apply a dropout layer followed by a series of 1D convolutions of increasing filter width.", "Spans are represented using a sequence of tags, indicating which members of the subword token sequence are in the span.", "We use a tag representation similar to the IOB format, annotating the span with a sequence of before, begin, inside, and after tags; see Figure 2 for an example.", "The distribution of the tag sequence is modeled with a CRF whose parameters are predicted by a CNN that runs over the contextualized subword embeddings v.", "At each step t, the CNN outputs a 4x4 matrix of transition scores W_t and a 4-dimensional vector of unary potentials u_t.", "The probability of a predicted tag sequence y is then modeled as: p(y | v) ∝ ∏_{t=1}^{T-1} exp(W_t[y_{t+1}, y_t]) ∏_{t=1}^{T} exp(u_t[y_t]).", "The loss is the negative log-likelihood, equal to minus the sum of the transition scores and unary potentials that correspond to the true tag labels, up to a normalization term.", "The top-scoring tag sequences can be computed efficiently using the Viterbi algorithm (a minimal sketch is given below).",
Span-BERT relies on BERT-base, with 12 transformer layers and 768-dim embeddings.", "ConveRT uses 6 transformer layers with 512-dim embeddings, so it is roughly 3 times smaller.", "Following prior work (Coucke et al., 2018; Rastogi et al., 2019), we report the F 1 scores for extracting the correct span per user utterance.", "If the models extract part of the span or a longer span, this is treated as an incorrect span prediction.", "Few-Shot Scenarios.", "For both data sets, we measure performance on smaller sets sampled from the full data.", "We gradually decrease training sets in size whilst maintaining the same test set: this provides insight on performance in low-data regimes.", "The results across all slots are summarized in Table 3 for RESTAURANTS -8 K , and in Table 4 for DSTC 8.", "First, we note the usefulness of conversational pretraining and transferred representations: Span-ConveRT outperforms the two baselines in almost all evaluation runs, and the gain over V-CNN-CRF directly suggests the importance of transferred pretrained conversational representations.", "Second, we note prominent gains with Span-ConveRT especially in few-shot scenarios with reduced training data: e.g., the gap over V-CNN-CRF widens from 0.02 on the full RESTAURANTS -8 K training set to 0.15 when using only 64 training examples.", "Simi-Fraction Span-ConveRT V-CNN-CRF Span-BERT Buses_11(1133) 0.92 0.93 0.89 1 / 2 (566) 0.87 0.83 0.84 1 / 4 (283) 0.87 0.77 0.80 1 / 8 (141) 0.79 0.71 0.62 1 / 16 (70) 0.60 0.53 0.44 Events_11(1498) 0.92 0.92 0.79 1 / 2 (749) 0.86 0.84 0.73 1 / 4 (374) 0.81 0.77 0.70 1 / 8 (187) 0.65 0.54 0.36 1 / 16 (93) 0.66 0.52 0.42 Homes_11(2064) 0.98 0.95 0.97 1 / 2 (1032) 0.96 0.90 0.94 1 / 4 (516) 0.95 0.88 0.87 1 / 8 (258) 0.92 0.82 0.80 1 / 16 (129) 0.88 0.69 0.70 RentalCars_11(874) 0.91 0.89 0.89 1 / 2 (437) 0.87 0.83 0.82 1 / 4 (218) 0.81 0.69 0.74 1 / 8 (109) 0.75 0.59 0.56 1 / 16 (54) 0.62 0.31 0.38 Table 4: Average F 1 scores on the DSTC 8 single-domain datasets.", "Again, this indicates that general-purpose conversational knowledge coded in ConveRT can indeed boost dialog modeling in low-data regimes.", "If suf-ficient domain-specific data is available (e.g., see the results of V-CNN-CRF with full data), learning domain-specialized representations from scratch can lead to strong performance, but using transferred conversational representations seems to be widely useful and robust.", "We also observe consistent gains over Span-BERT, and weaker performance of Span-BERT even in comparison to V-CNN-CRF in some runs (see Table 3).", "These results indicate that for conversational end-applications such as slot-filling, pretraining on a conversational task (such as response selection) is more beneficial than standard language modeling-based pretraining.", "Our hypothesis is that both the vanilla baseline and ConveRT leverage some domain adaptation: ConveRT is trained on rich conversational data, while the baseline representations are learned directly on the training data.", "BERT, on the other hand, is not trained on conversational data directly and usually relies on much longer passages of text.", "This might not make the BERT representations suitable for conversational tasks such as span extraction.", "Similar findings, where ConveRT-based conversational representations outperform BERT-based baselines (even with full fine-tuning), have recently been established in other dialog tasks such as intent detection (Hen-derson et al., 2019a; Casanueva et al., 2020; Bunk et al., 2020).", "In 
general, our findings also call for investing more effort in investigating different pretraining strategies that are better aligned to target tasks (Mehri et al., 2019; Henderson et al., 2019a; Humeau et al., 2020).", "Error Analysis.", "To better understand the performance of Span-ConveRT on the RESTAURANTS 8 K data set, we also conducted a manual error analysis, comparing it with the best performing baseline model, V-CNN-CRF.", "In Appendix C we lay out the types of errors that occur in a generic span extraction task and investigate the distribution of these types of errors across slots and models.", "We show that when trained in the high-data setting the distribution is similar between the two models, suggesting that gains from Span-ConveRT are across all types of error.", "We also show that the distribution varies more in the low-data setting and discuss how that might impact their comparative performance in practice.", "Additionally, in Appendix D we provide a qualitative analysis on the errors the two models make for the slot first name .", "We show that the baseline model has a far greater tendency to wrongly identify generic out-of-vocabulary words as names.", "We have introduced Span-ConveRT , a light-weight model for dialog slot-filling that approaches the problem as a turn-based span extraction task.", "The formulation allows the model to effectively leverage representations available from large-scale conversational pretraining.", "We have shown that, due to pretrained representations, Span-ConveRT is especially useful in few-shot learning setups on small data sets.", "We have also introduced RESTAURANTS 8 K , a new challenging data set that will hopefully encourage further work on span extraction for dialogue.", "In future work, we plan to experiment with multi-domain span extraction architectures.", "We thank the three anonymous reviewers for their helpful suggestions and feedback.", "We are grateful to our colleagues at PolyAI, especially Georgios Spithourakis and Iigo Casanueva, for many fruitful discussions and suggestions." ]
[ "abstain", "abstain", "result", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "result", "objective", "method", "other", "other" ]
[ "In entity linking, mentions of named entities in raw text are disambiguated against a knowledge base (KB).", "This work focuses on linking to unseen KBs that do not have training data and whose schema is unknown during training.", "Our approach relies on methods to flexibly convert entities with several attribute-value pairs from arbitrary KBs into flat strings, which we use in conjunction with state-of-the-art models for zero-shot linking.", "We further improve the generalization of our model using two regularization schemes based on shuffling of entity attributes and handling of unseen attributes.", "Experiments on English datasets where models are trained on the CoNLL dataset, and tested on the TAC-KBP 2010 dataset show that our models are 12% (absolute) more accurate than baseline models that simply flatten entities from the target KB.", "Unlike prior work, our approach also allows for seamlessly combining multiple training datasets.", "We test this ability by adding both a completely different dataset (Wikia), as well as increasing amount of training data from the TAC-KBP 2010 training set.", "Our models are more accurate across the board compared to baselines.", "Entity linking consists of linking mentions of entities found in text against canonical entities found in a target knowledge base (KB).", "Early work in this area was motivated by the availability of large KBs with millions of entities (Bunescu and Pasca, 2006).", "Most subsequent work has followed this tradition of linking to a handful of large, publicly available KBs such as Wikipedia, DBPedia (Auer et al., 2007) or the KBs used in the now decade-old TAC-KBP challenges (McNamee and Dang, 2009; Ji et al., 2010).", "As a result, previous work always assumes complete knowledge of the schema of the target KB that entity linking models are trained for, i.e. how many and which attributes are used to represent entities in the KB.", "This allows training supervised machine learning models that exploit the schema along with labeled data that link mentions to this a priori known KB.", "However, this strong assumption breaks down in scenarios which require linking to KBs that are not known at training time.", "For example, a company might want to automatically link mentions of its products to an internal KB of products that has a rich schema with several attributes such as product category, description, dimensions, etc.", "It is very unlikely that the company will have training data of this nature, i.e. 
mentions of products linked to its database.", "Our focus is on linking entities to unseen KBs with arbitrary schemas.", "One solution is to annotate data that can be used to train specialized models for each target KB of interest, but this is not scalable.", "A more generic solution is to build entity linking models that work with arbitrary KBs.", "We follow this latter approach and build entity linking models that link to target KBs that have not been observed during training.", "1 Our solution builds on recent models for zero-shot entity linking (Wu et al., 2020; Logeswaran et al., 2019).", "However, these models assume the same, simple KB schema during training and inference.", "We generalize these models to handle different KBs during training and inference, containing entities represented with an arbitrary set of attribute-value pairs.", "This generalization relies on two key ideas.", "First, we convert KB entities into strings that are consumed by the models for zero-shot linking.", "Central to the string representation are special tokens called attribute separators , which represent frequently occurring attributes in the training KB(s), and carry over their knowledge to unseen KBs during inference (Section 4.1).", "Second, we generate more flexible string representations by shuffling entity attributes before converting them to strings, 1 Unseen KBs\" refers to scenarios where we neither know the entities in the KB, nor its schema.", "and by stochastically removing attribute separators to generalize to unseen attributes (Section 4.2).", "Our primary experiments are cross-KB and focus on English datasets.", "We train models to link to one KB during training ( viz. Wikidata), and evaluate them for their ability to link to an unseen KB ( viz. the TAC-KBP Knowledge Base).", "These experiments reveal that our model with attribute-separators and the two generalization schemes are 1214% more accurate than the baseline zero-shot models.", "Ablation studies reveal that all components individually contribute to this improvement, but combining all of them yields the most accurate models.", "Unlike previous work, our models also allow seamless mixing of multiple training datasets which link to different KBs with different schemas.", "We investigate the impact of training on multiple datasets in two sets of experiments involving additional training data that links to", "(a) a third KB that is different from our original training and testing KBs, and", "(b) the same KB as the test data.", "These experiments reveal that our models perform favorably under all conditions compared to baselines.", "Conventional entity linking models are trained and evaluated on the same KB, which is typically Wikipedia, or derived from Wikipedia (Bunescu and Pasca, 2006; Ling et al., 2015).", "This limited scope allows models to use other sources of information to improve linking, including alias tables, frequency statistics, and rich metadata.", "Beyond Conventional Entity Linking There have been several attempts to go beyond such conventional settings, e.g. by linking to KBs from diverse domains such as the biomedical sciences (Zheng et al., 2014; D'Souza and Ng, 2015) and music (Oramas et al., 2016) or even being completely domain and language independent (Wang et al., 2015; Onoe and Durrett, 2020).", "Lin et al. (2017) discuss approaches to link entities to a KB that simply contains a list of names without any other information.", "Sil et al. 
"Sil et al. (2012) use database-agnostic features to link against arbitrary databases.", "However, their approach still requires training data from the target KB.", "In contrast, this work aims to train entity linking models that do not rely on training data from the target KB, and that can be trained on arbitrary KBs and applied to a different set of KBs.", "Pan et al. (2015) also perform unsupervised entity linking by generating rich context representations for mentions using Abstract Meaning Representations (Banarescu et al., 2013), followed by unsupervised graph inference to compare contexts.", "They assume a rich target KB that can be converted to a connected graph.", "This works for Wikipedia and adjacent resources, but not for arbitrary KBs.", "Logeswaran et al. (2019) introduce a novel zero-shot framework to develop entity linking systems that can generalize to unseen 'specialized entities'.", "Table 1 summarizes differences between our framework and those from prior work.", "Linking models in this work are based on BERT (Devlin et al., 2019).", "While many studies have tried to explain the effectiveness of BERT for NLP tasks (Rogers et al., 2020), the work by Tenney et al. (2019) is most relevant, as they use probing tasks to show that BERT encodes knowledge of entities.", "This has also been shown empirically by many works that use BERT and other contextualized models for entity linking and disambiguation (Broscheit, 2019; Shahbazi et al., 2019; Yamada et al., 2020; Févry et al., 2020; Poerner et al., 2020).", "Entity linking consists of disambiguating entity mentions M from one or more documents against a target knowledge base, KB, containing unique entities.", "We assume that each entity e ∈ KB is represented using a set of attribute-value pairs {(k_i, v_i)}_{i=1}^{n}.", "The attributes k_i collectively form the schema of KB.", "The disambiguation of each m ∈ M is aided by the context c in which m appears.", "1. Candidate generation: the objective of this stage is to select K candidate entities E ⊂ KB for each mention m ∈ M, where K is a hyperparameter and K << |KB|.", "Typically, models for candidate generation are less complex (and hence less precise) than those used in the following (re-ranking) stage, since they handle all entities in KB.", "Instead, the goal of these models is to produce a small but high-recall candidate list E.", "Ergo, the success of this stage is measured using a metric such as recall@K, i.e., whether the candidate list contains the correct entity (a sketch of this retrieval step is given at the end of this section).", "2. Candidate re-ranking: this stage ranks the candidates in E by how likely they are to be the correct entity.", "Unlike candidate generation, models for re-ranking are typically more complex and oriented towards generating a high-precision ranked list, since the objective of this stage is to identify the most likely entity for each mention.", "This stage is evaluated using precision@1 (or accuracy), i.e., whether the highest-ranked entity is the correct entity.", "In traditional entity linking, the training mentions M_train and test mentions M_test both link to the same KB.", "Even in the zero-shot settings of Logeswaran et al. (2019), while the training and target domains and KBs are mutually exclusive, the schema of the KB is constant and known.", "On the contrary, our goal is to link test mentions M_test to a knowledge base KB_test which is not known during training.", "The objective is to train models on mentions M_train that link to KB_train and directly use these models to link M_test to KB_test.", "The starting point (and baselines) for our work are the state-of-the-art models for zero-shot entity linking, which we briefly describe here (Wu et al., 2020; Logeswaran et al., 2019) (footnote 2).", "Footnote 2: We re-implemented these models and verified them by comparing results with those in the original papers.",
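At inference time, the candidate generation stage defined above reduces to a nearest-neighbor search over precomputed entity vectors. This is a minimal sketch under stated assumptions: the two encoders (which the baseline described next realizes as fine-tuned BERT models) are treated as black boxes that return fixed-size vectors.

```python
# Sketch of the retrieval operation behind candidate generation: embed the
# mention in context, embed every KB entity once, and return the K entities
# most similar to the mention vector by dot product.
import numpy as np

def generate_candidates(mention_vec: np.ndarray,
                        entity_matrix: np.ndarray, K: int = 32):
    """entity_matrix: [|KB|, dim]; returns indices of the top-K entities."""
    scores = entity_matrix @ mention_vec     # similarity to every entity
    return np.argpartition(-scores, K)[:K]   # unordered top-K, O(|KB|)
```

Entity vectors can be computed once per KB and cached, so only the mention encoder runs per query; success is measured as recall@K, as noted above.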
This raises the question of whether such models, that harness the power of pre-trained language models, generalize to linking mentions to unseen KBs, including those without such textual descriptions. This section presents multiple ideas to this end. 4.1 Representing Arbitrary Entities using Attribute Separators One way of using these models for linking against arbitrary KBs is by defining an attribute-to-text function f , that maps arbitrary entities with any set of attributes { k i , v i } ni =1 to a string representation e that can be consumed by BERT, i.e. e = f ( { k i , v i } ni =1 ) . If all entities in the KB are represented using such string representations, then the models described in Section 3 can directly be used for arbitrary schemas. This leads to the question: how can we generate string representations for entities from arbitrary KBs such that they can be used for BERT-based models ? Alternatively, what form can f take? A simple answer to this question is concatenation of the values v i , given by f ( { k i , v i } ni =1 ) = v 1 v 2 ... v n . We can improve on this by adding some structure to this representation by teaching our model that the v i belong to different segments. As in the baseline candidate re-ranking model, we do this by separating them with [SEP] tokens. We call this [SEP]-separation . This approach is also used by Logeswaran et al. (2019) and Mulang' et al. (2020) name : Douglas Adams place of birth : Cambridge occupation : novelist employer : BBC Douglas Adams novelist Cambridge BBC [SEP] Douglas Adams [SEP] novelist [SEP] Cambridge [SEP] BBC [NAME] Douglas Adams [OCCUPATION] novelist [SEP] Cambridge [SEP] BBC [SEP] Separation Concatenation Attribute Separation f ( ) Figure 1: Shown here are three ways of representing an entity with arbitrary attribute-values (Section 4.1).", "We capture this information using attribute separators , which are reserved tokens (in the vein of [SEP] tokens) corresponding to attributes.", "In this case, f ( { k i , v i } ni =1 ) = [ K 1 ] v 1 [ K 2 ] v 2 ... 
"Figure 1 illustrates the three instantiations of f.", "In all cases, attribute-value pairs are ordered in descending order of the frequency with which they appear in the training KB.", "Finally, since both the candidate generation and candidate re-ranking models we build on use BERT, the techniques discussed here can be applied to both stages, but we focus only on re-ranking.", "Building models for entity linking against unseen KBs requires that such models do not overfit to the training data by memorizing characteristics of the training KB.", "We achieve this by using two regularization schemes applied on top of the candidate string generation techniques discussed in the previous section.", "The first scheme, which we call attribute-OOV, prevents models from relying too heavily on individual [K_i] tokens and helps them generalize to attributes that are not seen during training.", "Analogous to how out-of-vocabulary tokens are commonly handled (Dyer et al., 2015, inter alia), every [K_i] token is stochastically replaced with the [SEP] token during training with probability p_drop.", "This encourages the model to encode the semantics of the attributes not only in the [K_i] tokens, but also in the [SEP] token, which is used when unseen attributes are encountered during inference.", "The second regularization scheme discourages the model from memorizing the order in which particular attributes occur.", "Under attribute-shuffle, every time an entity is encountered during training, its attribute-value pairs are randomly shuffled before the entity is converted to a string representation using the techniques from Section 4.1.", "Our held-out test bed is the TAC-KBP 2010 data (LDC2018T16; footnote 4), which consists of documents from English newswire, discussion forum, and web data (Ji et al., 2010).", "Footnote 4: https://catalog.ldc.upenn.edu/LDC2018T16", "The target KB (KB_test) is the TAC-KBP Reference KB and is built from English Wikipedia articles and their associated infoboxes (LDC2014T16; footnote 5).", "Footnote 5: https://catalog.ldc.upenn.edu/LDC2014T16", "Our primary training and validation data is the CoNLL-YAGO dataset (Hoffart et al., 2011; footnote 6), which consists of documents from the CoNLL 2003 Named Entity Recognition task (Tjong Kim Sang and De Meulder, 2003) linked to multiple KBs.", "Footnote 6: http://resources.mpi-inf.mpg.de/yago-naga/aida/download/aida-yago2-dataset.zip", "To ensure that our training and target KBs are different, we use Wikidata as our training KB (footnote 7).", "Footnote 7: Retrieved from https://dumps.wikimedia.org.", "Specifically, we use the subset of entities from Wikidata with a Wikipedia page.", "We ignore all mentions without a corresponding entity in the KB, both during training and inference, leaving the task of handling such NIL entities to future work.", "Finally, we use the Wikia dataset (Logeswaran et al., 2019) for experiments that investigate the impact of multiple datasets (Section 5.5).", "Table 2 describes the sizes of these various datasets, along with the number of entities in their respective KBs.", "Table 2: Number of mentions in our training, validation, and test sets, along with the number of entities in their respective KBs.
Dataset              Number of mentions  Size of target KB
CoNLL-YAGO (train)   18.5K               5.7M
CoNLL-YAGO (val.)    4.8K
Wikia (train)        49.3K               0.5M
Wikia (val.)         10.0K
TAC-KBP 2010 (test)  1.7K                0.8M", "While covering similar domains, Wikidata and the TAC-KBP Reference KB have different schemas.", "Wikidata is more structured, and entities are associated with statements represented using attribute-value pairs, which are short snippets rather than full sentences.", "The TAC-KBP Reference KB contains both short snippets like these, along with the text of the Wikipedia article of the entity.", "The two KBs also differ in size, with Wikidata containing almost seven times the number of entities in TAC-KBP.", "Both during training and inference, we retain only the 100 most frequent attributes in the respective KBs.", "The attribute separators (Section 4.1) are created for the 100 most frequent attributes in the training KB.", "Candidates and mentions (with context) are represented using strings of 128 subword tokens each, across all models.",
to multiple KBs (http://resources.mpi-inf.mpg.de/yago-naga/aida/download/aida-yago2-dataset.zip).", "(Table 2, number of mentions and size of the respective target KB: CoNLL-YAGO train 18.5K mentions, 5.7M entities; CoNLL-YAGO val. 4.8K mentions; Wikia train 49.3K mentions, 0.5M entities; Wikia val. 10.0K mentions; TAC-KBP 2010 test 1.7K mentions, 0.8M entities.)", "To ensure that our training and target KBs are different, we use Wikidata as our training KB (retrieved from https://dumps.wikimedia.).", "Specifically, we use the subset of entities from Wikidata with a Wikipedia page.", "We ignore all mentions without a corresponding entity in the KB, both during training and inference, leaving the task of handling such NIL entities to future work.", "Finally, we use the Wikia dataset (Logeswaran et al., 2019) for experiments that investigate the impact of multiple datasets (Section 5.5).", "Table 2 describes the sizes of these various datasets along with the number of entities in their respective KBs.", "While covering similar domains, Wikidata and the TAC-KBP Reference KB have different schemas.", "Wikidata is more structured and entities are associated with statements represented using attribute-value pairs, which are short snippets rather than full sentences.", "The TAC-KBP Reference KB contains both short snippets like these, along with the text of the Wikipedia article of the entity.", "The two KBs also differ in size, with Wikidata containing almost seven times the number of entities in TAC-KBP.", "Both during training and inference, we only retain the 100 most frequent attributes in the respective KBs.", "The attribute-separators (Section 4.1) are created corresponding to the 100 most frequent attributes in the training KB.", "Candidates and mentions (with context) are represented using strings of 128 sub-word tokens each, across all models.", "All BERT models are uncased BERT-base models with 12 layers, 768 hidden units, and 12 heads with default parameters, and trained on English Wikipedia and the BookCorpus.", "The probability $p_{drop}$ for attribute-OOV is set to 0.3.", "Both candidate generation and re-ranking models are trained using the BERT Adam optimizer (Kingma and Ba, 2015), with a linear warmup for 10% of the first epoch to a peak learning rate of $2 \times 10^{-5}$ and a linear decay from there until the learning rate approaches zero.", "Candidate generation models are trained for 200 epochs with a batch size of 256.", "Re-ranking models are trained for 4 epochs with a batch size of 2, and operate on the top 32 candidates returned by the generation model.", "Hyperparameters are chosen such that models can be run on a single NVIDIA V100 Tensor Core GPU with 32 GB RAM, and are not extensively tuned.", "All models have the same number of parameters except the ones with attribute-separators, which have 100 extra token embeddings (of size 768 each).", "Candidate generation Since the focus of our experiments is on re-ranking, we use a fixed candidate generation model for all experiments that combines the architecture of Wu et al. (2020) (Section 3) with [SEP]-separation to generate candidate strings.", "This model also has no knowledge of the test KB and is trained only once on the CoNLL-Wikidata dataset.", "It achieves a recall@32 of 91.25 when evaluated on the unseen TAC-KBP 2010 data.", "We evaluate the re-ranking model (Section 3) in several settings to answer the following questions:",
"1. Do the attribute-to-text functions (Section 4.1) generate useful string representations for arbitrary entities?", "Specifically, can these representations be used with the re-ranking model (Section 3) to link to the unseen $KB_{test}$?", "2. Do all three key components, attribute-separators (Section 4.1), attribute-shuffle, and attribute-OOV (Section 4.2), contribute equally to the final model?", "3. Does training on more than one KB with different schemas help models in more accurately linking to $KB_{test}$?", "4. Do improvements for generalizing to an unseen $KB_{test}$ also translate to scenarios where there is training data that also links to $KB_{test}$?", "For all experiments, we report the mean and standard deviation of the accuracy across five runs with different random seeds.", "Our primary experiments focus on the first two research questions and study the accuracy of the model that uses the re-ranking architecture from Section 3 with the three core components introduced in Section 4, viz. attribute-separators to generate string representations of candidates, along with attribute-OOV and attribute-shuffle for regularization.", "We compare this against two baselines without these components that use the same architecture and use concatenation and [SEP]-separation instead of attribute-separators.", "As a reminder, all models are trained as well as validated on CoNLL-Wikidata and evaluated on the completely unseen TAC-KBP 2010 test set.", "Results confirm that adding structure to the candidate string representations via [SEP] tokens leads to more accurate models compared to generating strings by concatenation (Table 3).", "Using attribute-separators instead of [SEP] tokens leads to an absolute gain of over 5%, and handling unseen attributes via attribute-OOV further increases the accuracy to 56.2%, a 7.1% increase over the [SEP] baseline.", "These results show that the attribute-separators capture meaningful information about attributes, even when only a small number of attributes from the training data (15) are observed during inference.", "Shuffling attribute-value pairs before converting them to a string representation using attribute-separators also independently provides an absolute gain of 3.5% over the model which uses attribute-separators without shuffling.", "Overall, models that combine attribute-shuffle and attribute-OOV are the most accurate, with an accuracy of 61.6%, which represents a 12% absolute gain over the best baseline model.", "Prior work (Raiman and Raiman, 2018; Cao et al., 2018; Wu et al., 2020; Févry et al., 2020) reports higher accuracies on the TAC data, but these are fundamentally incomparable with our numbers due to the simple fact that we are solving a different task, with three key differences: (1) Models in prior work are trained and evaluated using mentions that link to the same KB.", "On the contrary, we show how far we can go without such in-KB training mentions.", "(2) The test KB used by these works is different from our test KB.", "Each entry in the KB used by prior work simply consists of the name of the entity with a textual description, while each entity in our KB is represented via multiple attribute-value pairs.", "(3) These models exploit the homogeneous nature of the KBs and usually pre-train models on millions of mentions from Wikipedia.", "This is beneficial when the training and test KBs are Wikipedia or similar, but is beyond the scope of this work, as we build models applicable to arbitrary databases.", "An additional benefit of being able to link
to multiple KBs is the ability to train on more than one dataset, each of which links to a different KB with different schemas.", "While prior work has been unable to do so due to its reliance on knowledge of $KB_{test}$, this ability is more crucial in the settings we investigate, as it allows us to stack independent datasets for training.", "This allows us to answer our third research question.", "Specifically, we compare the [SEP]-separation baseline with our full model that uses attribute-separators, attribute-shuffle, and attribute-OOV.", "We ask whether the differences observed in Table 3 also hold when these models are trained on a combination of two datasets, viz. the CoNLL-Wikidata and the Wikia datasets, before being tested on the TAC-KBP 2010 test set.", "Adding the Wikia dataset to training increases the accuracy of the full model by 6%, from 61.6% to 66.8% (Table 4).", "In contrast, the baseline model observes a bigger increase in accuracy, from 49.1% to 62.6%.", "While the difference between the two models reduces, the full model remains more accurate.", "These results also show that the seamless stacking of multiple datasets allowed by our models is effective empirically.", "Finally, we investigate to what extent the components introduced by us help in linking when there is training data available that links to the inference KB, $KB_{test}$.", "We hypothesize that while attribute-separators will still be useful, attribute-OOV and attribute-shuffle will be less useful, as there is a smaller gap between training and test scenarios, reducing the need for regularization.", "For these experiments, models from Section 5.4 are further trained with increasing amounts of data from the TAC-KBP 2010 training set.", "A sample of 200 documents is held out from the training data as a validation set.", "The models are trained with the exact same configuration as the base models, except with a smaller constant learning rate of $2 \times 10^{-6}$, so as not to overfit on the small amounts of data.", "Accuracy improves as the amount of such in-KB training data increases (Table 5; the 0% results are the same as those in Table ...).", "As hypothesized, the smaller generalization gap between training and test scenarios makes the model with only attribute separators more accurate than the model with both attribute separators and regularization.", "Crucially, the model with only attribute separators is the most accurate model across the spectrum.", "Moreover, the difference between this model and the baseline model sharply increases as the amount of schema-aware data decreases (e.g., when using 13 annotated documents, i.e., 1% of the training data, we get a 9% boost in accuracy over the model that does not see any schema-aware data).", "These trends show that our models are not only useful in settings without any data from the target KB, but also in settings where limited data is available.", "Beyond the quantitative evaluations above, we further qualitatively analyze the predictions of the best model from Table 3 to provide insights into our modeling decisions and suggest avenues for improvement.", "First, we categorize all newly correct mentions, i.e.,
mentions that are correctly linked by the top model but incorrectly linked by the [SEP]-separation baseline, by the entity type of the gold entity.", "This type is one of person (PER), organization (ORG), geo-political entity (GPE), and a catch-all unknown category (UKN); this entity typing is present in the KB.", "This categorization reveals that the newly correct mentions represent about 15% of the total mentions of the ORG, GPE, and UKN categories, and as much as 25% of the total mentions of the PER category.", "This distributed improvement highlights that the relatively higher accuracy of our model is due to a holistic improvement in modeling unseen KBs across all entity types.", "Why does PER benefit more than other entity types?", "To answer this, we count the fraction of mentions of each entity type that have at least one column represented using attribute separators.", "This counting reveals that approximately 56-58% of mentions of type ORG, GPE, and UKN have at least one such column.", "On the other hand, this number is 71% for PER mentions.", "This suggests that the difference is directly attributable to more PER entities having a column that has been modeled using attribute separators, further highlighting the benefits of this modeling decision.", "To identify the shortcomings of our best model, we categorize 100 random mentions that are incorrectly linked by this model into six categories (demonstrated with examples in Table 6), inspired by the taxonomy of Ling et al. (2015).", "Under this taxonomy, a common error (33%) is predicting a more specific entity than that indicated by the mention (the city of Hartford, Connecticut, rather than the state).", "The reverse is also observed (i.e., the model predicts a more general entity), but far less frequently (6%).", "Another major error category (33%) is when the model fails to pick up the correct signals from the context and assigns a similarly named entity of a similar type (e.g.,
the river Mobile, instead of the city Mobile, both of which are locations).", "21% of the errors are cases where the model predicts an entity that is related to the gold entity, but is neither more specific nor more generic, but rather of a different type (Santos Football Club instead of the city of Santos).", "Errors in the last category occur when the model predicts an entity whose name has no string overlap with that of the gold entity or the mention.", "This likely happens when the signals from the context override the signals from the mention itself.", "The primary contribution of this work is a novel framework for entity linking against unseen target KBs with unknown schemas.", "To this end, we introduce methods to generalize existing models for zero-shot entity linking to link to unseen KBs.", "These methods rely on converting arbitrary entities represented using a set of attribute-value pairs into a string representation that can then be consumed by models from prior work.", "There is still a significant gap between models used in this work and schema-aware models that are trained on the same KB as the inference KB.", "One way to close this gap is by using automatic table-to-text generation techniques to convert arbitrary entities into fluent and adequate text (Kukich, 1983; McKeown, 1985; Reiter and Dale, 1997; Wiseman et al., 2017; Chisholm et al., 2017).", "Another promising direction is to move beyond BERT to other pre-trained representations that are known to better encode entity information (Zhang et al., 2019; Guu et al., 2020; Poerner et al., 2020).", "Finally, while the focus of this work is only on English entity linking, the challenges associated with this work naturally occur in multilingual settings as well.", "Just as we cannot expect labeled data for every target KB of interest, we also cannot expect labeled data for different KBs in different languages.", "In future work, we aim to investigate how we can port the solutions introduced here to multilingual settings as well as develop novel solutions for scenarios where the documents and the KB are in languages other than English (Sil et al., 2018; Upadhyay et al., 2018; Botha et al., 2020).", "The authors would like to thank colleagues from Amazon AI for many helpful discussions that shaped this work, and for reading and providing feedback on earlier drafts of the paper.", "They also thank all the anonymous reviewers for their helpful feedback." ]
[ "abstain", "method", "method", "result", "result", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "method", "method", "result", "abstain", "abstain", "objective", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "Fine-tuning neural networks is widely used to transfer valuable knowledge from high-resource to low-resource domains.", "In a standard fine-tuning scheme, source and target problems are trained using the same architecture.", "Although capable of adapting to new domains, pre-trained units struggle with learning uncommon target-specific patterns.", "In this paper, we propose to augment the target-network with normalised, weighted and randomly initialised units that beget a better adaptation while maintaining the valuable source knowledge.", "Our experiments on POS tagging of social media texts (Tweets domain) demonstrate that our method achieves state-of-the-art performances on 3 commonly used datasets.", "POS tagging is a sequence labelling problem, that consists on assigning to each sentence' word, its disambiguated POS tag ( e.g. , Pronoun, Noun) in the phrasal context in which the word is used.", "Such information is useful for higher-level applications, such as machine-translation (Niehues and Cho, 2017) or cross-lingual information retrieval (Semmar et al., 2006, 2008).", "One of the best approaches for POS tagging of social media text (Meftah et al., 2018a), is transfer-learning, which relies on a neural-network learned on a source-dataset with suffi-cient annotated data, then further adapted to the problem of interest ( target-dataset ).", "While this approach is known to be very effective (Zen-naki et al., 2019), because it takes benefit from pre-trained neurons, it has one main drawback by design.", "Indeed, it has been shown in computer-vision (Zhou et al., 2018a) that, when fine-tuning on scenes a model pre-trained on objects, it is the neuron firing on the white dog object that became highly sensitive to the white waterfall scene.", "Simply said, pre-trained neurons are biased by what they have learned in the source-dataset.", "This is Figure 1: Given a word representation x i , a BiLSTM ( ) models the sequence, and a FC layer ( ) performs classification.", "also the case on NLP (see experiments).", "Consequently, pre-trained units struggle with learning patterns specific to the target-dataset ( e.g. , wanna or gonna in the Tweets domain).", "This last is non-desirable, since it has been shown recently (Zhou et al., 2018b) that such specific units are important for performance.", "To overcome this drawback, one can propose to take benefit from randomly initialised units, that are by design nonbiased.", "However, it is common to face small target-datasets that contain too few data to learn such neurons from scratch.", "Hence, in such setting, it is hard to learn random units that fire on specific patterns and generalise well.", "In this article, we propose a hybrid method that takes benefit from both worlds, without their drawbacks.", "It consists in augmenting the source-network (set of pre-trained units) with randomly initialised units and jointly learn them.", "We call our method PretRand ( Pret rained and Rand om units) and illustrate it in Fig. 
"The main difficulty is forcing the network to consider the random units, because they have different behaviours than the pre-trained ones.", "Indeed, while the pre-trained units strongly fire discriminatively on many words, the random units do not fire on any word at the initial stage of fine-tuning.", "Therefore, random units do not significantly contribute to the computation of gradients and are thus slowly updated.", "To overcome this problem, we propose to independently normalise the pre-trained and random layers.", "This balances their range of activations and thus forces the network to consider both.", "Last but not least, we do not know which of the pre-trained and random units are best for every class-predictor, thus we propose to learn weighting vectors on top of each branch.", "Evaluation was carried out on 3 POS tagging Tweets datasets in a transfer-learning setting.", "Our method outperforms SOTA methods and significantly surpasses fairly comparable baselines.", "2.1 Base Model Given an input sentence $S = [w_1, \ldots, w_n]$ of $n$ successive tokens $w_i$, the goal of a POS tagger is to predict the POS tag $c_i \in \mathcal{C}$ of every $w_i$, with $\mathcal{C}$ being the tag-set of size $C$.", "Hence, for our base model, we used a common sequence labelling model which first computes for each token $w_i$ a word-level embedding (denoted $\Gamma_w$) and a character-level embedding using a biLSTM encoder ($\Gamma_c$), and concatenates them to get a final representation $x_i$.", "Second, it feeds the latter representation into a biLSTM feature extractor (denoted $\Phi$) that outputs a hidden representation, which is itself fed into a fully-connected (FC) layer (denoted $\Theta$) for classification.", "Formally, given $w_i$, the logits are obtained using: $\hat{y}_{w_i} = \Theta \circ \Phi \circ \Gamma(w_i)$, with $\Gamma$ being the concatenation of the outputs of $\Gamma_c$ and $\Gamma_w$ for $w_i$.", "In a standard fine-tuning scheme (Meftah et al., 2018b), $\Gamma$ and $\Phi$ are pre-trained on the source-task and $\Theta$ is randomly initialised.", "Then, the three modules are further jointly trained on the target-task by minimising a Softmax Cross-Entropy (SCE) loss using the SGD algorithm.", "As mentioned in the introduction, pre-trained neurons are biased by design, and thus limited.", "This motivated our proposal to augment the pre-trained branch with additional random units (as illustrated in Fig. 1).",
"To do so, theoretically one can add the new units in any layer of the base model.", "However, in practice, we have to make a trade-off between performance and the number of parameters (model complexity).", "Thus, given that deep layers are more task-specific than shallow ones (Peters et al., 2018; Mou et al., 2016), and that word embeddings (shallow layers) contain the majority of parameters, we choose to expand only the top layers.", "With this choice, we desirably increase the complexity of the model only by a factor of 1.02 compared to the base one.", "In terms of the layers expanded, we specifically add $k$ units to $\Phi$, resulting in an extra biLSTM layer $\Phi_r$ ($r$ for rand); and $C$ units to $\Theta$, resulting in an extra FC layer $\Theta_r$.", "Hence, for every $w_i$, the additional random branch predicts class-probabilities following: $\hat{y}^r_{w_i} = \Theta_r \circ \Phi_r(x_i)$ (with $x_i = \Gamma(w_i)$).", "Note that having two FC layers obviously outputs two predictions per class (one from the pre-trained FC, $\hat{y}^p_{w_i}$, and one from the random FC, $\hat{y}^r_{w_i}$), which thus need to be merged.", "Hence, to get the final predictions, we simply apply an element-wise sum between the outputs of both branches: $\hat{y}_{w_i} = \hat{y}^p_{w_i} \oplus \hat{y}^r_{w_i}$.", "As in the classical scheme, SCE is minimised, but here both branches are trained jointly.", "Nevertheless, while at the initial stage of fine-tuning the pre-trained units are strongly firing on many words, the random ones are firing very weakly.", "As stated in some computer-vision works (Liu et al., 2015; Tamaazousti et al., 2018), the latter setting causes an absorption of the weights, outputs and thus gradients of the random units by the pre-trained ones, which thus makes them useless in the end.", "We encountered the same problem with textual data on the POS-tagging problem.", "Indeed, as illustrated in the left plot of Fig. 2, at the end of training, the distribution of the random units' weights is still absorbed (closer to zero) by that of the pre-trained ones.", "To prompt the two classifiers to work cooperatively, we normalise (using an $\ell_p$-norm) both of them independently before merging them.", "Formally, we apply $N_p(x) = \frac{x}{\|x\|_p}$ on $\hat{y}^p_{w_i}$ and $\hat{y}^r_{w_i}$.", "The normalisation desirably solves the weight-absorption problem, since at the end of the training the distributions of the pre-trained and random weights become very similar (right of Fig. 2).",
"Furthermore, we have observed that despite the normalisation, the performances of the pre-trained classifiers were still much better than those of the randomly initialised ones.", "Thus, to make them more competitive, we propose to start by optimising only the randomly initialised units while freezing the pre-trained ones, and then launch the joint training.", "This is called random++ in the following.", "2.4 Learnable Weighting Vectors Back to the extra predictor (FC layer of the random branch), it is important to note that both branches are equally important for making a decision for every class, i.e., no weight is applied on the dimensions of $\hat{y}^p_{w_i}$ and $\hat{y}^r_{w_i}$.", "However, this is sub-optimal since we, a priori, do not know which kind of units (random or pre-trained) is better for making a decision.", "Consequently, we propose to weight the contribution of the predictions for each class.", "To this end, instead of simply performing an element-wise sum between the random and pre-trained predictions, we first weight each of them with learnable weighting vectors, then compute a Hadamard product with their associated normalised predictions; the learnable vectors $u \in \mathbb{R}^C$ and $v \in \mathbb{R}^C$, respectively corresponding to the pre-trained and random branch, are initialised with 1-values and are learned by SGD.", "Formally, the final predictions are computed following: $\hat{y}_{w_i} = u \odot N_p(\hat{y}^p_{w_i}) \oplus v \odot N_p(\hat{y}^r_{w_i})$.", "In the word-level embeddings, tokens are lower-cased, while the character-level component still retains access to the capitalisation information.", "We set the character embedding dimension at 50, the dimension of hidden states of the character-level biLSTM at 100, and used 300-dimensional word-level embeddings.", "The latter were pre-loaded from publicly available GloVe vectors pre-trained on 42 billion words from a web crawl and containing 1.9M words (Pennington et al., 2014).", "(Table 1, number of tokens in every used dataset: TPoS, train 10,652, dev 2,242, test 2,291; ArK, train 26,594, dev n/a, test 7,707; TweeBank, train 24,753, dev 11,742, test 19,112.)", "Note that these embeddings are also updated during fine-tuning.", "For the biLSTM (token-level feature extractor), we set the number of units of the pre-trained branch to 200 and experimented our approach with $k$ added random units, with $k \in \{50, 100, 150, 200\}$.", "For the normalisation, we used the $\ell_2$-norm.", "Finally, in all experiments, training was performed using SGD with momentum and mini-batches of 8 sentences.", "Evidently, all the hyperparameters have been cross-validated.", "For the source-dataset, we used the Wall Street Journal (WSJ) part of the Penn TreeBank (PTB), a large English dataset containing 1.2M+ tokens from the newswire domain annotated with the PTB tag-set.", "Regarding the target-datasets, we used three Tweets datasets: TPoS (Ritter et al., 2011), annotated with 40 tags; ArK (Owoputi et al., 2013), containing 25 coarse tags; and the recent TweeBank (Liu et al., 2018), containing 17 tags (PTB universal tag-set).", "The number of tokens in the datasets are given in Table 1.", "To assess the POS tagging performances of our PretRand model, we compared it to 5 baselines: Random-200 and Random-400: randomly initialised neural models with 200 and 400 biLSTM units; Fine-tuning: pre-trained neural model, fine-tuned with the standard scheme; Ensemble (2 rand): averaging the predictions of two base models randomly initialised and learned independently (with different random initialisations) on Tweets datasets; and Ensemble (1
pret + 1 rand): same as the previous but with one model pre-trained on WSJ and the other randomly initialised.", "We also compared it to the 3 best SOTA methods: Derczynski et al. (2013) (GATE) is a model based on HMMs with a set of normalisation rules, external dictionaries and lexical features.", "They experiment it on TPoS, with WSJ and 32K tokens from the NPS IRC corpus.", "They also used 1.5M additional training tokens annotated by vote-constrained bootstrapping (GATE-bootstrap).", "Owoputi et al. (2013) proposed a model based on a first-order Maximum Entropy Markov Model (MEMM) with greedy decoding and using Brown clustering and careful hand-engineered features.", "(Table 2, accuracies by method, with columns Method, #params, TPoS Dev, TPoS Test, ArK Test, TweeBank Dev, TweeBank Test, Avg: GATE (Derczynski et al., 2013) n/a, 89.37, 88.69, n/a, n/a, n/a, n/a; GATE-bootstrap (Derczynski et al., 2013) n/a, n/a, 90.54, n/a, n/a, n/a, n/a; ARK (Owoputi et al., 2013) n/a, n/a, 90.40, 93.2, n/a, 94.6, n/a; TPANN (Gui et al., 2017) n/a, 91.08, 90.92, 92.8, n/a, n/a, n/a; Random-200, x1, 88.32, 87.76, 90.67, 91.20, 91.56, 89.90; Random-400, x1, ...)", "Recently, Gui et al. (2017) proposed TPANN, which uses adversarial training to leverage huge amounts of unlabelled Tweets.", "From the results given in Table 2, one can first see that our approach outperforms the SOTA and baseline methods on all the datasets.", "More interestingly, PretRand significantly outperforms the popular fine-tuning baseline by +1.4% absolute point on average and is better on all classes (see per-class improvement in Fig. 4).", "PretRand also outperforms the challenging Ensemble Model by a large margin (+2.2%), while using far fewer parameters.", "(Figure 4: sorted class-accuracy improvement (%) on TweeBank of PretRand compared to fine-tuning.)", "This clearly highlights the difference of our method with ensemble methods and the importance of having a shared word representation, as well as of our normalisation and learnable weighting vectors during training.", "A key asset of PretRand is that it uses only 0.02% more parameters compared to the fine-tuning baseline.", "An interesting experiment is to evaluate the gain in performance of PretRand compared to fine-tuning according to different target-dataset sizes.", "From the results in Fig.
3, PretRand desirably has a bigger gain with bigger target-task datasets, which clearly means that the more target training data, the more interesting our method will be.", "To assess the contribution of the different components of PretRand, we performed an ablation study.", "Specifically, we successively ablated the main components of PretRand, namely the learnable vectors (learnVect), the longer training for random units (random++) and the normalisation ($\ell_2$-norm).", "From the results in Table 3, we can observe that the performances are only marginally better than standard fine-tuning when ablating the three components from PretRand.", "More importantly, adding each of them successively makes the performances significantly better, which highlights the importance of every component.", "Here our goal is to highlight that, as in (Zhou et al., 2018a), pre-trained units can be biased by", "the standard fine-tuning scheme.", "To do so, we follow (Tamaazousti et al., 2017) and analyse the units of $\Phi$ (the biLSTM layer) before and after fine-tuning.", "Specifically, we compute the Pearson's correlation between all the units of the layer before and after fine-tuning.", "Here, a unit is represented by the random variable being the concatenation of its output activations over all the validation samples of the TweeBank dataset.", "From the resulting correlation matrix illustrated in Fig. 5, one can clearly observe the white diagonal, highlighting the fact that every unit after fine-tuning is more correlated with itself before fine-tuning than with any other unit.", "This clearly confirms our initial motivation that pre-trained units are highly biased to what they have learned in the source-dataset, making them limited in learning patterns specific to the target-dataset.", "Additionally, we visualise in Fig. 6 a concrete example of a biased neuron when transferring from the newswire to the Tweets domain.", "Specifically, we show the top-10 words activating unit-169 of $\Phi$ (from the standard fine-tuning baseline), before fine-tuning (at this stage, the model is trained on the source-dataset WSJ) and during fine-tuning on the TweeBank dataset.", "We can observe that this unit is highly sensitive to proper nouns (e.g., George and Washington) before fine-tuning, and to words with a capitalised first letter, whether the word is a proper noun or not (e.g., Man and Father), during fine-tuning on the TweeBank dataset.", "Indeed, we found that most tokens with an upper-cased first letter are mistakenly predicted as proper nouns (PROPN) in the standard fine-tuning scheme.", "In fact, in standard English, inside sentences, only proper nouns start with an upper-cased letter; thus, the fine-tuned pre-trained model fails to slough off this pattern, which is not always respected in Tweets.", "Unique units emerge in the random branch Finally, we highlight the ability of randomly initialised units to learn patterns specific to the target-dataset and not learned by the pre-trained ones because of their bias problem.", "To do so, we visualise unique units, i.e., random units having a correlation lower than 0.4 with pre-trained ones, emerging in the random branch.", "While only one is shown in Fig.
7, many unique units have been learned by the random branch of our PretRand model: 37.5% of the 200 random units have a correlation lower than 0.4 with the pre-trained ones.", "Regarding unit-99, it is highly discriminative for the tokens na, ta and n't.", "Indeed, in TweeBank, words like gonna (going to) are tokenized into two tokens, gon and na, with the latter annotated as a particle and the former as a verb.", "Importantly, not even one unit from the standard fine-tuning scheme has been found firing on the same important and target-dataset-specific pattern.", "In this paper, we introduced a method to improve fine-tuning using 3 main ideas: adding random units and jointly learning them with the pre-trained ones; normalising the activations of both to balance their different behaviours; and applying learnable weights on both predictors to let the network learn which of the random or pre-trained ones is better for every class.", "We have demonstrated its effectiveness on domain adaptation from the newswire domain to three commonly used Tweets datasets for POS tagging." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "objective" ]
[ "Translated texts have been used for malicious purposes, i.e., plagiarism or fake reviews.", "Existing detectors have been built around a specific translator (e.g., Google) but fail to detect a translated text from a strange translator.", "If we use the same translator, the translated text is similar to its round-trip translation, which is when text is translated into another language and translated back into the original language.", "However, a round-trip translated text is significantly different from the original text or a translated text using a strange translator.", "Hence, we propose a detector using text similarity with round-trip translation (TSRT).", "TSRT achieves 86 .", "9% accuracy in detecting a translated text from a strange translator.", "It outperforms existing detectors ( 77 . 9% ) and human recognition ( 53 . 3% ).", "A reader may misunderstand the original meaning of a translated text 1 .", "For example, Facebook translated good morning into attack them , leading to an arrest 2 .", "Adversaries can use a translator for malicious tasks such as round-trip translation used in plagiarism (Jones and Sheridan, 2015) to avoid human recognition or in adversarial text (Iyyer et al., 2018) to fool AI.", "Existing work has investigated the detection of translated texts in various approaches.", "The parse tree approach (Chae and Nenkova, 2009; Li et al., 2015) exploits text structure.", "The N gram approach (Aharoni et al., 2014; Arase and Zhou, 2013) estimates text fluency.", "The text complexity approach uses complex words (Nguyen-1 When we mention a translated text, translation, translator, and Google, all are related to machine translation systems 2 www.theguardian.com/technology/2017/oct/24/facebookpalestine-israel-translates-good-morning-attack-them-arrest Son and Echizen, 2017) and phrases (Nguyen-Son et al., 2017).", "The text coherence approach is based on matching similar words on a paragraph level (Nguyen-Son et al., 2018, 2019b).", "A three-layer CNN (Riley et al., 2020) is trained on either one-way or round-trip translated texts.", "Our previous work (Nguyen-Son et al., 2019a) combined round-trip translation with BLEU scores.", "All these approaches fail to detect a text translated by another translator or from a different language.", "Motivation The first translation round induces a low similarity between the translated and original texts, whereas the extent of similarity increases in later rounds (Vanmassenhove et al., 2019).", "Let us consider an example in Fig. 
"We randomly selected an English text $t$ from an English-Russian pair (this pair belongs to a Commentary News corpus (Barrault et al., 2019); its English text reads: The actions of the chief banker, their every word or hint suddenly take on tremendous significance.); the Russian text was translated into English by Google, called $t'_{(Go, RU \to EN)}$.", "We measured the similarity between a text and its round-trip translation using the minimum edit distance (MED) (Levenshtein, 1966).", "The translated text $t'$ is the result of using the translator once, and the similarity between $t'$ and its round-trip translation $t'_{(Go, RU \to EN \to RU)}$ is high (MED = 1).", "Otherwise, the similarity between the original text $t$ and $t'_{(Go, RU \to EN \to RU)}$ is low (MED = 5).", "Based on the difference in similarity, we can distinguish the original from the translated text.", "In reality, a translator's source language is often unknown.", "The similarity decreases when using another language.", "For example, the similarity between $t'_{(Go, RU \to EN)}$ translated from Russian and its round-trip translation $t'_{(Go, RU \to EN \to DE \to EN)}$ from German is low (MED = 6).", "It is close to the similarity in the original pair $\{t, t_{(Go, EN \to DE \to EN)}\}$ (MED = 4).", "A change in a translator induces a similar phenomenon.", "We thus detect the translator and the language before detecting the translated text.", "Contributions We propose a novel translation detector that utilizes text similarity with round-trip translation (named TSRT).", "This detector can be used as a warning to prevent the risk of translated texts in a certain region where people are familiar with few languages and translators.", "First, we create round-trip translations from multiple configuration tuples of translator and language.", "Second, we use each tuple's round-trip translations to train individual subclassifiers.", "Then, we use the tuple with the highest similarity between a suspicious text and its round-trip translation to choose a suitable subclassifier.", "Finally, we use the subclassifier to determine if the text is an original or translated text.", "Experiments demonstrate that TSRT efficiently detects different kinds of translated texts (round-trip and one-way) when the translator and the language are changed.", "Training Phase First, we collect original texts $T_i$ and translated texts $T'_i$, which are translated with a configuration tuple $i = \{language_i, translator_i\}$ (see Fig. 2).", "Second, we generate round-trip translations of $T_i$ and $T'_i$, respectively.", "Finally, $T_i$ and $T'_i$ are combined with their round-trip translations to train a subclassifier for tuple $i$ by fine-tuning the BERT model (Devlin et al., 2019).", "We repeat the procedure for the other subclassifiers.", "In Fig. 1, $t$, $t'$, and their round-trip translations $t_{(Go,RU)}$ and $t'_{(Go,RU)}$ belong to $T$, $T'$, $T_{(Go,RU)}$, and $T'_{(Go,RU)}$, respectively, with the configuration tuple (Go, RU).", "Testing Phase For a suspicious text $s$, we aim to determine if $s$ is an original or a translated text.", "First, we generate round-trip translated texts $s_i$ with all configuration tuples from the training phase.", "Next, we calculate the similarity between $s$ and each $s_i$ using the minimum edit distance (MED).", "Finally, we process $s$ with the subclassifier associated with the best similarity, corresponding to the lowest MED.", "In the case of $t$ in Fig.
1, two round-trip translations $t_{(Go,RU)}$ and $t_{(Go,DE)}$ are generated, with respective similarities of MED = 1 for (Go, RU) and MED = 6 for (Go, DE).", "The subclassifier (Go, RU) associated with the lower MED is chosen for classifying $t$.", "Round-trip translation detection: We collected 11,748 distinct movie reviews from the Sentiment Treebank (Socher et al., 2013) (19.1 words/review).", "We chose 9,000/1,000 reviews for training/developing and used the remaining pairs for testing.", "This ratio is reused in further experiments.", "We used the original reviews to generate round-trip translations by using configuration tuples of two translators and three languages (Table 1).", "In addition to Google, we chose Fairseq (Ng et al., 2019), the winner in the WMT'19 shared task (Fairseq is only supported for Russian and German, so we cannot use it for Japanese).", "We compare TSRT with existing methods using the accuracy metric (accuracy and F-score are equivalent in this balanced corpus).", "BERT and TSRT have the same optimized hyperparameters (we optimize hyperparameters with recommended values from BERT: maximum size of 128, batch size of 32, learning rate of 2e-5, and 3 epochs; since the development accuracy is equivalent to the test accuracy, we use the test accuracy for further experiments).", "The first four methods do not work well with this parallel corpus.", "The round-trip translation (Nguyen-Son et al., 2019a) based on BLEU and BERT (Devlin et al., 2019) improves by approximately 10%.", "TSRT provides the highest performance, as it captures round-trip information using deep learning.", "We analyzed the text lengths of the top three detectors on the whole (Go,RU) test set (Fig. 3).", "BERT surpasses round trips in only short length ranges, while TSRT outperforms the others in all ranges.", "Human recognition: We selected 100 random reviews from the test set for human recognition (the survey is available at https://forms.gle/L8EkZxXuEH9Co3UB7).", "We sent them to 14 raters (6 were native English speakers), who decided whether each review was an original or a translated text.", "The average accuracy was 53.3% (55.0% for the native speakers and 52.0% for the nonnative speakers), which was close to random.", "The low Fleiss' $\kappa = 0.13$ implied slight agreement in the native speakers' ratings.", "For nonnative speakers, $\kappa$ was even lower ($\kappa = 0.07$).", "This indicates that the translated texts were indistinguishable by humans.", "One-way translation detection: We collected parallel sentences from the Commentary News corpus (Barrault et al., 2019).", "We randomly selected 11,748 pairs with 21.9 words on average per sentence (same as the movie reviews).", "We experimented with two languages (Russian and German) and two translators (Google and Fairseq) (see Fig. 4).",
"Since one-way translation is more challenging to detect, the accuracy is decreased for all methods.", "Among the top three detectors, while BERT and round-trip translation yield unstable results, TSRT remains consistent.", "Comparison: Humans are familiar with limited languages and translators.", "Normally, they use their mother tongue and English (the international language) and translate by choosing a popular translator such as Google or an open-source translator such as Fairseq.", "Table 2 presents the translation detection with translator and language changes.", "While the existing methods are trained with (Go,DE) or (Fa,RU), TSRT is trained on (Go,DE)+(Go,RU) or (Fa,RU)+(Go,RU), respectively.", "We tested all of them in (Go,RU).", "Our results showed that the existing methods were significantly downgraded in terms of accuracy, but TSRT remained stable.", "Ablation Studies: We trained TSRT on various configuration tuples and tested it on (Go,RU) (Table 3).", "(Table 3, TSRT's results with individual and combined configuration tuples of translators and languages, as accuracy with reduction in parentheses: (Go,RU) 90.2 (-00.0); (Fa,RU) 70.2 (-20.0); (Go,DE) 73.4 (-16.8); (Fa,DE) 66.6 (-23.6); (Go,RU)+(Fa,RU) 86.9 (-03.3); (Go,RU)+(Go,DE) 86.6 (-03.6); (Go,RU)+(Fa,RU)+(Go,DE)+(Fa,DE) 81.5 (-08.7).)", "Training TSRT on the combination with the correct configuration tuple (Go,RU) boosts the performance.", "Configuration identification: We identify the translator and language on round-trip translation detection, while the one-way approach obtains similar results.", "For the translator change (the second column of Table 4), we used (Go,RU) and (Fa,RU).", "For the language change (the third column), we used (Go,RU) and (Go,DE).", "All were tested on (Go,RU).", "We used BERT as the identification baseline.", "We replaced MED with BLEU in TSRT.", "All the metric-based approaches outperformed the baseline.", "The translator detection outperformed the language detection.", "While a specific translator often uses the same architecture for all languages, various translators have different architectures.", "Therefore, a translator change was more apparent than a language change.", "MED (designed for structural similarity) was better than BLEU (designed for corpus levels).", "This paper proposed a one-way and round-trip translation detection mechanism using text similarity with round-trip translation (TSRT), which is robust to language and translator changes.", "First, we trained subclassifiers on specific languages/translators using round-trip translation.", "(Table 4, configuration identification with columns Method, Translator, Language; BERT 63. ...)", "Then, we identified the language and translator using the highest similarity between the suspicious and round-trip translation texts.", "Finally, we chose the corresponding subclassifier for translation detection.", "The evaluation results show that TSRT outperforms other methods, with an accuracy of up to 90.2%.", "Moreover, TSRT could also identify the original translator and translation language with 93.3% and 85.6% accuracy, respectively.", "In future work, we will exploit saturation after repeatedly using the same AI system to detect other artificial texts such as fake COVID-19 news.", "We would like to thank the anonymous reviewers for providing useful comments." ]
[ "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other" ]
[ "Text-level discourse rhetorical structure (DRS) parsing is known to be challenging due to the notorious lack of training data.", "Although recent top-down DRS parsers can better leverage global document context and have achieved certain success, the performance is still far from perfect.", "To our knowledge, all previous DRS parsers make local decisions for either bottom-up node composition or top-down split point ranking at each time step, and largely ignore DRS parsing from the global view point.", "Obviously, it is not sufficient to build an entire DRS tree only through these local decisions.", "In this work, we present our insight on evaluating the pros and cons of the entire DRS tree for global optimization.", "Specifically, based on recent well-performing top-down frameworks, we introduce a novel method to transform both gold standard and predicted constituency trees into tree diagrams with two color channels.", "After that, we learn an adversarial bot between gold and fake tree diagrams to estimate the generated DRS trees from a global perspective.", "We perform experiments on both RST-DT and CDTB corpora and use the original Parseval for performance evaluation.", "The experimental results show that our parser can substantially improve the performance when compared with previous state-of-the-art parsers.", "As the main linguistic theory on discourse rhetorical structure (DRS), Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) describes an article as a discourse tree (DT).", "As illustrated in Figure 1, each leaf node of the tree corresponds to an Elementary Discourse Unit (EDU), and relevant leaf nodes are connected by relation and nuclearity ( nucleus (N) or satellite (S)) tags to form high-layer discourse units (DUs), where the Corresponding author (cid:70) [ e 1 : In fact,] [ e 2 : Budget indicated] [ e 3 : it saw some benefit] [ e 4 : to staying involved in these programs,] [ e 5 : in which renters earn frequent-flier miles] [ e 6 : and fliers can get car-rental discounts.] 
nucleus is considered more important than the satellite.", "(Figure 1: an example RST-style discourse tree for wsj_2394, with EDUs [e1: In fact,] [e2: Budget indicated] [e3: it saw some benefit] [e4: to staying involved in these programs,] [e5: in which renters earn frequent-flier miles] [e6: and fliers can get car-rental discounts.], connected by Same-Unit (NN), Attribution (NS), List (NN), and two Elaboration (NS) relations.)", "Since the RST structure can well describe the organization of an article, it has been playing a central role in various downstream tasks like summarization (Xu et al., 2020), text categorization (Ji and Smith, 2017), and so on.", "With the release of various discourse corpora, text-level DRS parsing has been drawing more and more attention in the last decade.", "However, since corpus annotation is usually time-consuming, existing DRS corpora are quite limited in size.", "For example, the English RST-DT (Carlson et al., 2001) corpus only contains 385 WSJ articles, and the Chinese CDTB (Li et al., 2014b) corpus only contains 500 newswire articles.", "In this situation, previous studies usually rely on multifarious hand-engineered features (Hernault et al., 2010; Feng and Hirst, 2014; Ji and Eisenstein, 2014; Li et al., 2014a, 2016; Braud et al., 2017).", "And all these systems perform DRS parsing in a bottom-up fashion.", "Until recently, some researchers turned to top-down DRS parsing (Lin et al., 2019; Zhang et al., 2020; Kobayashi et al., 2020) to explore the potential capabilities of data-driven models.", "Nevertheless, text-level DRS parsing is still challenging and worthy of in-depth exploration.", "Theoretically, in supervised learning, annotated data corpora can provide neural models with specific learning objectives, and the corpus size limitation will weaken the learning of these goals.", "To mitigate this problem, we researchers need", "(i) an efficient model to better learn from the limited data and", "(ii) more high-quality training objectives to enhance the model learning.", "Existing studies on text-level DRS parsing show that, compared with bottom-up DRS parsers, recent top-down frameworks can better leverage global document context and have achieved promising results in text-level DRS parsing (Zhang et al., 2020; Kobayashi et al., 2020).", "All previous studies produce their DRS parsers with local decisions made at each time step for either bottom-up node composition or top-down split point selection (Figure 2", "(a)), and no global decisions are made for the entire DRS structure (Figure 2", "(b)).", "Therefore, it is difficult for them to achieve global optimization.", "Although some studies (Braud et al., 2017; Mabona et al., 2019) leverage beam-search to traverse the solution space to find the optimal parsing route, the algorithms are time-consuming to some extent.", "Considering the above-mentioned status quo, in this work, we study a global optimization method based on the well-performing top-down parsers.", "For model structure, we take the top-down parser of Zhang et al.
(2020) as our baseline system and make some improvements to it.", "For global optimization, we first utilize a novel strategy to transform both gold standard and predicted DRS trees into tree diagrams with two color channels.", "After that, an LSGAN-based adversarial bot is structured between gold and fake tree diagrams as an examiner for global estimation and optimization.", "Experimental results on the RST-DT and CDTB corpora show that our approaches are effective.", "Previous studies on DRS parsing mainly fall into two categories: bottom-up and top-down frameworks.", "For the first category, early studies on DRS parsing heavily relied on hand-crafted features and linguistic characteristics (Hernault et al., 2010; Joty et al., 2013; Feng and Hirst, 2014).", "During the past decade, more and more researchers turned to data-driven approaches, and some effective strategies were proposed to adapt to the small-scale data corpora.", "Among these studies, (Ji and Eisenstein, 2014; Li et al., 2014a, 2016; Mabona et al., 2019) used some trivial features as auxiliaries in their data-driven systems; Braud et al. (2016; 2017) harnessed task supervision from related tasks, alternative views on discourse structures, and cross-lingual data to alleviate the data insufficiency problem; Wang et al. (2017) introduced a two-stage parser to first parse a naked tree structure and then determine rhetorical relations for different discourse levels to mitigate data sparsity; Yu et al. (2018) employed both syntax information and discourse boundaries in their transition-based system and achieved good performance.", "For the second category, some researchers (Lin et al., 2019; Liu et al., 2019; Zhang et al., 2020; Kobayashi et al., 2020) turned to top-down frameworks to tap the potential capabilities of data-driven models.", "Among them, (Lin et al., 2019; Liu et al., 2019) have achieved certain success in sentence-level DRS parsing.", "Nevertheless, due to the long-distance dependency over the discourse, text-level DRS parsing remains challenging.", "To alleviate this problem, Zhang et al. (2020) proposed a top-down architecture tailored for text-level DRS parsing.", "Kobayashi et al. (2020) used contextualized word representations and proposed to parse a document in three granularity levels for good performance.", "In the past decade, GANs have achieved great progress in NLP (Wu et al., 2019; Elazar and Goldberg, 2018; Chen and Chen, 2019; Zou et al., 2020).", "However, to our knowledge, there is still no research on adversarial learning in DRS parsing so far.", "In this work, we explore adversarially training a discriminator to estimate the quality of the entire DRS tree for global optimization.", "Notably, we propose to transform each DRS tree into a continuous tree diagram, and thus our adversarial method does not suffer from the discrete data problem.", "In this section, we give a brief introduction to our baseline system, the top-down parser of Zhang et al. (2020).", "Hierarchical Split Point Encoding.", "For split point representation (the split position between any two neighboring EDUs is called the split point), Zhang et al. (2020) introduced a hierarchical RNN-CNN architecture in their paper.", "Firstly, they use an attention-based GRU encoder to encode each EDU, obtaining $e_i$.", "Then, the obtained EDU vectors are fed into another BiGRU for context modeling, as shown in Figure 3.", "Next, a CNN net with a window size of 2 and a stride size of 1 is built for each window of EDUs in the discourse for split point encoding.", "To our knowledge, Zhang et al.
"Since the dummy split points do not participate in the split point selection process, they could be redundant.", "Here, we try to simplify the parsing procedure by discarding the dummy split points, as shown in Figure 3.", "Following previous work (Yu et al., 2018; Kobayashi et al., 2020), we also concatenate the sentence- and paragraph-level boundary feature vectors to the representation of split points to enhance the encoder model.", "(Figure 3: Neural architecture of the encoder-decoder.)", "Top-Down Split Point Ranking.", "After obtaining split point representations, an encoder-decoder is used to rank the split points, as shown in Figure 3.", "During encoding, the previously obtained split point vectors are taken as input to the BiGRU encoder, yielding $H_0, \ldots, H_{n-2}$.", "During decoding, a uni-directional GRU with an internal stack is used to control the split point ranking process.", "Initially, the stack contains only one element, i.e., the indexes of the boundary split points in the discourse.", "Notably, since we do not add dummy split points in this parser, we allow patterns like ( , ) to appear in the stack.", "At the $j$-th step, the tuple $(B, E)$ is popped from the stack, and the concatenation $c_j = (H_B; H_E)$ is fed into the decoder to obtain $d_j$.", "After that, a biaffine function (Dozat and Manning, 2017) is built between the encoder and decoder outputs for split point ranking.", "Different from Zhang et al. (2020), all split points in the interval $[B, E]$ are selectable in this work.", "At step $j$, we calculate the attention score between $H_i$ and $d_j$ as: $s_{j,i} = H_i^{\top} W d_j + U H_i + V d_j + b$ (1), where $W$, $U$, $V$, and $b$ are model parameters and $s_{j,i} \in \mathbb{R}^k$ denotes the score of the $i$-th split point over different categories (for split point ranking, $k$ equals 1).", "With this attention function, at each time step the split position with the highest score is selected as the split point, and the original text span is split into two adjacent text spans.", "Meanwhile, newly generated text spans with unselected split points are pushed onto the stack for the following steps, as shown in Figure 3.", "In this way, a DRS tree is built after 5 iterations with the split points $(1, 0, 2, 3, 4)$ detected in turn.", "To our knowledge, Zhang et al. (2020) use three biaffine classifiers in their parser for structure, nuclearity, and relation prediction, respectively.",
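"A minimal sketch of the biaffine scoring function in Eq. (1) could look as follows (the tensor shapes are assumptions for illustration):",
```python
import torch
import torch.nn as nn

class BiaffineScorer(nn.Module):
    """Sketch of Eq. (1): s_{j,i} = H_i^T W d_j + U H_i + V d_j + b."""
    def __init__(self, enc_dim: int, dec_dim: int, k: int = 1):
        super().__init__()
        self.W = nn.Parameter(torch.empty(k, enc_dim, dec_dim))
        self.U = nn.Linear(enc_dim, k, bias=False)
        self.V = nn.Linear(dec_dim, k, bias=False)
        self.b = nn.Parameter(torch.zeros(k))
        nn.init.xavier_uniform_(self.W)

    def forward(self, H: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
        # H: (num_splits, enc_dim); d: (dec_dim,) -> scores: (num_splits, k)
        bilinear = torch.einsum("ie,keo,o->ik", H, self.W, d)
        return bilinear + self.U(H) + self.V(d) + self.b
```
"At each decoding step, applying a log-softmax over the resulting scores and taking the highest-scoring position realizes the split point selection described above.",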
"Considering the differences between the three learning objectives, using three independent classifiers could weaken the Full performance.", "To alleviate this problem, we combine nuclearity and relation tags into N-R tags and only use two classifiers for DRS parsing.", "Therefore, for N-R prediction, the category number $k$ equals 41 and 46 for the RST-DT and CDTB corpus, respectively.", "This section introduces the proposed adversarial learning method, which consists of two parts: the graphical representation of gold and fake DRS trees and the adversarial model learning process.", "In this study, we aim to learn from the entire DRS tree to optimize our model from a global perspective.", "Usually, a computer can represent DRS trees in two ways: symbolic description or graphical representation.", "Since tree diagrams can reflect the structural features more intuitively and are easy for machines to understand, we explore graphical representation of DRS trees in this work.", "(Figure 4: Graphical representation of DRS structure for adversarial learning of text-level DRS parsing.)", "For gold standard trees, we propose to transform each tree into multi-pattern matrices, which are similar to a low-resolution image with two color channels (i.e., the structure ($ST$) and nuclearity-relation ($NR$) channels).", "Formally, given a DRS tree of height $m$ with $n$ split points, each split point corresponds to a specific non-leaf node in the tree, and we construct two matrices, $X_{ST}$ and $X_{NR}$, of size $m \times (n+2)$ corresponding to the two color channels, as shown in Figure 4.", "(i) For the ST channel, all the elements in the matrix $X_{ST}$ are initialized to -2 (we set these non-node positions to -2 for two reasons: first, we apply a log-softmax function to the attention weights for split point ranking, whose outputs range over $(-\infty, 0]$; second, -2 distinguishes non-node positions from the leaf nodes marked with -1).
With the upper left corner of the matrix as the origin of the coordinate axis, given the split point $j$ at the $i$-th tree layer (top-down direction), we directly set the element at $(i-1, j+1)$ to zero.", "Besides, if the left span of the split point is an EDU, we set the element at $(i, j)$ to -1, and the right span is processed in a similar way.", "With this method, we can recursively construct the tree diagram from top to bottom.", "Additionally, some EDU positions are actually shared in the matrix, and this does not affect the understanding of these nodes.", "For the example in Figure 4, although $e_2$ and $e_3$ share the same position in the ST channel, the following two patterns in the matrix can still reveal an accurate representation of each node: $N_1: \begin{bmatrix} 0 & -2 \\ -2 & -1 \end{bmatrix} \quad N_2: \begin{bmatrix} -2 & 0 \\ -1 & -2 \end{bmatrix}$ (2)", "(ii) For the NR channel, we set the positions representing non-leaf nodes to specific N-R labels, the positions of leaf nodes to 1, and other non-node positions to zero.", "For the automatically parsed trees, we directly use our model outputs to build the tree diagram with two color channels, $X'_{ST}$ and $X'_{NR}$.", "The two matrices of size $m \times (n+2)$ are initialized with zeros.", "(i) For the ST channel, as stated before, a set of attention weights is assigned to the encoder outputs during pointing, and a split point is selected according to the weights.", "Obviously, each split point corresponds to a group of attention weights (after log-softmax).", "Therefore, we directly add the $n$-dimensional attention weights of each split point in the $i$-th tree layer (top-down direction) to the $i$-th row of $X'_{ST}$.", "Notably, the first and last columns of the matrices are actually placeholders initialized with unlearnable scalars representing leaves or non-node positions, so we only add the split point attention weights to columns 1 through $n$ of each row.", "(ii) For the NR channel, we simply replace the elements corresponding to split points in $X'_{ST}$ with predicted N-R labels, and the other elements are kept the same as in $X_{NR}$ (to map the attention score $s_{j,i} \in \mathbb{R}^k$ to a specific N-R label without the non-differentiable argmax function, we use $L_{j,i} = F_{\mathrm{sigmoid}}(w_l s_{j,i} + b_l) \cdot K$, where $K$ is the number of N-R labels and $L_{j,i} \in \mathbb{R}^1$ is the learnable N-R label).", "In other words, only the replaced elements in the matrix $X'_{NR}$ are learnable, while the other positions serve as static features in the image.", "In this way, the model outputs are also abstracted as a tree diagram with two color channels.", "Through the above methods, we achieve graphical representations for both gold standard and automatically predicted DRS trees.", "This graphical representation provides our model with a global perspective, which makes the global optimization (Subsection 4.2) of DRS parsing possible.", "For model learning, we have two goals:", "(i) learning of DRS parsing at each time step for local optimization and", "(ii) learning an adversarial bot to evaluate the pros and cons of the entire tree for global optimization.", "For the first goal, we use two negative log-likelihood loss terms to optimize the parsing model.",
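"To make the two-channel construction above concrete, here is a NumPy sketch of building the gold ST channel; the nested tree interface, the placement of right-span leaves, and the assumption that children sit one layer below their parent are our own illustrative guesses, since only the left-span case is spelled out above:",
```python
import numpy as np

NON_NODE, LEAF, NODE = -2.0, -1.0, 0.0

def build_st_channel(tree, height: int, n_splits: int) -> np.ndarray:
    """Sketch of the gold ST-channel construction. `tree` is assumed to be
    a nested (split_index, left, right) structure whose leaves are EDU
    indices; this interface is hypothetical."""
    x_st = np.full((height, n_splits + 2), NON_NODE)

    def fill(node, layer):
        split, left, right = node
        x_st[layer - 1, split + 1] = NODE   # non-leaf node at (i-1, j+1)
        if isinstance(left, int):           # left span is a single EDU
            x_st[layer, split] = LEAF       # leaf at (i, j)
        else:
            fill(left, layer + 1)
        if isinstance(right, int):          # right span is a single EDU
            x_st[layer, split + 2] = LEAF   # assumed symmetric placement
        else:
            fill(right, layer + 1)

    fill(tree, 1)
    return x_st
```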
"For split point ranking, we use $L_s$ to maximize the probability of correct split point selection at each decoding step.", "For N-R prediction, given the selected split point, we use $L_{nr}$ to maximize the probability of correct N-R labeling for the split point.", "Since the convergence speeds of the two loss terms are different, we add two loss weights to balance the model training: $L_{DRS} = \lambda_1 L_s + \lambda_2 L_{nr}$ (3)", "For the second goal, we explore learning from the entire DRS tree for global optimization.", "To that end, we produce an adversarial bot in our parser to estimate the generated DRS tree diagrams, as shown in Figure 4.", "Since the composition and sources of gold and generated tree diagrams are completely different, we use two isomorphic feature extractors to understand the two kinds of images separately.", "For feature extraction, based on such a 2D image-like representation, we perform convolution on every $3 \times (n+2)$ window to dig out the structural details of the entire tree: $\varrho^{(f)}_{win} = F_{\mathrm{relu}}(w^{(f)} \cdot X_{win} + b^{(f)})$ (4)", "Then we perform max-pooling in each non-overlapping $3 \times 1$ window, and the resulting matrices are reshaped as $\varrho \in \mathbb{R}^{1 \times D}$ to serve as the distributed representation of the tree.", "In this work, we do not just need an excellent discriminator that excels at classification; we need the adversarial nets to continuously give feedback to our parsing model even when the generated trees are correctly classified.", "On this basis, we leverage the Least Squares Generative Adversarial Network (LSGAN) (Mao et al., 2017) as our adversarial bot, which has proven to perform more stably and suffer less from vanishing gradients than the original GAN.", "Formally, our adversarial nets consist of two parts:", "(i) a generative net G to capture the data distribution $p_z$ over the training data $X$ and", "(ii) a discriminative net D to estimate the probability that a sample comes from $X$ rather than $p_z$.", "On this basis, given the distributed representations of the gold tree $x$ and fake tree $z$, we formulate the loss functions as follows: $\min_D V(D) = \frac{1}{2} \mathbb{E}_{x \sim p_{data}(x)}[(D(x) - b)^2] + \frac{1}{2} \mathbb{E}_{z \sim p_z(z)}[(D(G(z)) - a)^2]$ (5) and $\min_G V(G) = \frac{1}{2} \mathbb{E}_{z \sim p_z(z)}[(D(G(z)) - c)^2]$ (6).", "Similar to Mao et al. (2017), we set $a = 0$ and $b = c = 1$ to make G generate samples as real as possible.",
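"The corresponding least-squares objectives can be sketched as follows, with a = 0 and b = c = 1 as set above (detaching the fake representation when updating D is standard practice and an assumption here):",
```python
import torch

def lsgan_losses(d_real: torch.Tensor, d_fake: torch.Tensor,
                 a: float = 0.0, b: float = 1.0, c: float = 1.0):
    """Least-squares GAN objectives of Eqs. (5)-(6); d_real and d_fake are
    discriminator scores for gold and generated tree representations."""
    # Eq. (5): D pushes gold trees toward b and generated trees toward a.
    d_loss = 0.5 * ((d_real - b) ** 2).mean() \
           + 0.5 * ((d_fake.detach() - a) ** 2).mean()
    # Eq. (6): G (the parser plus its feature extractor) pushes fakes toward c,
    # so it keeps receiving gradients even for correctly classified trees.
    g_loss = 0.5 * ((d_fake - c) ** 2).mean()
    return d_loss, g_loss
```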
"Technically, the generator G consists of the parsing model and the feature extractor for fake trees, and the discriminator is an MLP (input: feature size $\epsilon$; hidden: $\epsilon/2$; output: 1) without the sigmoid activation function.", "Therefore, when learning G, the parameters of the parsing model and the feature extractor for fake trees are updated.", "Likewise, the parameters of the discriminator and the feature extractor for real trees are learned when tuning D.", "At this point, we have a traditional loss term to train the top-down parser at each splitting step and two adversarial loss terms to estimate the entire DRS tree for global optimization.", "It is worth mentioning that we first optimize $L_{DRS}$ for 7 epochs to warm up the model parameters, and then the adversarial nets join the training process for global optimization of DRS parsing.", "Datasets.", "Following our previous work (Zhang et al., 2020), we utilize both the English RST Discourse Treebank (RST-DT) (Carlson et al., 2001) and the Chinese Connective-driven Discourse TreeBank (CDTB) (Li et al., 2014b) as the benchmark corpora for experimentation.", "Here, we give a brief introduction to the two corpora: The RST-DT corpus contains 385 news articles (347 for training and 38 for testing) from the Wall Street Journal (WSJ).", "Following previous work, we randomly select 34 documents from the training corpus as the development corpus for parameter tuning.", "We also binarize the non-binary subtrees in RST-DT with right-branching (Sagae and Lavie, 2005) for preprocessing.", "The Chinese CDTB corpus is motivated by taking advantage of both the English RST-DT corpus and the PDTB corpus (Prasad et al., 2008).", "The CDTB corpus annotates each paragraph as a Connective-driven Discourse Tree (CDT).", "The corpus consists of 500 newswire articles, which are further segmented into 2336 paragraphs and 10650 EDUs.", "The corpus is divided into three parts, with 425 articles (2002 CDT trees) for training, 25 articles (105 CDT trees) for validation, and 50 articles (229 CDT trees) for testing.", "Metrics.", "Following previous studies, we measure the performance on bare tree structure (S), tree structure labeled with nuclearity (N), and tree structure labeled with rhetorical relation (R).", "Recently, the Full (F) indicator has been used to estimate the tree structure labeled with both nuclearity and relation categories.", "However, since current performances on S, N, and R are imbalanced, the performance on F is largely limited by relation prediction.", "In other words, the Full score may underestimate the performance in span and nuclearity prediction.", "In this work, we combine nuclearity and rhetorical relation tags for joint N-R prediction, aiming to reduce the uncertainty of the Full measure.", "Moreover, since RST-Parseval (Marcu, 2000) overestimates the DRS parsing performance to a certain extent, Morey et al. (2017), Mabona et al. (2019), Zhang et al. (2020), and Koto et al. (2021) adopt the original Parseval to reveal the actual performance level of DRS parsing.", "Following these studies, we also use the original Parseval for evaluation and report the micro-averaged F1 scores by default.", "Hyper-Parameter Setting.", "For word representation, we employed the 300D vectors of GloVe (Pennington et al., 2014) and the 1024D vectors of ELMo (Peters et al., 2018) for RST-DT, and the 300D vectors of Qiu et al. (2018) (Qiu-W2V) for CDTB, and we did not update these vectors during training.",
"The English POS tags were obtained through the Stanford CoreNLP toolkit (Manning et al., 2014), the Chinese tags were borrowed from the Chinese PTB, and all the POS embeddings were optimized during training.", "For model learning, we used the development set to fine-tune the parameters in Table 1, and the number of parameter search trials was around 20.", "Table 1: Fine-tuned hyper-parameters.
Parameter                   | EN   | CN
POS embedding               | 30   | 30
Uni-directional GRU         | 512  | 512
BiGRU                       | 256  | 256
Biaffine-MLP-Split          | 128  | 64
Biaffine-MLP-NR             | 128  | 128
Boundary feature size       | 30   | -
Dropout rate                | 0.2  | 0.33
Warm-up epochs              | 7    | 7
Training epochs             | 20   | 20
Batch size (DTs)            | 5    | 64
Learning rate of D          | 1e-4 | 5e-4
Learning rate of other nets | 1e-3 | 1e-3
$\lambda_1$                 | 0.3  | 0.3
$\lambda_2$                 | 1.0  | 1.0", "All the experiments based on the above-mentioned settings were conducted on a GeForce RTX 2080Ti GPU, and the code will be published at https://github.com/NLP-Discourse-SoochowU/GAN_DP .", "Comparison between different system settings.", "As stated before, we explore possible improvements to the top-down architecture of Zhang et al. (2020).", "Here, we study the effects of these simplification methods based on our simplified architecture.", "For clarity, we remove the adversarial learning process from each system, and the results are presented in Table 2.", "For the RST-DT corpus, the first two rows show that the top-down parser performs worse when dummy split points are used, and the decline is obvious in tree structure parsing.", "Then, we further apply three classifiers to the simplified architecture, and the results (lines 1 and 3) show that the Full score drops by 1.8% for lack of correlation between the three learning goals.", "For the CDTB corpus, due to the differences in languages and annotation strategies, the situation is quite different.", "Specifically, lines 4 and 5 show that the top-down parser performs better on all four indicators when using dummy split points (Zhang et al., 2020).", "Based on the better-performing parser using dummy split points, we further report its performance with three independent classifiers, and the results (line 6) show that the Full score still drops considerably (6.7%), which suggests the necessity of joint N-R prediction.", "Considering the above results, in the following we separately use two sets of model settings for the two languages.", "For English, we build our final model on the simplified architecture without dummy split points.", "For Chinese, we build our final model on the architecture of Zhang et al. (2020).", "For both systems, we only use two classifiers for DRS parsing.", "Comparison on the adversarial bot.", "Here, we perform experiments to explore the effects of the adversarial learning approach, and the experimental results are presented in Table 3.",
"For the RST-DT corpus, the results show that our adversarial model setting can improve the performance on all four indicators, especially in structure and nuclearity prediction.", "Similarly, the results on the CDTB corpus show that our adversarial method still works much better than the unreinforced parser in structure, relation, and Full detection.", "The overall results indicate that the global optimization method we use is clearly effective, although the gains have not yet reached the level of a qualitative change.", "In fact, as a preliminary attempt at global optimization of DRS parsing, this research still has much room for improvement, which deserves further exploration.", "Comparison with previous studies.", "In this part, we compare with seven previous state-of-the-art (SOTA) parsers on text-level DRS parsing.", "Here, we briefly review these studies as follows: Ji and Eisenstein (2014), a shift-reduce parser with an SVM trained on their extracted latent features.", "In this paper, we compare with the updated version of their parser (designated as JE2017-updated) (Morey et al., 2017).", "Feng and Hirst (2014), a two-stage greedy parser with linear-chain CRF models and some hand-engineered features.", "Li et al. (2016), an attention-based hierarchical neural model with hand-crafted features.", "Braud et al. (2016), a hierarchical BiLSTM model that leverages information from various sequence prediction tasks.", "Braud et al. (2017), a transition-based neural model with both cross-lingual information and hand-crafted features.", "Mabona et al. (2019), a generative model with a beam search algorithm for DRS parsing.", "Zhang et al. (2020), a top-down neural architecture tailored for text-level DRS parsing.", "Different from many previous studies, this parser is a pure neural parser without any additional handcrafted features.", "For the RST-DT corpus, the results are presented in the upper part of Table 4.", "From the results, although our previous top-down parser (Zhang et al., 2020) can achieve good results without handcrafted features, the performance is still far from perfect.", "Comparing our GloVe-based top-down parser with previous state-of-the-art parsers, our parser performs better than most previous ones due to its ability to leverage global context and the adversarial learning strategy.", "Furthermore, comparing the final parser (line 9) with previous work, our ELMo-based parser can further improve the performance on all four indicators, and the improvements on structure (4.7%) and nuclearity (3.7%) are significant.", "Obviously, contextualized word representations can greatly improve the parsing performance, especially in such a task with small-scale data corpora.", "For the CDTB corpus, we adopt a stricter metric for performance evaluation (we borrow the strict evaluation method from https://github.com/NLP-Discourse-SoochowU/t2d_discourseparser and report macro-averaged F1 scores), and the overall results are presented in the lower part of Table 4.",
"In comparison with previous work, our parser achieves comparable performance in nuclearity and relation prediction and much better results on the other two indicators, which proves the usefulness of the adversarial nets we use.", "In particular, compared with previous parsers, our parser performs significantly better on F due to the joint prediction of nuclearity and relation categories.", "This suggests the robustness of our simplified parser with only two classifiers.", "Moreover, since the two top-down DRS parsers in the table show similar results on R, we speculate that Chinese rhetorical relation prediction has encountered a bottleneck to some extent, which requires more investment of effort.", "Performances based on the SOTA language models.", "Recently, more and more researchers (Shi et al., 2020; Koto et al., 2021) have proposed to improve DRS parsing performance through powerful language models (LMs) like BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019).", "Following these studies, in this work we perform additional experiments on the XLNet-base models of Yang et al. (2019) and Cui et al. (2020) for the RST-DT and CDTB corpus, respectively.", "For better model integration, we slightly adjust the previously described model architecture (the adjusted model parameters are shown in the Appendix), more specifically, the EDU encoder.", "We first use a pre-trained LM to encode each entire discourse, where each EDU is attached with the [SEP] and [CLS] tokens, and then take the LM outputs corresponding to [CLS] as our EDU representation.", "Moreover, we segment each document according to a maximum length of 768 tokens and encode these text segments one by one to avoid memory overflow.", "For the RST-DT corpus, we report the results of the recent BERT-based top-down parser (Koto et al., 2021) for comparison.", "For the CDTB corpus, we compare with our previously described system based on traditional word vectors, and the overall results are shown in Table 5.",
"From the results, we find that our parsers achieve superior results when using the contextualized XLNet for experimentation, which suggests the great effectiveness of pre-trained LMs in such a task with limited corpus size.", "Moreover, the ablation study on the adversarial learning strategy further demonstrates the usefulness of our proposed method.", "It should be noted that reporting the performance with LMs in this paper does not mean that we advocate using pre-trained LMs or blindly pursuing performance improvements in DRS parsing.", "Sometimes, the rewards generated by large-scale LMs could be quite different from, and much stronger than, those generated by language phenomena, which may hinder the study of the relatively shallow (compared with powerful LMs) yet valuable discourse features.", "With this in mind, it is reasonable to perform ablation studies using simple word representations to explore useful discourse features and to report the performance with powerful LMs for reference.", "Performance Evaluation of Dependency Trees.", "Recently, discourse-level dependency structure has attracted more and more attention.", "Here, we explore whether the proposed global optimization method can improve RST dependency analysis to some extent.", "To achieve this, we first convert the predicted DRS trees into dependency trees as Kobayashi et al. (2020) did, and then evaluate the converted dependencies unlabeled (UAS) and labeled (LAS) with rhetorical relations; the results are shown in Table 6.", "Firstly, lines 1 to 4 show that our parser can greatly outperform previous systems in terms of both the UAS and LAS indicators.", "Secondly, the last two rows show that the global optimization of constituency trees can simultaneously improve the dependency performance, which further proves the usefulness of our proposed adversarial method.", "Remarkable Progress in DRS Parsing.", "Compared with Chinese DRS parsing, where each paragraph is annotated as a DT, the English parsing with 313 DTs for training is much more challenging.", "Nevertheless, the results in Table 4 and Table 5 show that our parser can largely outperform previous state-of-the-art parsers on Full.", "Table 7: Results on each nuclearity category.
Systems      | NN/23% | NS/61% | SN/16%
Ours (GloVe) | 43.3   | 62.9   | 55.7
Ours (ELMo)  | 47.8   | 64.1   | 58.5
Ours (XLNet) | 56.7   | 67.4   | 69.6", "(i) For nuclearity prediction, we display the results of our parsers on each nuclearity category to explore where the improvement comes from, as shown in Table 7.",
"From the results, it is obvious that the LM we use plays a major role in nuclearity prediction, and the proposed adversarial method can further improve the performance to a certain extent.", "(ii) For relation prediction, the classification problem with 18 coarse-grained relation tags (RST-DT) is a real challenge.", "From the results in Table 4, we can find that progress in relation prediction has been quite limited in the recent decade due to the lack of data.", "Most previous state-of-the-art parsers employ a variety of hand-engineered features for good performance.", "Promisingly, the experimental results in Table 5 show that powerful LMs can free data-driven models from the corpus size limitation, and thus our XLNet-based parser strongly outperforms JE2017-updated (Morey et al., 2017) by 18.8% on R.", "The results of our parsers on each rhetorical relation category are shown in the Appendix.", "Discussion on Adversarial Learning.", "Similar to previous GAN work, improving the quality of the generated tree images is a real challenge, and the instability of the adversarial learning process is another intractable issue.", "In order for our model to continuously modify the generated images even when they are correctly classified, we leverage a least squares loss in our system for model learning.", "To avoid over-learning of the discriminator, we tune it with a moderate learning rate and parameter scale.", "The convergence of our model under different learning rates is presented in Figure 5.", "From the results, as the learning rate of the discriminator increases, the fluctuation of the loss value becomes larger, and it is hard to reduce the generator loss.", "Among these four cases, the first group seems to be more stable and in line with our expectations.", "Therefore, we set the learning rate to 1e-4 in our systems for experimentation.", "Notably, we also tried the sigmoid cross-entropy loss in this research, which performs much worse than the LSGAN loss we use.", "For reference, we also present the model convergence under different loss functions in the Appendix.", "In this research, we explored a global optimization method based on recent top-down frameworks.", "Particularly, we proposed a novel strategy to transform both gold standard and predicted DRS trees into tree diagrams with two color channels.", "On this basis, we produced an LSGAN-based adversarial bot between gold and fake trees for global optimization.", "Experimental results on two popular corpora showed that our proposed adversarial approach is effective in DRS parsing and has established new state-of-the-art results for both corpora.", "Here, the first author (Longyin Zhang) would like to thank his fiancee, Dr. Xin Tan, for her valuable discussion on this research.", "This work was supported by the National Key R&D Program of China under Grant No. 2020AAA0108600, Projects 61876118 and 61976146 under the National Natural Science Foundation of China, and the Priority Academic Program Development of Jiangsu Higher Education Institutions." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "method", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "method", "method", "objective", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "other", "other" ]
[ "Lexically constrained neural machine translation (NMT), which controls the generation of NMT models with pre-specified constraints, is important in many practical scenarios.", "Due to the representation gap between discrete constraints and continuous vectors in NMT models, most existing works choose to construct synthetic data or modify the decoding algorithm to impose lexical constraints, treating the NMT model as a black box.", "In this work, we propose to open this black box by directly integrating the constraints into NMT models.", "Specifically, we vectorize source and target constraints into continuous keys and values, which can be utilized by the attention modules of NMT models.", "The proposed integration method is based on the assumption that the correspondence between keys and values in attention modules is naturally suitable for modeling constraint pairs.", "Experimental results show that our method consistently outperforms several representative baselines on four language pairs, demonstrating the superiority of integrating vectorized lexical constraints.", "1 1 Introduction Controlling the lexical choice of the translation is important in a wide range of settings, such as interactive machine translation (Koehn, 2009), entity translation (Li et al., 2018), and translation in safety-critical domains (Wang et al., 2020).", "However, different from the case of statistical machine translation (Koehn et al., 2007), it is non-trivial to directly integrate discrete lexical constraints into neural machine translation (NMT) models (Bah-danau et al., 2015; Vaswani et al., 2017), whose hidden states are all continuous vectors that are difficult for humans to understand.", "In accordance with this problem, one branch of studies directs its attention to designing adCorrespondence to: Yang Liu.", "vanced decoding algorithms (Hokamp and Liu, 2017; Hasler et al., 2018; Post and Vilar, 2018) to impose hard constraints and leave NMT models unchanged.", "For instance, Hu et al. 
"Although this kind of method can guarantee the presence of target constraints in the output, it is found to potentially result in poor translation quality (Chen et al., 2021b; Zhang et al., 2021), such as repeated translation or source phrase omission.", "Another branch of works proposes to learn constraint-aware NMT models through data augmentation.", "They construct synthetic data by replacing source constraints with their target-language correspondents (Song et al., 2019) or appending target constraints right after the corresponding source phrases (Dinu et al., 2019).", "During inference, the input sentence is edited in advance and then provided to the NMT model.", "The major drawback of data augmentation based methods is that they may suffer from a low success rate of generating target constraints in some cases, indicating that only adjusting the training data is sub-optimal for lexically constrained translation (Chen et al., 2021b).", "To make NMT models better learn from and cope with lexical constraints, we propose to leverage attention modules (Bahdanau et al., 2015) to explicitly integrate vectorized lexical constraints.", "As illustrated in Figure 1, we use vectorized source constraints as additional keys and vectorized target constraints as additional values.", "Intuitively, the additional keys are used to estimate the relevance between the current query and the source phrases, while the additional values are used to integrate the information of the target phrases.", "In this way, each revised attention module is aware of the guidance on which source phrase to translate into which target phrase.", "Experiments show that our method can significantly improve the ability of NMT models to translate with constraints, indicating that the correspondence between attention keys and values is suitable for modeling constraint pairs.", "Inspired by recent progress in controlled text generation (Dathathri et al., 2020; Pascual et al., 2021), we also introduce a plug-in to the output layer that can further improve the success rate of generating constrained tokens.", "We conduct experiments on four language pairs and find that our model can consistently outperform several representative baselines.", "Training The goal of machine translation is to translate a source-language sentence $x = x_1 \ldots x_{|x|}$ into a target-language sentence $y = y_1 \ldots y_{|y|}$.",
"We use $P(y \mid x; \theta)$ to denote an NMT model (Vaswani et al., 2017) parameterized by $\theta$.", "Modern NMT models are usually trained by maximum likelihood estimation (Bahdanau et al., 2015; Vaswani et al., 2017), where the log-likelihood is defined as $\log P(y \mid x; \theta) = \sum_{t=1}^{|y|} \log P(y_t \mid y_{<t}, x; \theta)$ (1), in which $y_{<t}$ is a partial translation.", "Inference The inference of NMT models can be divided into two sub-processes: probability estimation, where the model estimates the token-level probability distribution for each partial hypothesis within the beam, and candidate selection, where the decoding algorithm selects some candidates based on the probability estimated by the NMT model.", "These two sub-processes are performed alternately until reaching the maximum length or generating the end-of-sentence token.", "This section explains how we integrate lexical constraints into NMT models.", "Section 3.1 illustrates the way we encode discrete constraints into continuous vectors, Section 3.2 details how we integrate the vectorized constraints into NMT models, and Section 3.3 describes our training strategy.", "Let $s = s^{(1)}, \ldots, s^{(N)}$ be the source constraints and $t = t^{(1)}, \ldots, t^{(N)}$ be the target constraints.", "Given a constraint pair $\langle s^{(n)}, t^{(n)} \rangle$, lexically constrained translation requires that the system must translate the source phrase $s^{(n)}$ into the target phrase $t^{(n)}$.", "Since the inner states of NMT models are all continuous vectors rather than discrete tokens, we need to vectorize the constraints before integrating them into NMT models.", "For the $n$-th constraint pair $\langle s^{(n)}, t^{(n)} \rangle$, let $|s^{(n)}|$ and $|t^{(n)}|$ be the lengths of $s^{(n)}$ and $t^{(n)}$, respectively.", "We use $S^{(n)}_k \in \mathbb{R}^{d \times 1}$ to denote the vector representation of the $k$-th token in $s^{(n)}$, which is the sum of the word embedding and positional embedding (Vaswani et al., 2017).", "Therefore, the matrix representation of $s^{(n)}$ is given by: $S^{(n)} = [S^{(n)}_1; \ldots; S^{(n)}_{|s^{(n)}|}]$ (2), where $S^{(n)} \in \mathbb{R}^{d \times |s^{(n)}|}$ is the concatenation of the vector representations of all tokens in $s^{(n)}$.", "Similarly, the matrix representation of the target constraint $t^{(n)}$ is $T^{(n)} \in \mathbb{R}^{d \times |t^{(n)}|}$.",
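"A small sketch of this vectorization step (Eq. (2)); the embedding tables are assumed to be the model's own word and positional embeddings, and the target-side matrix $T^{(n)}$ can be built with the same function:",
```python
import torch
import torch.nn as nn

def vectorize_constraint(tokens: torch.LongTensor,
                         word_emb: nn.Embedding,
                         pos_emb: nn.Embedding) -> torch.Tensor:
    """Sketch of Eq. (2): each constraint token is the sum of its word
    embedding and a positional embedding computed independently of the
    source and target sentences; the result is stacked into a d x |s^(n)|
    matrix."""
    positions = torch.arange(tokens.size(0))
    vectors = word_emb(tokens) + pos_emb(positions)  # (|s^(n)|, d)
    return vectors.transpose(0, 1)                   # (d, |s^(n)|), i.e., S^(n)
```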
"Note that the positional embedding for each constraint is calculated independently, and it is also independent of the positional embeddings of the source sentence $x$ and the target sentence $y$.", "We adopt Transformer (Vaswani et al., 2017) as our NMT model, which is nowadays one of the most popular and effective NMT models (Liu et al., 2020).", "Typically, a Transformer consists of an encoder, a decoder, and an output layer, of which the encoder and decoder map discrete tokens into vectorized representations and the output layer converts such representations into token-level probabilities.", "We propose to utilize the attention modules to integrate the constraints into the encoder and decoder, and to use a plug-in module to integrate constraints into the output layer.", "(Figure 2: attention over the encoder, decoder, and constraints, with queries Q, keys K including constraint-related keys, and values V including constraint-related values.)", "We change the formal representation of our model from $P(y \mid x; \theta)$ to $P(y \mid x, s, t; \theta)$ to indicate that the model explicitly considers lexical constraints when estimating probability.", "Constraint-Related Keys and Values We propose to map source and target constraints into additional keys and values, which are called constraint-related keys and values in order to distinguish them from the original keys and values in vanilla attention modules.", "In practice, source and target constraints may have different lengths, and they are usually not monotonically aligned (Du et al., 2021), making it challenging to directly convert the constraints into keys and values.", "To fix this problem, we adopt a multi-head attention layer (Vaswani et al., 2017) to align the bilingual constraints.", "The constraint-related keys and values for the $n$-th constraint pair are given by $K^{(n)}_c = S^{(n)}$ and $V^{(n)}_c = \mathrm{attn}(S^{(n)}, T^{(n)}, T^{(n)})$ (3), where $K^{(n)}_c \in \mathbb{R}^{d \times |s^{(n)}|}$ and $V^{(n)}_c \in \mathbb{R}^{d \times |s^{(n)}|}$.", "$\mathrm{attn}(Q, K, V)$ denotes the multi-head attention function.", "Note that the resulting $K^{(n)}_c$ and $V^{(n)}_c$ are of the same shape.", "$V^{(n)}_c$ can be seen as a redistributed version of the representation of the target constraints.", "The constraint-related keys and values of each constraint pair are calculated separately and then concatenated together: $K_c = [K^{(1)}_c; \ldots; K^{(N)}_c]$ and $V_c = [V^{(1)}_c; \ldots; V^{(N)}_c]$ (4), where $K_c \in \mathbb{R}^{d \times |s|}$ and $V_c \in \mathbb{R}^{d \times |s|}$.", "$|s|$ is the total length of all the $N$ source constraints.",
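"Eqs. (3)-(4) can be sketched as follows; treating each constraint pair separately with batch size 1 is a simplification for illustration:",
```python
import torch
import torch.nn as nn

class ConstraintKV(nn.Module):
    """Sketch of Eqs. (3)-(4): K_c^(n) = S^(n), and V_c^(n) is T^(n)
    redistributed over the source-constraint length via multi-head
    attention; pairs are processed separately, then concatenated."""
    def __init__(self, d: int, heads: int = 8):
        super().__init__()
        self.align = nn.MultiheadAttention(d, heads, batch_first=True)

    def forward(self, S_list, T_list):
        keys, values = [], []
        for S, T in zip(S_list, T_list):         # S: (d, |s|), T: (d, |t|)
            q = S.transpose(0, 1).unsqueeze(0)   # (1, |s|, d) queries
            kv = T.transpose(0, 1).unsqueeze(0)  # (1, |t|, d) keys/values
            v, _ = self.align(q, kv, kv)         # Eq. (3): (1, |s|, d)
            keys.append(S)
            values.append(v.squeeze(0).transpose(0, 1))  # (d, |s|)
        # Eq. (4): concatenate along the length dimension
        return torch.cat(keys, dim=1), torch.cat(values, dim=1)
```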
"Integration into the Encoder The encoder of the Transformer is a stack of $I$ identical layers, each of which contains a self-attention module to learn context-aware representations.", "For the $i$-th layer, the self-attention module can be represented as $\mathrm{attn}(H^{(i-1)}_{enc}, H^{(i-1)}_{enc}, H^{(i-1)}_{enc})$ (5), where $H^{(i-1)}_{enc} \in \mathbb{R}^{d \times |x|}$ is the output of the $(i-1)$-th layer, and $H^{(0)}_{enc}$ is initialized as the sum of the word embedding and positional embedding (Vaswani et al., 2017).", "For different layers, $H^{(i-1)}_{enc}$ may lie in various manifolds, containing different levels of information (Voita et al., 2019).", "Therefore, we should adapt the constraint-related keys and values for each layer before the integration.", "We use a two-layer adaptation network to do this: $K^{(i)}_{c4enc} = [\mathrm{adapt}(K_c); H^{(i-1)}_{enc}]$ and $V^{(i)}_{c4enc} = [\mathrm{adapt}(V_c); H^{(i-1)}_{enc}]$ (6), where $\mathrm{adapt}(\cdot)$ denotes the adaptation network, which consists of two linear transformations of shape $d \times d$ with a ReLU activation in between.", "The adaptation networks across all layers are independent of each other.", "$K^{(i)}_{c4enc} \in \mathbb{R}^{d \times (|s| + |x|)}$ and $V^{(i)}_{c4enc} \in \mathbb{R}^{d \times (|s| + |x|)}$ are the constraint-aware keys and values for the $i$-th encoder layer, respectively.", "The vanilla self-attention module illustrated in Eq. (5) is revised into the following form: $\mathrm{attn}(H^{(i-1)}_{enc}, K^{(i)}_{c4enc}, V^{(i)}_{c4enc})$ (7).", "Integration into the Decoder The integration into the decoder is similar to that into the encoder; the major difference is that we use the cross-attention module to model constraints for the decoder.", "The decoder of the Transformer is a stack of $J$ identical layers, each of which is composed of a self-attention, a cross-attention, and a feed-forward module.", "We integrate vectorized constraints into the cross-attention module for the decoder.", "Formally, the vanilla cross-attention is given by $\mathrm{attn}(S^{(j)}_{dec}, H^{(I)}_{enc}, H^{(I)}_{enc})$ (8), where $S^{(j)}_{dec} \in \mathbb{R}^{d \times |y|}$ is the output of the self-attention module in the $j$-th decoder layer, and $H^{(I)}_{enc} \in \mathbb{R}^{d \times |x|}$ is the output of the last encoder layer.", "We adapt the constraint-related keys and values to match the manifold in the $j$-th decoder layer: $K^{(j)}_{c4dec} = [\mathrm{adapt}(K_c); H^{(I)}_{enc}]$ and $V^{(j)}_{c4dec} = [\mathrm{adapt}(V_c); H^{(I)}_{enc}]$ (9).", "Then we revise the vanilla cross-attention (Eq. (8)) into the following form: $\mathrm{attn}(S^{(j)}_{dec}, K^{(j)}_{c4dec}, V^{(j)}_{c4dec})$ (10).", "Figure 2c plots an example of the integration into the decoder cross-attention.", "Integration into the Output Layer In the vanilla Transformer, an output layer is employed to convert the output of the last decoder layer into token-level probabilities.", "Let $h_t \in \mathbb{R}^{d \times 1}$ be the decoder output at the $t$-th time step; the output probability of the Transformer model is defined as $P_{model}(y \mid y_{<t}, x, s, t; \theta) = \mathrm{softmax}(h_t^{\top} W)$ (11), where $W \in \mathbb{R}^{d \times |\mathcal{V}|}$ is the output embedding matrix and $|\mathcal{V}|$ is the vocabulary size.", "Inspired by the plug-and-play method (Pascual et al., 2021) in the field of controlled text generation (Dathathri et al., 2020; Pascual et al., 2021), we introduce an additional probability distribution over the vocabulary to better generate constrained tokens: $P_{plug}(y \mid y_{<t}, x, s, t; \theta) = 0$ if $y \notin t$, and $\max(0, \cos(\frac{w_y}{|w_y|}, \frac{h_t}{|h_t|}))$ if $y \in t$ (12), where $w_y \in \mathbb{R}^{d \times 1}$ is the word embedding of token $y$ and $t$ is the sequence of all the target-side constrained tokens.",
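"The layer-wise integration of Eqs. (6)-(7) can be sketched as below; whether the keys and values share one adaptation network per layer is not fully specified above, so this sketch uses separate ones, and the decoder case of Eqs. (9)-(10) is analogous with $H^{(I)}_{enc}$ in place of $H^{(i-1)}_{enc}$:",
```python
import torch
import torch.nn as nn

class ConstraintAwareSelfAttention(nn.Module):
    """Sketch of Eqs. (6)-(7): per-layer adaptation of the constraint
    keys/values, prepended to the ordinary keys and values of the
    encoder self-attention."""
    def __init__(self, d: int, heads: int = 8):
        super().__init__()
        self.adapt_k = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.adapt_v = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)

    def forward(self, H: torch.Tensor, K_c: torch.Tensor, V_c: torch.Tensor):
        # H: (batch, |x|, d); K_c, V_c: (batch, |s|, d)
        K = torch.cat([self.adapt_k(K_c), H], dim=1)  # Eq. (6), keys
        V = torch.cat([self.adapt_v(V_c), H], dim=1)  # Eq. (6), values
        out, _ = self.attn(H, K, V)                   # Eq. (7)
        return out
```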
"We also use a gating sub-layer to control the strength of the additional probability: $g(y, h_t) = \mathrm{sigmoid}(\tanh([w_y^{\top} W_1; h_t^{\top} W_2]) W_3)$ (13), where $W_1 \in \mathbb{R}^{d \times d}$, $W_2 \in \mathbb{R}^{d \times d}$, and $W_3 \in \mathbb{R}^{2d \times 1}$ are three trainable linear transformations.", "The final output probability is given by $P(y \mid y_{<t}, x, s, t; \theta) = (1 - g(y, h_t)) P_{model}(y \mid y_{<t}, x, s, t; \theta) + g(y, h_t) P_{plug}(y \mid y_{<t}, x, s, t; \theta)$ (14).", "3.3 Training and Inference", "Training The proposed constraint-aware NMT model should not only generate pre-specified constraints but also maintain or improve the translation quality compared with vanilla NMT models.", "We thus propose to distinguish between constraint tokens and constraint-unrelated tokens during training.", "Formally, the training objective is given by $\mathcal{L}(y \mid x, s, t; \theta) = \alpha \sum_{y_t \in t} \log P(y_t \mid y_{<t}, x, s, t; \theta) + \beta \sum_{y_t \in y \setminus t} \log P(y_t \mid y_{<t}, x, s, t; \theta)$ (15), where $\alpha$ and $\beta$ are hyperparameters to balance the learning of constraint generation and translation.", "Here, $\theta_v$ is the set of original vanilla model parameters and $\theta_c$ is the set of newly-introduced parameters that are used to vectorize and integrate lexical constraints ($\theta_c$ includes the parameters of the attention presented in Eq. (3), the adaptation networks described in Eqs. (6) and (9), and the gating sub-layer illustrated in Eq. (13)).", "Since $\theta_c$ is significantly smaller than $\theta_v$, it requires far fewer training iterations.", "Therefore, we adopt the strategy of two-stage training (Tu et al., 2018; Zhang et al., 2018) for model optimization.", "Specifically, we optimize $\theta_v$ using the standard NMT training objective (Bahdanau et al., 2015; Vaswani et al., 2017) at the first stage and then learn the whole model at the second stage.", "The second stage is significantly shorter than the first stage; we give more details in Section 4.1.", "Inference As discussed in Section 2, the inference process is composed of two sub-processes: probability estimation and candidate selection.", "In this work, we aim to improve the probability estimation sub-process, and our method is orthogonal to constrained decoding algorithms (Hokamp and Liu, 2017; Post and Vilar, 2018; Hu et al., 2019), which instead focus on candidate selection.", "Therefore, we can employ not only beam search but also constrained decoding algorithms at inference time.", "We use VDBA (Hu et al., 2019) as the default constrained decoding algorithm, which supports batched inputs and is significantly faster than most other counterparts (Hokamp and Liu, 2017; Post and Vilar, 2018; Hasler et al., 2018).", "Training Data In this work, we conduct experiments on Chinese-English (Zh-En) and German-English (De-En) translation tasks.", "For Zh-En, the training set contains 1.25M sentence pairs from LDC.", "For De-En, the training set is from the WMT 2014 German-English translation task, which consists of 4.47M sentence pairs.", "We apply BPE (Sennrich et al., 2016b) with 32K joint merge operations for both Zh-En and De-En.",
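"Before turning to the data, here is a sketch of the output-layer plug-in and gate of Eqs. (12)-(14); computing the gate for all vocabulary entries at once is an implementation convenience assumed here, not something prescribed above:",
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OutputPlugIn(nn.Module):
    """Sketch of Eqs. (12)-(14): a clipped-cosine distribution over the
    constrained tokens, mixed with the model distribution through an
    input-dependent gate."""
    def __init__(self, d: int):
        super().__init__()
        self.W1 = nn.Linear(d, d, bias=False)
        self.W2 = nn.Linear(d, d, bias=False)
        self.W3 = nn.Linear(2 * d, 1, bias=False)

    def forward(self, h_t, emb, constraint_ids, p_model):
        # h_t: (d,); emb: (|V|, d) word embeddings w_y; p_model: (|V|,)
        p_plug = torch.zeros_like(p_model)  # Eq. (12): 0 for y not in t
        cos = F.cosine_similarity(emb[constraint_ids], h_t.unsqueeze(0), dim=-1)
        p_plug[constraint_ids] = torch.clamp(cos, min=0.0)
        # Eq. (13): gate, evaluated here for every vocabulary entry
        feats = torch.tanh(torch.cat([self.W1(emb),
                                      self.W2(h_t).expand_as(emb)], dim=-1))
        g = torch.sigmoid(self.W3(feats)).squeeze(-1)   # (|V|,)
        return (1.0 - g) * p_model + g * p_plug         # Eq. (14)
```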
"Evaluation Data Following Chen et al. (2021b), we evaluate our approach on test sets with human-annotated alignments, which are widely used in related studies (Chen et al., 2020b, 2021a).", "We find that the alignment test sets have significant overlaps with the corresponding training sets, which is not explicitly stated in previous works.", "In this work, we remove the training examples that are covered by the alignment test sets.", "For Zh-En, we use the alignment datasets from Liu et al. (2005) (http://nlp.csai.tsinghua.edu.cn/~ly/systems/TsinghuaAligner/TsinghuaAligner.html), in which the validation and test sets both contain 450 sentence pairs.", "For De-En, we use the alignment dataset from Zenkel et al. (2020) (https://github.com/lilt/alignment-scripts) as the test set, which consists of 508 sentence pairs.", "Since there is no human-annotated alignment validation set for De-En, we use fast-align (https://github.com/clab/fast_align) to annotate newstest 2013 as the validation set for De-En.", "Lexical Constraints In real-world applications, lexical constraints are usually provided by human translators.", "We follow Chen et al. (2021b) to simulate the practical scenario by sampling constraints from the phrase pairs that are extracted from parallel data using alignments.", "The script for phrase pair extraction is publicly available (https://github.com/ghchen18/cdalign/blob/main/scripts/extract_phrase.py).", "For the validation and test sets of Zh-En and the test set of De-En, we use human-annotated alignments to extract phrase pairs.", "For the training corpora in both Zh-En and De-En, we use fast-align to first learn an alignment model and then use the model to automatically annotate the alignments.", "The validation set of De-En is also annotated by the alignment model learned on the corresponding training corpus.", "We use the same strategy as Chen et al. (2021b) to sample constraints from the extracted phrase pairs.", "More concretely, the number of constraints in each sentence is up to 3.", "The length of each constrained phrase is uniformly sampled between 1 and 3.", "For each sentence pair, all the constraint pairs are shuffled and then supplied to the model in an unordered manner.", "Model Configuration We use the base setting (Vaswani et al., 2017) for our model.", "Specifically, the hidden size $d$ is 512 and the depths of both the encoder and the decoder are 6.", "Each multi-head attention module has 8 individual attention heads.", "Since our method introduces additional parameters, we use a larger model with an 8-layer encoder and an 8-layer decoder to assimilate the parameter count for the baselines.",
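"The sampling protocol above can be sketched as follows; the distribution over the number of constraints and the candidate indexing are assumptions for illustration:",
```python
import random

def sample_constraints(phrase_pairs, max_constraints=3, max_len=3):
    """Sketch of the constraint sampling described above: up to 3
    constraint pairs per sentence, phrase length uniform in {1, 2, 3},
    and the chosen pairs shuffled so they reach the model in an
    unordered manner. `phrase_pairs` maps phrase lengths to candidate
    (source, target) pairs; this indexing is hypothetical."""
    n = random.randint(0, max_constraints)  # exact distribution unspecified
    chosen = []
    for _ in range(n):
        length = random.randint(1, max_len)
        candidates = phrase_pairs.get(length, [])
        if candidates:
            chosen.append(random.choice(candidates))
    random.shuffle(chosen)
    return chosen
```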
(2017).", "All models are trained on 4 NVIDIA V100 GPUs and evaluated on 1 NVIDIA V100 GPU.", "During training, each mini batch contains roughly 32K tokens in total across all GPUs.", "We set the values of and based on the results on the validation set.", "Specifically, for models using VDBA, we set = = 0 .", "5 , while for models using beam search, we set = 0 .", "8 and = 0 .", "2 .", "The beam size is set to 4 during inference.", "Baselines We compare our approach with three representative baselines: VDBA (Hu et al., 2019): dynamically devoting part of the beam for constraint-related hypotheses at inference time; Replace (Song et al., 2019): directly replacing source constraints in the training data with their corresponding target constraints.", "The model is also improved with pointer network; CDAlign (Chen et al., 2021b): explicitly using an alignment model to decide the position to insert target constraints during inference.", "Evaluation Metrics We evaluate the involved methods using the following two metrics: BLEU : we use sacreBLEU 8 (Post, 2018) to report the BLEU score; Copying Success Rate (CSR) : We follow Chen et al. (2021b) to use the percentage of constraints that are successfully generated in the translation as the CSR, which is calculated at word level after removing the BPE separator.", "We use compare-mt (Neubig et al., 2019) for significance testing, with bootstrap = 1000 and prob _ thresh = 0 .", "05 .", "Table 1 shows the results of lexically constrained translation on test sets of all four translation tasks.", "All the investigated methods can effectively improve the CSR over the vanilla Transformer.", "The CSR of VDBA on Zh En is not 100.0% for the reason that some target constraints contain out-of-vocabulary tokens.", "Replace (Song et al., 2019) achieves better BLEU scores on three translation directions (i.e., Zh En and En De) than VDBA, but its CSR is much lower.", "CDAlign (Chen et al., 2021b) also performs better than Replace on average regarding CSR.", "Our method consistently outperforms all the three baselines across the four translation directions in terms of BLEU, demonstrating the necessity of integrating vectorized constraints into NMT models.", "Decoding with VDBA, we also achieve the highest CSR.", "To disentangle the effect of integrating vectorized constraints and 8 Signature for Zh En, De En, and En De: nrefs:1 | case:mixed | eff:no | tok:13a | smooth:exp | version:2.0.0.", "Signature for En Zh: nrefs:1 | case:mixed | eff:no | tok:zh | smooth:exp | version:2.0.0.", "VDBA, we also report the result of our model using beam search in Table", "2. Decoding with beam search, our model can also achieve a better BLEU score than the baselines and the CSR is higher than both Replace and CDAlign on average.", "We investigate the effect of different components through an ablation study, the results are shown in Table", "3. We find that only integrating lexical constraints into attention can significantly improve the CSR over the vanilla model (91.5% vs. 
25.5%), which is consistent with our motivation that the correspondence between keys and values is naturally suitable for modeling the relation between source and target constraints.", "Plugging target constraints into the output layer can further improve the performance, but the output plug-in itself can only generate 61.9% of constraints.", "When decoding with VDBA, combining both the two types of integration achieves the best BLEU score, indicating that every component is important for the model to translate with constraints.", "Task Description and Data Preparation An interesting application of lexically constrained machine translation is code-switched translation, of which the output contains terms across different", "(a) BLEU score on code-switched test sets.", "languages.", "Figure 1 shows an example of code-switched translation, where the output Chinese sentence should include the English token \"Beatles\".", "Code-switched machine translation is important in many scenarios, such as entity translation (Li et al., 2018) and the translation of sentences containing product prices or web URLs (Chen et al., 2021b).", "In this work, we evaluate the performance of several approaches on code-switched machine translation.", "The parallel data and extracted constraint pairs for each language pair are the same as those used in the lexically constrained translation task.", "To construct the training and evaluation data for code-switched translation, we randomly replace 50% of the target constraints with their corresponding source constraints.", "The target sentence is also switched if it contains switched target constraints.", "Results Table 4 gives the results of the code-switched translation task.", "The CSR of Replace (Song et al., 2019) is lower than 50% across all the four translation directions, indicating that simply replacing the training data can not handle the code-switched translation.", "A potential reason is that it is difficult for the NMT model to decide whether to translate or copy some source phrases in the input sentence.", "Surprisingly, VDBA, CDAlign, 7069 Method Batch Size (# Sent.) 1 128 Vanilla 1.0 43.2 VDBA 0.5 2.1 Replace 0.9 40.5 CDAlign 0.7 n/a Ours 0.5 2.3 w/o VDBA 0.9 39.2 Table 5: Inference speed with different batch sizes.", "and our method all perform well in this scenario, and our method outperforms the two baselines.", "These results suggest the capability of our method to cope with flexible types of lexical constraints.", "We report the inference speed of each involved approach in Table 5.", "The speed of Replace is close to that of the vanilla Transformer, but its CSR is much lower than other methods.", "Since the open-sourced implementation of CDAlign 9 does not support batched decoding, we compare our method with CDAlign with batch _ size = 1 .", "The speed of our method using beam search is faster than that of CDAlign (0.9 vs. 0.7 ).", "When provided with batched inputs, our method can slightly speed up VDBA (2.3 vs. 2.1 ).", "A potential reason is that the probability estimated by our model is more closely related to the correctness of the candidates, making target constraints easier to find.", "Wang et al. 
"We follow Wang et al. (2020) to investigate the gap between the probability and the correctness of model outputs, which is measured by the inference expected calibration error (ECE).", "As shown in Table 6, the inference ECE of our method is much lower than that of the vanilla model, indicating that the probabilities produced by our model are more accurate than those of the vanilla model.", "To better understand the calibration of our model and the baseline model, we also estimate the average probability of all predicted tokens and of the constrained tokens.", "The results show that our model assigns higher probabilities to constrained tokens, which are already known to be correct.", "To address the concern that the proposed model may only memorize the constraints seen in the training set, we calculate the overlap ratio of constraints between the training and test sets.", "As shown in Table 7, we find that only 35.6% of the test constraints are seen in the training data, while the CSR of our model decoding with beam search is 94.4%.", "These results indicate that our method extrapolates well to constraints unseen during training.", "Table 8 shows some example translations from the different methods (one example: with the constraint pairs Zielsetzung → objectives and Fiorella → Fiorella, for the source sentence \"Mit der Zielsetzung des Berichtes von Fiorella Ghilardotti allerdings sind wir einverstanden.\").", "We find that Replace tends to omit some constraints.", "Although VDBA and CDAlign can successfully generate the constrained tokens, the translation quality of these two methods is not satisfying.", "Our result not only contains the constrained tokens but also maintains the translation quality of the unconstrained model, confirming the necessity of integrating vectorized constraints into NMT models.", "One line of approaches to lexically constrained NMT focuses on designing advanced decoding algorithms (Hasler et al., 2018).", "Hokamp and Liu (2017) propose grid beam search (GBS), which enforces target constraints to appear in the output by enumerating the constraints at each decoding step.", "The beam size required by GBS grows with the number of constraints.", "Post and Vilar (2018) propose dynamic beam allocation (DBA) to fix the problem of the varying beam size in GBS; DBA is then extended by Hu et al. (2019) into VDBA, which supports batched decoding.", "There are also other constrained decoding algorithms that leverage word alignments to impose constraints (Song et al., 2020; Chen et al., 2021b).", "Although these alignment-based decoding methods are faster than VDBA, they may be negatively affected by noisy alignments, resulting in low CSR.", "Recently, Susanto et al. (2020) adopt the Levenshtein Transformer (Gu et al., 2019) to insert target constraints in a non-autoregressive manner, for which the constraints must be provided in the same order as in the reference.", "Another branch of studies proposes to edit the training data to induce constraints (Sennrich et al., 2016a).", "Song et al. (2019) directly replace source constraints with their target translations, and Dinu et al. (2019) insert target constraints into the source sentence without removing the source constraints.", "Similarly, Chen et al. (2020a) propose to append target constraints after the source sentence.", "In this work, we propose to integrate vectorized lexical constraints into NMT models.", "Our work is orthogonal to both constrained decoding and constraint-oriented data augmentation.", "In a similar work to ours, Li et al.
(2020) propose to use continuous memory to store only the target constraint, which is then integrated into NMT models through the decoder self-attention.", "However, Li et al. (2020) did not exploit the correspondence between keys and values to model both source and target constraints.", "Recent years have witnessed rapid progress in controlled text generation.", "Dathathri et al. (2020) propose to use the gradients of a discriminator to control a pre-trained language model to generate towards a specific topic.", "Liu et al. (2021) propose a decoding-time method that employs experts to control the generation of pre-trained language models.", "We borrow the idea presented in Pascual et al. (2021) to insert a plug-in into the output layer.", "The difference between our plug-in network and Pascual et al. (2021) is that we use an input-dependent gate to control the effect of the plugged probability.", "In this work, we propose to vectorize and integrate lexical constraints into NMT models.", "Our basic idea is to use the correspondence between keys and values in attention modules to model constraint pairs.", "Experiments show that our approach can outperform several representative baselines across four different translation directions.", "In the future, we plan to vectorize other attributes, such as the topic, the style, and the sentiment, to better control the generation of NMT models.", "This work is supported by the National Key R&D Program of China (No. 2018YFB1005103), the National Natural Science Foundation of China (No. 61925601, No. 62006138), and the Tencent AI Lab Rhino-Bird Focused Research Program (No. JR202031).", "We sincerely thank Guanhua Chen and Chi Chen for their constructive advice on technical details, and all the reviewers for their valuable and insightful comments." ]
[ "abstain", "abstain", "objective", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "result", "result", "abstain", "method", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "method", "method", "method", "result", "abstain", "result", "method", "result", "abstain", "result", "result", "abstain", "result", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "other", "other", "other", "other", "method", "method", "objective", "method", "result", "method", "other", "other" ]
[ "When translating natural language questions into SQL queries to answer questions from a database, contemporary semantic parsing models struggle to generalize to unseen database schemas.", "The generalization challenge lies in", "(a) encoding the database relations in an accessible way for the semantic parser, and", "(b) modeling alignment between database columns and their mentions in a given query.", "We present a unified framework, based on the relation-aware self-attention mechanism, to address schema encoding, schema linking, and feature representation within a text-to-SQL encoder.", "On the challenging Spider dataset this framework boosts the exact match accuracy to 57.2%, surpassing its best counterparts by 8.7% absolute improvement.", "Further augmented with BERT, it achieves the new state-of-the-art performance of 65.6% on the Spider leaderboard.", "In addition, we observe qualitative improvements in the model's understanding of schema linking and alignment.", "Our implementation will be open-sourced at https://github.com/Microsoft/rat-sql .", "The ability to effectively query databases with natural language (NL) unlocks the power of large datasets to the vast majority of users who are not proficient in query languages.", "As such, a large body of research has focused on the task of translating NL questions into SQL queries that existing database software can execute.", "The development of large annotated datasets of questions and the corresponding SQL queries has catalyzed progress in the field.", "In contrast to prior semantic parsing datasets (Finegan-Dollak et al., Equal contribution. Order decided by a coin toss. Work done during an internship at Microsoft Research. Work done while partly affiliated with Microsoft Research. Now at Microsoft: [email protected] . 2018), new tasks such as WikiSQL (Zhong et al., 2017) and Spider (Yu et al., 2018b) pose the real-life challenge of generalization to unseen database schemas .", "Every query is conditioned on a multi-table database schema, and the databases do not overlap between the train and test sets.", "Schema generalization is challenging for three interconnected reasons.", "First, any text-to-SQL parsing model must encode the schema into representations suitable for decoding a SQL query that might involve the given columns or tables.", "Second, these representations should encode all the information about the schema such as its column types, foreign key relations, and primary keys used for database joins.", "Finally, the model must recognize NL used to refer to columns and tables, which might differ from the referential language seen in training.", "The latter challenge is known as schema linking aligning entity references in the question to the intended schema columns or tables.", "While the question of schema encoding has been studied in recent literature (Bogin et al., 2019a), schema linking has been relatively less explored.", "Consider the example in Figure 1.", "It illustrates the challenge of ambiguity in linking: while model in the question refers to car_names.model rather than model_list.model , cars actually refers to both cars_data and car_names (but not car_makers ) for the purpose of table joining.", "To resolve the column/table references properly, the semantic parser must take into account both the known schema relations ( e.g. 
"Prior work (Bogin et al., 2019a) addressed the schema representation problem by encoding the directed graph of foreign key relations in the schema with a graph neural network (GNN).", "While effective, this approach has two important shortcomings.", "First, it does not contextualize the schema encoding with the question, thus making reasoning about schema linking difficult after both the column representations and question word representations are built.", "(Figure 1 shows the database schema of car_1, with tables cars_data (id, mpg, cylinders, edispl, horsepower, weight, accelerate, year), car_names (make_id, model, make), model_list (model_id, maker, model), and car_makers (id, maker, full_name, country), together with the natural language question \"For the cars with 4 cylinders, which model has the largest horsepower?\")", "Second, it limits information propagation during schema encoding to the predefined graph of foreign key relations.", "The advent of self-attention mechanisms in NLP (Vaswani et al., 2017) shows that global reasoning is crucial to building effective representations of relational structures.", "However, we would like any global reasoning to still take into account the aforementioned schema relations.", "In this work, we present a unified framework, called RAT-SQL (Relation-Aware Transformer), for encoding relational structure in the database schema and a given question.", "It uses relation-aware self-attention to combine global reasoning over the schema entities and question words with structured reasoning over predefined schema relations.", "We then apply RAT-SQL to the problems of schema encoding and schema linking.", "As a result, we obtain 57.2% exact match accuracy on the Spider test set.", "At the time of writing, this result is the state of the art among models unaugmented with pretrained BERT embeddings, and it further reaches the overall state of the art (65.6%) when RAT-SQL is augmented with BERT.", "In addition, we experimentally demonstrate that RAT-SQL enables the model to build more accurate internal representations of the question's true alignment with schema columns and tables.", "Semantic parsing of NL to SQL recently surged in popularity thanks to the creation of two new multi-table datasets with the challenge of schema generalization: WikiSQL (Zhong et al., 2017) and Spider (Yu et al., 2018b).", "Schema encoding is not as challenging in WikiSQL as in Spider because it lacks multi-table relations.", "Schema linking is relevant for both tasks but more challenging in Spider due to its richer NL expressiveness and less restricted SQL grammar.", "The state-of-the-art semantic parser on WikiSQL (He et al., 2019) achieves a test set accuracy of 91.8%, significantly higher than the state of the art on Spider.", "The recent state-of-the-art models evaluated on Spider use various attentional architectures for question/schema encoding and AST-based structural architectures for query decoding.", "IRNet (Guo et al., 2019) encodes the question and schema separately with an LSTM and self-attention respectively, augmenting them with custom type vectors for schema linking.", "They further use the AST-based decoder of Yin and Neubig (2017) to decode a query in an intermediate representation (IR) that exhibits higher-level abstractions than SQL.", "Bogin et al.
(2019a) encode the schema with a GNN and a similar grammar-based decoder.", "Both works emphasize schema encoding and schema linking, but design separate featurization techniques to augment word vectors (as opposed to relations between words and columns) to resolve it.", "In contrast, the RAT-SQL framework provides a unified way to encode arbitrary relational information among the inputs.", "Concurrently with this work, Bogin et al. (2019b) published Global-GNN, a different approach to schema linking for Spider, which applies global reasoning between question words and schema columns/tables.", "Global reasoning is implemented by gating the GNN that encodes the schema using the question token representations.", "This differs from RAT-SQL in two important ways:", "(a) question word representations influence the schema representations but not vice versa, and", "(b) like in other GNN-based encoders, message propagation is limited to the schema-induced edges such as foreign key relations.", "In contrast, our relation-aware transformer mechanism allows encoding arbitrary relations between question words and schema elements explicitly, and these representations are computed jointly over all inputs using self-attention.", "We use the same formulation of relation-aware self-attention as Shaw et al. (2018).", "However, they only apply it to sequences of words in the context of machine translation, and as such, their relation types only encode the relative distance between two words.", "We extend their work and show that relation-aware self-attention can effectively encode more complex relationships within an unordered set of elements (in our case, columns and tables within a database schema, as well as relations between the schema and the question).", "To the best of our knowledge, this is the first application of relation-aware self-attention to joint representation learning with both predefined and softly induced relations in the input structure.", "Hellendoorn et al. (2020) develop a similar model concurrently with this work, where they use relation-aware self-attention to encode data flow structure in source code embeddings.", "Sun et al. (2018) use a heterogeneous graph of KB facts and relevant documents for open-domain question answering.", "The nodes of their graph are analogous to the database schema nodes in RAT-SQL, but RAT-SQL also incorporates the question in the same formalism to enable joint representation learning between the question and the schema.", "First, we introduce relation-aware self-attention, a model for embedding semi-structured input sequences in a way that jointly encodes pre-existing relational structure in the input as well as induced soft relations between sequence elements in the same embedding.", "Our solutions to schema embedding and linking naturally arise as features implemented in this framework.", "Consider a set of inputs $X = \{x_i\}_{i=1}^{n}$ where $x_i \in \mathbb{R}^{d_x}$.", "In general, we consider it an unordered set, although $x_i$ may be imbued with positional embeddings to add an explicit ordering relation.",
"A self-attention encoder, or Transformer, introduced by Vaswani et al. (2017), is a stack of self-attention layers in which each layer (consisting of $H$ heads) transforms each $x_i$ into $y_i \in \mathbb{R}^{d_x}$ as follows: $e^{(h)}_{ij} = x_i W^{(h)}_Q (x_j W^{(h)}_K)^\top / \sqrt{d_z/H}$; $\alpha^{(h)}_{ij} = \mathrm{softmax}_j\{e^{(h)}_{ij}\}$; $z^{(h)}_i = \sum_{j=1}^{n} \alpha^{(h)}_{ij} (x_j W^{(h)}_V)$; $z_i = \mathrm{Concat}(z^{(1)}_i, \ldots, z^{(H)}_i)$; $\tilde{y}_i = \mathrm{LayerNorm}(x_i + z_i)$; $y_i = \mathrm{LayerNorm}(\tilde{y}_i + \mathrm{FC}(\mathrm{ReLU}(\mathrm{FC}(\tilde{y}_i))))$ (1), where FC is a fully-connected layer, LayerNorm is layer normalization (Ba et al., 2016), $1 \le h \le H$, and $W^{(h)}_Q, W^{(h)}_K, W^{(h)}_V \in \mathbb{R}^{d_x \times (d_x/H)}$.", "Self-attention thus computes a learned relation between all the input elements $x_i$, and the strength of this relation is encoded in the attention weights $\alpha^{(h)}_{ij}$.", "However, in many applications (including text-to-SQL parsing) we are aware of some preexisting relational features between the inputs and would like to bias our encoder model toward them.", "This is straightforward for non-relational features (represented directly in each $x_i$).", "We could limit the attention computation only to the hard edges where the preexisting relations are known to hold.", "This would make the model similar to a graph attention network (Velickovic et al., 2018), and would also impede the Transformer's ability to learn new relations.", "Instead, RAT provides a way to communicate known relations to the encoder by adding their representations to the attention mechanism.", "Shaw et al. (2018) describe a way to represent relative position information in a self-attention layer by changing Equation (1) as follows: $e^{(h)}_{ij} = x_i W^{(h)}_Q (x_j W^{(h)}_K + r^K_{ij})^\top / \sqrt{d_z/H}$ and $z^{(h)}_i = \sum_{j=1}^{n} \alpha^{(h)}_{ij} (x_j W^{(h)}_V + r^V_{ij})$ (2).", "Here the $r_{ij}$ terms encode the known relationship between the two elements $x_i$ and $x_j$ in the input.", "While Shaw et al. used it exclusively for relative position representation, we show how to use the same framework to effectively bias the Transformer toward arbitrary relational information.", "Consider $R$ relational features, each a binary relation $R^{(s)} \subseteq X \times X$ ($1 \le s \le R$).", "The RAT framework represents all the pre-existing features for each edge $(i, j)$ as $r^K_{ij} = r^V_{ij} = \mathrm{Concat}(\rho^{(1)}_{ij}, \ldots, \rho^{(R)}_{ij})$, where each $\rho^{(s)}_{ij}$ is either a learned embedding for the relation $R^{(s)}$ if the relation holds for the corresponding edge (i.e., if $(i, j) \in R^{(s)}$) or a zero vector of appropriate size.", "In the following section, we describe the set of relations our RAT-SQL model uses to encode a given database schema.", "We now describe the RAT-SQL framework and its application to the problems of schema encoding and linking.", "First, we formally define the text-to-SQL semantic parsing problem and its components.", "In the rest of the section, we present our implementation of schema linking in the RAT framework.", "Given a natural language question $Q$ and a schema $S = \langle C, T \rangle$ for a relational database, our goal is to generate the corresponding SQL $P$.", "Here the question $Q = q_1 \ldots q_{|Q|}$ is a sequence of words, and the schema consists of columns $C = \{c_1, \ldots, c_{|C|}\}$ and tables $T = \{t_1, \ldots, t_{|T|}\}$.", "Each column name $c_i$ contains words $c_{i,1}, \ldots, c_{i,|c_i|}$, and each table name $t_i$ contains words $t_{i,1}, \ldots, t_{i,|t_i|}$.",
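The relation-aware attention of Eq. (2) can be sketched in a few lines of PyTorch. This is a single-head illustration with one relation type per edge (the paper instead concatenates learned embeddings over R relation types and wraps the layer with the residual, LayerNorm, and feed-forward steps of Eq. (1)); all names and dimensions are ours.

```python
import torch, torch.nn as nn, torch.nn.functional as F

# Minimal single-head sketch of relation-aware self-attention (Eq. 2 above).
class RelationAwareAttention(nn.Module):
    def __init__(self, d_x, n_relation_types):
        super().__init__()
        self.q = nn.Linear(d_x, d_x, bias=False)
        self.k = nn.Linear(d_x, d_x, bias=False)
        self.v = nn.Linear(d_x, d_x, bias=False)
        # One learned vector per relation type, for r^K and r^V respectively.
        self.rel_k = nn.Embedding(n_relation_types, d_x)
        self.rel_v = nn.Embedding(n_relation_types, d_x)
        self.scale = d_x ** 0.5

    def forward(self, x, rel_ids):
        # x: (n, d_x); rel_ids: (n, n) integer relation type for each edge (i, j)
        q, k, v = self.q(x), self.k(x), self.v(x)
        rk, rv = self.rel_k(rel_ids), self.rel_v(rel_ids)    # (n, n, d_x)
        # e_ij = q_i . (k_j + r^K_ij) / sqrt(d)
        e = (q.unsqueeze(1) * (k.unsqueeze(0) + rk)).sum(-1) / self.scale
        alpha = F.softmax(e, dim=-1)                         # (n, n)
        # z_i = sum_j alpha_ij * (v_j + r^V_ij)
        z = (alpha.unsqueeze(-1) * (v.unsqueeze(0) + rv)).sum(1)
        return z                                             # (n, d_x)

n, d = 5, 16                      # 5 nodes (question words + columns + tables)
layer = RelationAwareAttention(d, n_relation_types=7)
x = torch.randn(n, d)
rel_ids = torch.randint(0, 7, (n, n))
print(layer(x, rel_ids).shape)    # torch.Size([5, 16])
```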
"The desired program $P$ is represented as an abstract syntax tree $T$ in the context-free grammar of SQL.", "Some columns in the schema are primary keys, used for uniquely indexing the corresponding table, and some are foreign keys, used to reference a primary key column in a different table.", "In addition, each column has a type $\tau \in \{\text{number}, \text{text}\}$.", "Formally, we represent the database schema as a directed graph $G = \langle V, E \rangle$.", "Its nodes $V = C \cup T$ are the columns and tables of the schema, each labeled with the words in its name (for columns, we prepend their type to the label).", "Its edges $E$ are defined by the pre-existing database relations, described in Table 1.", "Figure 2 illustrates an example graph (with a subset of the actual edges and labels).", "While $G$ holds all the known information about the schema, it is insufficient for appropriately encoding a previously unseen schema in the context of the question $Q$.", "We would like our representations of the schema $S$ and the question $Q$ to be joint, in particular for modeling the alignment between them.", "Thus, we also define the question-contextualized schema graph $G_Q = \langle V_Q, E_Q \rangle$, where $V_Q = V \cup Q = C \cup T \cup Q$ includes nodes for the question words (each labeled with a corresponding word) and $E_Q = E \cup E_{Q \leftrightarrow S}$ are the schema edges $E$ extended with additional special relations between the question words and the schema members, detailed in the rest of this section.", "For modeling text-to-SQL generation, we adopt the encoder-decoder framework.", "Given the input as a graph $G_Q$, the encoder $f_{\text{enc}}$ embeds it into joint representations $c_i$, $t_i$, $q_i$ for each column $c_i \in C$, table $t_i \in T$, and question word $q \in Q$ respectively.", "The decoder $f_{\text{dec}}$ then uses them to compute a distribution $\Pr(P \mid G_Q)$ over the SQL programs.", "Following the state-of-the-art NLP literature, our encoder first obtains initial representations $c^{\text{init}}_i$, $t^{\text{init}}_i$ for every node of $G$ by", "(a) retrieving a pretrained GloVe embedding (Pennington et al., 2014) for each word, and", "(b) processing the embeddings in each multi-word label with a bidirectional LSTM (BiLSTM) (Hochreiter and Schmidhuber, 1997).", "It also runs a separate BiLSTM over the question $Q$ to obtain initial word representations $q^{\text{init}}_i$.", "The initial representations $c^{\text{init}}_i$, $t^{\text{init}}_i$, and $q^{\text{init}}_i$ are independent of each other and devoid of any relational information known to hold in $E_Q$.", "To produce joint representations for the entire input graph $G_Q$, we use the relation-aware self-attention mechanism (Section 3).", "Its input $X$ is the set of all the node representations in $G_Q$: $X = (c^{\text{init}}_1, \ldots, c^{\text{init}}_{|C|}, t^{\text{init}}_1, \ldots, t^{\text{init}}_{|T|}, q^{\text{init}}_1, \ldots, q^{\text{init}}_{|Q|})$.", "The encoder $f_{\text{enc}}$ applies a stack of $N$ relation-aware self-attention layers to $X$, with separate weight matrices in each layer.", "The final representations $c_i$, $t_i$, $q_i$ produced by the $N$-th layer constitute the output of the whole encoder.", "Alternatively, we also consider pretrained BERT (Devlin et al., 2019) embeddings to obtain the initial representations.", "Following Huang et al. (2019) and Zhang et al. (2019), we feed $X$ to BERT and use the last hidden states as the initial representations before proceeding with the RAT layers.", "Importantly, as detailed in Section 3, every RAT layer uses self-attention between all elements of the input graph $G_Q$ to compute new contextual representations of question words and schema members; however, this self-attention is biased toward the pre-defined relations using the edge vectors $r^K_{ij}$, $r^V_{ij}$ in each layer.",
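As a toy illustration of how the joint input and its edges might be assembled before the RAT layers, the sketch below builds the node list (columns, tables, question words) and a matrix of relation ids; the relation vocabulary is a small invented subset of the schema relations in Table 1, not the paper's full set.

```python
# Illustrative sketch (not the authors' code) of assembling the joint input
# X = columns + tables + question words and an edge matrix of relation ids,
# as consumed by a RAT layer like the one sketched above.

RELATIONS = {"default": 0, "column-in-table": 1, "table-has-column": 2,
             "foreign-key": 3, "same-table": 4}

def build_edge_matrix(columns, tables, question, col2table, foreign_keys):
    nodes = [("col", c) for c in columns] + \
            [("tab", t) for t in tables] + \
            [("q", w) for w in question]
    n = len(nodes)
    rel = [[RELATIONS["default"]] * n for _ in range(n)]
    for i, (ki, vi) in enumerate(nodes):
        for j, (kj, vj) in enumerate(nodes):
            if ki == "col" and kj == "tab" and col2table[vi] == vj:
                rel[i][j] = RELATIONS["column-in-table"]
                rel[j][i] = RELATIONS["table-has-column"]
            elif ki == kj == "col" and vi != vj and col2table[vi] == col2table[vj]:
                rel[i][j] = RELATIONS["same-table"]
            elif ki == kj == "col" and (vi, vj) in foreign_keys:
                rel[i][j] = RELATIONS["foreign-key"]
    return nodes, rel

nodes, rel = build_edge_matrix(
    columns=["model", "horsepower"], tables=["car_names"],
    question=["which", "model"],
    col2table={"model": "car_names", "horsepower": "car_names"},
    foreign_keys=set())
print(len(nodes), rel[0][2])  # 5 1  (column "model" is in table "car_names")
```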
"We define the set of used relation types in a way that directly addresses the challenges of schema embedding and linking.", "Occurrences of these relations between the question and the schema constitute the edges $E_{Q \leftrightarrow S}$.", "Most of these relation types address schema linking (Section 4.3); we also add some auxiliary edges to aid schema encoding (see Appendix A).", "Schema linking relations in $E_{Q \leftrightarrow S}$ aid the model with aligning column/table references in the question to the corresponding schema columns/tables.", "This alignment is implicitly defined by two kinds of information in the input, matching names and matching values, which we detail in order below.", "Name-Based Linking Name-based linking refers to exact or partial occurrences of the column/table names in the question, such as the occurrences of cylinders and cars in the question in Figure 1.", "Textual matches are the most explicit evidence of question-schema alignment and, as such, one might expect them to be directly beneficial to the encoder.", "However, in all our experiments the representations produced by vanilla self-attention were insensitive to textual matches, even though their initial representations were identical.", "Brunner et al. (2020) suggest that representations produced by Transformers mix the information from different positions and cease to be directly interpretable after two or more layers, which might explain our observations.", "Thus, to remedy this phenomenon, we explicitly encode name-based linking using RAT relations.", "Specifically, for all n-grams of length 1 to 5 in the question, we determine (1) whether the n-gram exactly matches the name of a column/table (exact match) or (2) whether the n-gram is a subsequence of the name of a column/table (partial match).", "Then, for every $(i, j)$ where $x_i \in Q$, $x_j \in S$ (or vice versa), we set $r_{ij} \in E_{Q \leftrightarrow S}$ to QUESTION-COLUMN-M, QUESTION-TABLE-M, COLUMN-QUESTION-M, or TABLE-QUESTION-M, depending on the types of $x_i$ and $x_j$, where M is one of EXACTMATCH, PARTIALMATCH, or NOMATCH.", "Value-Based Linking Question-schema alignment also occurs when the question mentions any values that occur in the database and consequently participate in the desired SQL, such as 4 in Figure 1.", "While this example makes the alignment explicit by mentioning the column name cylinders, many real-world questions do not.", "Thus, linking a value to the corresponding column requires background knowledge.", "The database itself is the most comprehensive and readily available source of knowledge about possible values, but it is also the most challenging to process in an end-to-end model because of the privacy and speed impact.", "However, the RAT framework allows us to outsource this processing to the database engine and augment $G_Q$ with potential value-based linking without exposing the model itself to the data.", "Specifically, we add a new COLUMN-VALUE relation between any word $q_i$ and column name $c_j$ such that $q_i$ occurs as a value (or a full word within a value) of $c_j$ (this procedure matches that of Guo et al. (2019), but we use the matching information differently in RAT).",
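A rough sketch of the name-based linking rule described above: tag each question word against one schema name using n-grams of length 1 to 5. Lower-casing, the word-membership test used for partial matches, and the tie-breaking in favor of exact matches are our assumptions, not the paper's exact procedure.

```python
# Hedged sketch of name-based linking: for every question n-gram of length
# 1-5, label covered words as EXACTMATCH, PARTIALMATCH, or NOMATCH against
# a single schema name (the full model does this per (word, schema) pair).

def link_names(question_tokens, schema_name_tokens, max_n=5):
    q = [w.lower() for w in question_tokens]
    name = [w.lower() for w in schema_name_tokens]
    tags = ["NOMATCH"] * len(q)
    for n in range(1, max_n + 1):
        for i in range(len(q) - n + 1):
            gram = q[i:i + n]
            if gram == name:
                label = "EXACTMATCH"
            elif all(w in name for w in gram):  # subsequence-style partial match
                label = "PARTIALMATCH"
            else:
                continue
            for k in range(i, i + n):
                if tags[k] != "EXACTMATCH":     # keep the strongest label
                    tags[k] = label
    return tags

question = ["which", "model", "has", "the", "largest", "horsepower"]
print(link_names(question, ["model"]))
# ['NOMATCH', 'EXACTMATCH', 'NOMATCH', 'NOMATCH', 'NOMATCH', 'NOMATCH']
```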
"This simple approach drastically improves the performance of RAT-SQL (see Section 5).", "It also directly addresses the aforementioned DB challenges:", "(a) the model is never exposed to database content that does not occur in the question, and", "(b) word matches are retrieved quickly via DB indices and textual search.", "Memory-Schema Alignment Matrix Our intuition suggests that the columns and tables which occur in the SQL $P$ will generally have a corresponding reference in the natural language question.", "To capture this intuition in the model, we apply relation-aware attention as a pointer mechanism between every memory element in $y$ and all the columns/tables to compute explicit alignment matrices $L^{\text{col}} \in \mathbb{R}^{|y| \times |C|}$ and $L^{\text{tab}} \in \mathbb{R}^{|y| \times |T|}$: $\tilde{L}^{\text{col}}_{i,j} = y_i W^{\text{col}}_Q (c^{\text{final}}_j W^{\text{col}}_K + r^K_{ij})^\top / \sqrt{d_x}$, $\tilde{L}^{\text{tab}}_{i,j} = y_i W^{\text{tab}}_Q (t^{\text{final}}_j W^{\text{tab}}_K + r^K_{ij})^\top / \sqrt{d_x}$, $L^{\text{col}}_{i,j} = \mathrm{softmax}_j\{\tilde{L}^{\text{col}}_{i,j}\}$, $L^{\text{tab}}_{i,j} = \mathrm{softmax}_j\{\tilde{L}^{\text{tab}}_{i,j}\}$ (3).", "Intuitively, the alignment matrices in Eq. (3) should resemble the real discrete alignments and therefore should respect certain constraints like sparsity.", "When the encoder is sufficiently parameterized, sparsity tends to arise with learning, but we can also encourage it with an explicit objective.", "Appendix B presents this objective and discusses our experiments with sparse alignment in RAT-SQL.", "The decoder $f_{\text{dec}}$ of RAT-SQL follows the tree-structured architecture of Yin and Neubig (2017).", "It generates the SQL $P$ as an abstract syntax tree in depth-first traversal order, using an LSTM to output a sequence of decoder actions that either", "(i) expand the last generated node into a grammar rule, called APPLYRULE, or, when completing a leaf node,", "(ii) choose a column/table from the schema, called SELECTCOLUMN and SELECTTABLE.", "Formally, $\Pr(P \mid Y) = \prod_t \Pr(a_t \mid a_{<t}, Y)$, where $Y = f_{\text{enc}}(G_Q)$ is the final encoding of the question and schema, and $a_{<t}$ are all the previous actions.", "In a tree-structured decoder, the LSTM state is updated as $m_t, h_t = f_{\text{LSTM}}([a_{t-1} \| z_t \| h_{p_t} \| a_{p_t} \| n_{f_t}], m_{t-1}, h_{t-1})$, where $m_t$ is the LSTM cell state, $h_t$ is the LSTM output at step $t$, $a_{t-1}$ is the embedding of the previous action, $p_t$ is the step corresponding to expanding the parent AST node of the current node, and $n_{f_t}$ is the embedding of the current node type.", "(A figure here illustrates column selection for the question \"How many airlines ...\" over an airlines/airports schema, with alignment weights 0.1, 0.1, and 0.8 over the candidate columns.)", "Finally, $z_t$ is the context representation, computed using multi-head attention (with 8 heads) of $h_{t-1}$ over $Y$.", "For APPLYRULE[$R$], we compute $\Pr(a_t = \text{APPLYRULE}[R] \mid a_{<t}, y) = \mathrm{softmax}_R(g(h_t))$, where $g(\cdot)$ is a 2-layer MLP with a tanh non-linearity.", "For SELECTCOLUMN, we compute $\tilde{\lambda}_i = h_t W^{\text{sc}}_Q (y_i W^{\text{sc}}_K)^\top / \sqrt{d_x}$, $\lambda_i = \mathrm{softmax}_i\{\tilde{\lambda}_i\}$, and $\Pr(a_t = \text{SELECTCOLUMN}[i] \mid a_{<t}, y) = \sum_{j=1}^{|y|} \lambda_j L^{\text{col}}_{j,i}$, and similarly for SELECTTABLE.", "We refer the reader to Yin and Neubig (2017) for details.", "We implemented RAT-SQL in PyTorch (Paszke et al., 2017).", "During preprocessing, the input questions, column names, and table names are tokenized and lemmatized with the StanfordNLP toolkit (Manning et al., 2014).", "Within the encoder, we use GloVe (Pennington et al., 2014) word embeddings, held fixed in training except for the 50 most common words in the training set.",
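The SELECTCOLUMN computation above combines decoder-state attention over the memory with the alignment matrix of Eq. (3); the shape-level sketch below uses random tensors in place of learned weights and omits the relation terms, so it only demonstrates how the attention weights and the column alignment compose into a column distribution.

```python
import torch, torch.nn.functional as F

# Compact, untrained sketch of SELECTCOLUMN scoring: attention weights
# lambda over memory Y are combined with the memory-schema alignment
# matrix L_col to produce a probability over columns. All tensors here
# are random placeholders standing in for learned quantities.

d_x, mem_len, n_cols = 16, 6, 3
h_t = torch.randn(1, d_x)                    # decoder state at step t
Y = torch.randn(mem_len, d_x)                # encoder memory
W_q = torch.randn(d_x, d_x)
W_k = torch.randn(d_x, d_x)
L_col = F.softmax(torch.randn(mem_len, n_cols), dim=-1)  # stand-in for Eq. (3)

scores = (h_t @ W_q) @ (Y @ W_k).T / d_x ** 0.5          # (1, mem_len)
lam = F.softmax(scores, dim=-1)                          # attention over memory
p_column = lam @ L_col                                   # (1, n_cols)
print(p_column, p_column.sum())                          # a distribution; sums to 1
```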
training set.", "For RAT-SQL BERT, we use the WordPiece tokenization.", "All word embeddings have dimension 300 .", "The bidirectional LSTMs have hidden size 128 per direction, and use the recurrent dropout method of Gal and Ghahramani (2016) with rate 0 .", "2 .", "We stack 8 relation-aware self-attention layers on top of the bidirectional LSTMs.", "Within them, we set d x = d z = 256 , H = 8 , and use dropout with rate 0 .", "1 .", "The position-wise feed-forward network has inner layer dimension 1024.", "Inside the decoder, we use rule embeddings of size 128 , node type embeddings of size 64 , and a hidden size of 512 inside the LSTM with dropout of 0 .", "21 .", "We used the Adam optimizer (Kingma and Ba, 2015) with the default hyperparameters.", "During the first warmup _ steps = max _ steps/ 20 steps of training, the learning rate linearly increases from 0 to 7 .", "4 10 4 .", "Afterwards, it is annealed to 0 with 7 .", "4 10 4 (1 step warmup _ steps max _ steps warmup _ steps ) 0 .", "5 .", "We use a batch size of 20 and train for up to 40,000 steps.", "For RAT-SQL + BERT, we use a separate learning rate of 3 10 6 to fine-tune BERT, a batch size of 24 and train for up to 90,000 steps.", "Hyperparameter Search We tuned the batch size (20, 50, 80), number of RAT layers (4, 6, 8), dropout (uniformly sampled from [0 . 1 , 0 . 3] ), hidden size of decoder RNN (256, 512), max learning rate (log-uniformly sampled from [5 10 4 , 2 10 3 ] ).", "We randomly sampled 100 configurations and optimized on the dev set.", "RAT-SQL + BERT reuses most hyperparameters of RAT-SQL, only tuning the BERT learning rate (1 10 4 , 3 10 4 , 5 10 4 ), number of RAT layers (6, 8, 10), number of training steps (4 10 4 , 6 10 4 , 9 10 4 ).", "We use the Spider dataset (Yu et al., 2018b) for most of our experiments, and also conduct preliminary experiments on WikiSQL (Zhong et al., 2017) to confirm generalization to other datasets.", "As described by Yu et al., Spider contains 8,659 examples (questions and SQL queries, with the accompanying schemas), including 1,659 examples lifted from the Restaurants (Popescu et al., 2003; Tang and Mooney, 2000), GeoQuery (Zelle and Mooney, 1996), Scholar (Iyer et al., 2017), Academic (Li and Jagadish, 2014), Yelp and IMDB (Yaghmazadeh et al., 2017) datasets.", "As Yu et al. (2018b) make the test set accessible only through an evaluation server, we perform Split Easy Medium Hard Extra Hard All RAT-SQL Dev 80.4 63.9 55.7 40.6 62.7 Test 74.8 60.7 53.6 31.5 57.2 RAT-SQL + BERT Dev 86.4 73.6 62.1 42.9 69.7 Test 83.0 71.3 58.3 38.4 65.6 Table 3: Accuracy on the Spider development and test sets, by difficulty as defined by Yu et al. (2018b).", "most evaluations (other than the final accuracy measurement) using the development set.", "It contains 1,034 examples, with databases and schemas distinct from those in the training set.", "We report results using the same metrics as Yu et al. 
"We report results using the same metrics as Yu et al. (2018a): exact match accuracy on all examples, as well as accuracy divided by difficulty levels.", "As in previous work on Spider, these metrics do not measure the model's performance on generating values in the SQL.", "In Table 2 we show accuracy on the (hidden) Spider test set for RAT-SQL and compare to all other approaches at or near state-of-the-art (according to the official leaderboard).", "RAT-SQL outperforms all other methods that are not augmented with BERT embeddings by a large margin of 8.7%.", "Surprisingly, it even beats other BERT-augmented models.", "When RAT-SQL is further augmented with BERT, it achieves the new state-of-the-art performance.", "Compared with other BERT-augmented models, our RAT-SQL + BERT has a smaller generalization gap between the development and test sets.", "We also provide a breakdown of the accuracy by difficulty in Table 3.", "As expected, performance drops with increasing difficulty.", "The overall generalization gap of RAT-SQL between development and test was strongly affected by the significant drop in accuracy (9%) on the extra hard questions.", "When RAT-SQL is augmented with BERT, the generalization gaps of most difficulties are reduced.", "Ablation Study Table 4 shows an ablation study over different RAT-based relations.", "(Figure 5: Alignment between the question \"For the cars with 4 cylinders, which model has the largest horsepower\" and the database car_1 schema (columns and tables) depicted in Figure 1.)", "The ablations are run on RAT-SQL without value-based linking to avoid interference with information from the database.", "Schema linking and graph relations make statistically significant improvements (p < 0.001).", "The full model accuracy here slightly differs from Table 2 because the latter shows the best model from a hyperparameter sweep (used for test evaluation) while the former gives the mean over five runs where we only change the random seeds.", "We also conducted preliminary experiments on WikiSQL (Zhong et al., 2017) to test the generalization of RAT-SQL to new datasets.", "Although WikiSQL lacks multi-table schemas (and thus its challenge of schema encoding is not as prominent), it still presents the challenges of schema linking and generalization to new schemas.", "For simplicity of experiments, we did not implement either BERT augmentation or execution-guided decoding (EG) (Wang et al., 2018), both of which are common in state-of-the-art WikiSQL models.", "We thus only compare to the models that also lack these two enhancements.", "While not reaching state of the art, RAT-SQL still achieves competitive performance on WikiSQL, as shown in Table 5.", "Most of the gap between its accuracy and the state of the art is due to the simplified implementation of value decoding, which is required for WikiSQL evaluation but not in Spider.", "Our value decoding for these experiments is a simple token-based pointer mechanism, which often fails to retrieve multi-token value constants accurately.", "A robust value decoding mechanism in RAT-SQL is an important extension that we plan to address outside the scope of this work.", "Alignment Recall from Section 4 that we explicitly model the alignment matrix between question words and table columns, which is used during decoding for column and table selection.", "The existence of the alignment matrix provides a mechanism for the model to align words to columns.", "An accurate alignment representation has other benefits, such as identifying question words to copy in order to emit a constant value in SQL.", "In Figure 5 we show the alignment generated by our model on the example from Figure 1.",
"(The full alignment also maps from column and table names, but those end up simply aligning to themselves or to the table they belong to, so we omit them for brevity.)", "For the three words that reference columns (cylinders, model, horsepower), the alignment matrix correctly identifies their corresponding columns.", "The alignments of other words are strongly affected by these three keywords, resulting in a sparse, span-to-column-like alignment, e.g., largest horsepower to horsepower.", "The tables cars_data and car_names are implicitly mentioned by the word cars.", "The alignment matrix successfully infers to use these two tables instead of car_makers, using the evidence that they contain the three mentioned columns.", "The Need for Schema Linking One natural question is how often the decoder fails to select the correct column, even with the schema encoding and linking improvements we have made.", "To answer this, we conducted an oracle experiment (see Table 6).", "For oracle sketch, at every grammar nonterminal the decoder is forced to choose the correct production, so the final SQL sketch exactly matches that of the ground truth.", "The rest of the decoding proceeds conditioned on that choice.", "Likewise, oracle columns forces the decoder to emit the correct column/table at terminal nodes.", "With both oracles, we see an accuracy of 99.4%, which simply verifies that our grammar is sufficient to answer nearly every question in the data set.", "With just oracle sketch, the accuracy is only 73.0%, which means that 72.4% of the questions that RAT-SQL gets wrong and could get right have incorrect column or table selection.", "Similarly, with just oracle columns, the accuracy is 69.8%, which means that 81.0% of the questions that RAT-SQL gets wrong have incorrect structure.", "In other words, most questions have both the column and the structure wrong, so both problems require important future work.", "Error Analysis An analysis of mispredicted SQL queries in the Spider dev set showed three main causes of evaluation errors.", "(I) 18% of the mispredicted queries are in fact equivalent implementations of the NL intent with a different SQL syntax (e.g., ORDER BY C LIMIT 1 vs. SELECT MIN(C)).",
"Measuring execution accuracy rather than exact match would detect them as valid.", "(II) 39% of errors involve a wrong, missing, or extraneous column in the SELECT clause.", "This is a limitation of our schema linking mechanism, which, while substantially improving column resolution, still struggles with some ambiguous references.", "Some of them are unavoidable, as Spider questions do not always specify which columns should be returned by the desired SQL.", "Finally, (III) 29% of errors are missing a WHERE clause, which is a common error class in text-to-SQL models, as reported by prior works.", "One common example is domain-specific phrasing such as older than 21, which requires background knowledge to map it to age > 21 rather than age < 21.", "Such errors disappear after in-domain fine-tuning.", "Despite active research in text-to-SQL parsing, many contemporary models struggle to learn good representations for a given database schema, as well as to properly link column/table references in the question.", "These problems are related: to encode and use columns/tables from the schema, the model must reason about their role in the context of the question.", "In this work, we present a unified framework for addressing the schema encoding and linking challenges.", "Thanks to relation-aware self-attention, it jointly learns schema and question representations based on their alignment with each other and on the schema relations.", "Empirically, the RAT framework allows us to attain a significant state-of-the-art improvement on text-to-SQL parsing.", "Qualitatively, it provides a way to combine predefined hard schema relations and inferred soft self-attended relations in the same encoder architecture.", "This representation learning will be beneficial in tasks beyond text-to-SQL, as long as the input has some predefined structure.", "We thank Jianfeng Gao, Vladlen Koltun, Chris Meek, and Vignesh Shiv for the discussions that helped shape this work.", "We thank Bo Pang and Tao Yu for their help with the evaluation.", "We also thank the anonymous reviewers for their invaluable feedback." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "result", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "objective", "method", "other", "objective", "objective", "abstain", "other", "other", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "other", "other", "other" ]
[ "A long-term goal of AI research is to build intelligent agents that can communicate with humans in natural language, perceive the environment, and perform real-world tasks.", "Vision-and-Language Navigation (VLN) is a fundamental and interdisciplinary research topic towards this goal, and receives increasing attention from natural language processing, computer vision, robotics, and machine learning communities.", "In this paper, we review contemporary studies in the emerging field of VLN, covering tasks, evaluation metrics, methods, etc.", "Through structured analysis of current progress and challenges, we highlight the limitations of current VLN and opportunities for future work.", "This paper serves as a thorough reference for the VLN research community.", "1 1 Introduction Humans communicate with each other using natural language to issue tasks and request help.", "An agent that can understand human language and navigate intelligently would significantly benefit human society, both personally and professionally.", "Such an agent can be spoken to in natural language, and would autonomously execute tasks such as household chores indoors, repetitive delivery work outdoors, or work in hazardous conditions following human commands (bridge inspection; fire-fighting).", "Scientifically, developing such an agent explores how an artificial agent interprets natural language from humans, perceives its visual environment, and utilizes that information to navigate to complete a task successfully.", "Vision-and-Language Navigation (VLN) (An-derson et al., 2018b; Chen et al., 2019; Thomason et al., 2019b) is an emerging research field that aims to build such an embodied agent that can 1 We also release a Github repo to keep track of advances in VLN: https://github.com/eric-ai-lab/ awesome-vision-language-navigation Natural Language Communication O b se r va t i on A c t i on O b se r va t i on A c t i on Agent Oracle Environment Figure 1: The agent and oracle discuss the VLN task in natural language.", "communicate with humans in natural language and navigate in real 3D environments.", "VLN extends visual navigation in both simulated (Zhu et al., 2017; Mirowski, 2019) and real environments (Mirowski et al., 2018) with natural language communication.", "As illustrated in Figure 1, VLN is a task that involves the oracle (frequently a human), the agent, and the environment.", "The agent and the oracle communicate in natural language.", "The agent may ask for guidance and the oracle could respond.", "The agent navigates and interacts with the environment to complete the task according to the instructions received and the environment observed.", "Meanwhile, the oracle observes the environment and agent status, and may interact with the environment to help the agent.", "Since the development and release of works such as Room-to-Room (R2R) (Anderson et al., 2018b), many VLN datasets have been introduced.", "Regarding the degree of communication, researchers create benchmarks where the agent is required to passively understand one instruction before navigation, to benchmarks where agents converse with the oracle in free-form dialog.", "Regarding the task objective, the requirements for the agent range from strictly following the route described in the ini-7606 tial instruction to actively exploring the environment and interacting with objects.", "In a slight abuse of terminology, we refer to benchmarks that involve object interaction together with substantial sub-problems of navigation and localization, 
such as ALFRED (Shridhar et al., 2020), as VLN benchmarks.", "Many challenges exist in VLN tasks.", "First, VLN faces a complex environment and requires effective understanding and alignment of information from different modalities.", "Second, VLN agents require a reasoning strategy for the navigation process.", "Data scarcity is also an obstacle.", "Lastly, the generalization of a model trained in seen environments to unseen environments is also essential.", "We categorize the solutions according to the respective challenges.", "(1) Representation learning methods help understand information from different modalities.", "(2) Action strategy learning aims to make reasonable decisions based on gathered information.", "(3) Data-centric learning methods effectively utilize the data and address data challenges such as data scarcity.", "(4) Prior exploration helps the model familiarize itself with the test environment, improving its ability to generalize.", "We make three primary contributions.", "(1) We systematically categorize current VLN benchmarks from communication complexity and task objective perspectives, with each category focusing on a different type of VLN task.", "(2) We hierarchically classify current solutions and the papers within the scope.", "(3) We discuss potential opportunities and identify future directions.", "The ability for an agent to interpret natural language instructions (and in some instances, request feedback during navigation) is what makes VLN unique from visual navigation (Bonin-Font et al., 2008).", "In Table 2, we mainly categorize current datasets on two axes, Communication Complexity and Task Objective .", "Communication Complexity defines the level at which the agent may converse with the oracle, and we differentiate three levels: In the first level, the agent is only required to understand an Initial Instruction before navigation starts.", "In the second level, the agent sends a signal for help whenever it is unsure, utilizing the Guidance from the oracle.", "In the third level, the agent with Dialogue ability asks questions in the form of natural language during the navigation and understands further oracle guidance.", "Task Objective defines how the agent attains its goal based on the initial instructions from the oracle.", "In the first objective type, Fine-grained Navigation , the agent can find the target according to a detailed step-by-step route description.", "In the second type, Coarse-grained Navigation , the agent is required to find a distant target goal with a coarse navigation description, requiring the agent to reason a path in a navigable environment and possibly elicit additional oracle help.", "Tasks in the previous two types only require the agent to navigate to complete the mission.", "In the third type, Navigation and Object Interaction , besides reasoning a path, the agent also needs to interact with objects in the environment to achieve the goal since the object might be hidden or need to change physical states.", "2 As with coarse-grained navigation, some object interaction tasks can require additional supervision via dialogue with the oracle.", "In many VLN benchmarks, the agent is given a natural language instruction for the whole navigation process, such as Go upstairs and pass the table in the living room. Turn left and go through the door in the middle.", "Fine-grained Navigation An agent needs to strictly follow the natural language instruction to reach the target goal.", "Anderson et al. 
(2018b) create the R2R dataset based on the Matterport3D simulator (Chang et al., 2017).", "An embodied agent in R2R moves through a house in the simulator traversing edges on a navigation graph, jumping between adjacent nodes containing panoramic views.", "R2R has been extended to create other VLN benchmarks.", "Room-for-Room joins paths in R2R into longer trajectories (Jain et al., 2019).", "Yan et al. (2020) collect XL-R2R to extend R2R with Chinese instructions.", "RxR (Ku et al., 2020) contains instructions in English, Hindi, and Telugu.", "The dataset has more samples, and its instructions are time-aligned to the virtual poses of the instructor.", "The English split of RxR is further extended to build Landmark-RxR (He et al., 2021) by incorporating landmark information.", "In most current datasets, agents traverse a navigation graph at predefined viewpoints.", "(Note on the task typology above: Navigation and Object Interaction includes both fine-grained and coarse-grained instructions, which ideally should be split further; given that there are only a few datasets in this category, we keep the current categorization in Table 2.)", "To facilitate transfer learning to real agents, VLN tasks should provide a continuous action space and a freely navigable environment.", "To this end, Krantz et al. (2020) reconstruct the navigation-graph-based R2R trajectories in continuous environments and create VLN-CE.", "Irshad et al. (2021) propose the Robo-VLN task, where the agent operates in a continuous action space over long-horizon trajectories.", "Outdoor environments are usually more complex and contain more objects than indoor environments.", "In TOUCHDOWN (Chen et al., 2019), an agent follows instructions to navigate a street-view-rendered simulation of New York City to find a hidden object.", "Most photo-realistic outdoor VLN datasets, including TOUCHDOWN (Chen et al., 2019), StreetLearn (Mirowski et al., 2019; Mehta et al., 2020), StreetNav (Hermann et al., 2020), and Talk2Nav (Vasudevan et al., 2021), are built on Google Street View.", "Some work uses natural language to guide drones.", "LANI (Misra et al., 2018) is a 3D synthetic navigation environment, where an agent navigates between landmarks following natural language instructions.", "Current datasets on drone navigation usually use a synthetic environment such as Unity3D (Blukis et al., 2018, 2019).", "Coarse-grained Navigation In coarse-grained navigation, the agent is not given a detailed route, since it may be unknown to the human instructor (oracle).", "Usually, instructions are more concise and contain merely information about the target goal.", "In Embodied QA (Das et al., 2018), the agent navigates through the environment to find the answer to a given question.", "The instructions in REVERIE (Qi et al., 2020b) are annotated by humans, and are thus more complicated and diverse.", "The agent navigates through the rooms and must differentiate the target object against multiple competing candidates.", "In SOON (Zhu et al., 2021a), an agent receives a long, complex coarse-to-fine instruction which gradually narrows down the search scope.", "Navigation+Object Interaction For some tasks, the target object might be hidden (e.g., the spoon in a drawer) or need to change status (e.g., a sliced apple is requested but only a whole apple is available).", "In these scenarios, it is necessary to interact with the objects to accomplish the task (e.g., opening the drawer or cutting the apple).", "Interactive Question Answering (IQA) requires the agent to navigate and sometimes to interact with objects to answer a given question.",
"Based on indoor scenes in AI2-THOR (Kolve et al., 2017), Shridhar et al. (2020) propose the ALFRED dataset, where agents are provided with both coarse-grained and fine-grained instructions to complete household tasks in an interactive visual environment.", "CHAI (Misra et al., 2018) requires the agent to navigate and simply interact with the environment.", "Agents in Guidance VLN tasks may receive further natural language guidance from the oracle during navigation.", "For example, if the agent is unsure of the next step (e.g., entering the kitchen), it can send a [help] signal, and the oracle will assist by responding go left (Nguyen et al., 2019).", "Fine-grained Navigation The initial fine-grained navigation instruction may still be ambiguous in a complex environment.", "Guidance from the oracle can clarify possible confusion.", "Chi et al. (2020) introduce Just Ask, a task where an agent can ask the oracle for help during navigation.", "Coarse-grained Navigation With only a coarse-grained instruction given at the beginning, the agent tends to be more confused and spends more time exploring.", "Further guidance resolves this ambiguity.", "VNLA (Nguyen et al., 2019) and HANNA (Nguyen and Daumé III, 2019) both train an agent to navigate indoors to find objects.", "The agent can request help from the oracle, which responds by providing a subtask that helps the agent make progress.", "While the oracle in VNLA uses a predefined script to respond, the oracle in HANNA uses a neural network to generate natural language responses.", "CEREALBAR (Suhr et al., 2019) is a collaborative task between a leader and a follower.", "Both agents move in a virtual game environment to collect valid sets of cards.", "Navigation+Object Interaction While VLN is still in its youth, there are no VLN datasets in support of guidance together with object interaction.", "It is human-friendly to use natural language to request help (Banerjee et al., 2020; Thomason et al., 2019b).", "For example, when the agent is not sure about what fruit the human wants, it could ask \"What fruit do you want, the banana in the refrigerator or the apple on the table?\", and the human response would provide a clear navigation direction.", "Fine-grained Navigation No datasets fall within this category.", "Currently, route-detailed instructions with possible guidance help the agent achieve relatively good performance in most simulated environments.", "We expect datasets to be developed for this category for super-long-horizon navigation tasks in complex environments, especially those with rich dynamics where dialog is necessary to clear up confusion.", "Coarse-grained Navigation CVDN (Thomason et al., 2019b) is a dataset of human-human dialogues.", "Besides interpreting a natural language instruction and deciding on the following action, the VLN agent also needs to ask questions in natural language for guidance.", "The oracle, with knowledge of the best next steps, needs to understand and correctly answer those questions.", "Dialogue is important in complex outdoor environments.", "de Vries et al.
(2018) introduce the Talk the Walk dataset, where the guide has knowledge from a map and guides the tourist to a destination, but does not know the tourist's location, while the tourist navigates a 2D grid via discrete actions.", "Navigation+Object Interaction Minecraft Collaborative Building (Narayan-Chen et al., 2019) studies how an agent places blocks into a building by communicating with the oracle.", "TEACh (Padmakumar et al., 2021) is a dataset that studies object interaction and navigation with free-form dialog.", "The follower converses with the commander and interacts with the environment to complete various household tasks such as making coffee.", "DialFRED (Gao et al., 2022) extends the ALFRED (Shridhar et al., 2020) dataset by allowing the agent to actively ask questions.", "Goal-oriented Metrics mainly consider the agent's proximity to the goal.", "The most intuitive is Success Rate (SR), which measures how frequently an agent completes the task within a certain distance of the goal.", "Goal Progress (Thomason et al., 2019b) measures the reduction in the remaining distance to the target goal.", "Path Length (PL) measures the total length of the navigation path.", "Shortest-Path Distance (SPD) measures the mean distance between the agent's final location and the goal.", "Since a longer path length is undesirable (it increases duration and wear and tear on actual robots), Success weighted by Path Length (SPL) (Anderson et al., 2018a) balances both Success Rate and Path Length.", "Similarly, Success weighted by Edit Distance (SED) (Chen et al., 2019) compares the expert's actions/trajectory to the agent's actions/trajectory, also balancing SR and PL.", "Oracle Navigation Error (ONE) takes the shortest distance from any node in the path rather than just the last node, and Oracle Success Rate (OSR) measures whether any node in the path is within a threshold of the target location.", "Path-fidelity Metrics evaluate to what extent an agent follows the desired path.", "Some tasks require the agent not only to find the goal location but also to follow a specific path.", "Fidelity measures the match between the action sequence in the expert demonstration and the action sequence in the agent trajectory.", "Coverage weighted by Length Score (CLS) (Jain et al., 2019) is the product of the Path Coverage (PC) and the Length Score (LS) with respect to the reference path.", "It measures how closely an agent's trajectory follows the reference path.", "Normalized Dynamic Time Warping (nDTW) (Ilharco et al., 2019) softly penalizes deviations from the reference path to calculate the match between two paths.", "Success weighted by normalized Dynamic Time Warping (SDTW) (Ilharco et al., 2019) further constrains nDTW to only successful episodes to capture both success and fidelity.", "As shown in Figure 2, we categorize existing methods into Representation Learning, Action Strategy Learning, Data-centric Learning, and Prior Exploration.", "Since VLN involves multiple modalities, including vision, language, and action, representation learning methods help the agent understand the relations between these modalities.", "Moreover, VLN is a complex reasoning task where mission results depend on the accumulated steps, and better action strategies help the decision-making process.", "Additionally, VLN tasks face challenges within their training data.", "One severe problem is scarcity.", "Collecting training data for VLN is expensive and time-consuming, and the existing VLN datasets are relatively small with respect to the
complexity of VLN tasks.", "Therefore, data-centric methods help to better utilize the existing data and to create more training data.", "Prior exploration helps adapt agents to previously unseen environments, improving their ability to generalize and decreasing the performance gap between seen and unseen environments.", "Representation learning helps the agent understand how the words in the instruction relate to the perceived features in the environment.", "Vision or Language Using a pretrained model to initialize a vision or text encoder provides agents with single-modality knowledge.", "Pretrained vision models may use a ResNet (He et al., 2016) or Vision Transformers (Dosovitskiy et al., 2020).", "Other navigation tasks (Wijmans et al., 2019b) may also provide visual initialization (Krantz et al., 2020).", "Large pretrained language models such as BERT (Devlin et al., 2019) and GPT (Radford et al., 2019) can encode language and improve instruction understanding (Li et al., 2019), and they can be further pretrained on VLN instructions (Pashevich et al., 2021) before fine-tuning on the VLN task.", "Vision and Language Vision-and-language pretrained models provide good joint representations of text and vision.", "A common practice is to initialize the VLN agent with a pretrained model such as ViLBERT (Lu et al., 2019).", "The agent may be further trained with VLN-specific features such as objects and rooms (Qi et al., 2021).", "VLN Downstream tasks benefit from being closely related to the pretraining task.", "Researchers have also explored pretraining in the VLN domain directly.", "VLN-BERT (Majumdar et al., 2020) pretrains navigation models to measure the compatibility between paths and instructions, which formulates VLN as a path selection problem.", "PREVALENT (Hao et al., 2020) is trained from scratch on image-text-action triplets to learn textual representations for VLN tasks.", "The output embedding of the [CLS] token in BERT-based pretrained models can be leveraged in a recurrent fashion to represent the history state (Hong et al., 2021; Moudgil et al., 2021).", "Airbert (Guhur et al., 2021) achieves good performance in the few-shot setting after pretraining on a large-scale in-domain dataset.", "Semantic understanding of VLN tasks incorporates knowledge about important features in VLN.", "In addition to the raw features, high-level semantic representations also improve performance in unseen environments.", "Intra-Modality Visual or textual modalities can be decomposed into many features, which matter differently in VLN.", "The overall visual features extracted by a neural model may actually hurt performance in some cases (Thomason et al., 2019a; Hu et al., 2019; Zhang et al., 2020b).", "Therefore, it is important to find the feature(s) that best improve performance.", "High-level features such as visual appearance, route structure, and detected objects outperform the low-level visual features extracted by a CNN (Hu et al., 2019).", "Different types of tokens within the instruction also function differently (Zhu et al., 2021b).", "Extracting these tokens and encoding the object tokens and direction tokens is crucial (Qi et al., 2020a; Zhu et al., 2021b).", "Inter-Modality Semantic connections between different modalities (actions, scenes, observed objects, direction clues, and objects mentioned in instructions) can be extracted and then softly aligned with an attention mechanism (Qi et al., 2020a; Gao et al., 2021).", "The soft alignment also highlights relevant parts of the instruction with respect to the
current step (Landi et al., 2019; Zhang et al., 2020a).", "Building a graph to incorporate structured information from the instruction and environment observations provides explicit semantic relations to guide the navigation.", "A graph neural network may encode the relations between text and vision to better interpret contextual information (Hong et al., 2020a; Deng et al., 2020).", "The graph can record location information during navigation, which can be used to predict the most likely trajectory (Anderson et al., 2019a) or a probability distribution over the action space (Deng et al., 2020).", "When combined with prior exploration, an overview graph of the navigable environment (Chen et al., 2021a) can be built to improve navigation interpretation.", "Information accumulates as the agent navigates, and it is not efficient to utilize this information directly.", "A memory structure helps the agent effectively leverage the navigation history.", "Some solutions leverage memory modules such as LSTMs or recurrently utilize informative states (Hong et al., 2021), which are relatively easy to implement but may struggle to remember features at the beginning of the path as path length increases.", "Another solution is to build a separate memory model to store the relevant information (Zhu et al., 2020c; Lin et al., 2021; Nguyen and Daumé III, 2019).", "Notably, by hierarchically encoding a single view, a panorama, and then all panoramas in the history, HAMT (Chen et al., 2021b) successfully utilizes the full navigation history for decision-making.", "Auxiliary tasks help the agent better understand the environment and its own status without extra labels.", "From the machine learning perspective, an auxiliary task is usually implemented in the form of an additional loss function.", "The auxiliary task could, for example, explain the agent's previous actions or predict information about future decisions (Zhu et al., 2020a).", "Auxiliary tasks could also involve the current mission, such as estimating current task accomplishment or vision-and-instruction alignment (Ma et al., 2019a; Zhu et al., 2020a).", "Notably, auxiliary tasks are effective when adapting pretrained representations for VLN (Huang et al., 2019).", "With many possible action choices and a complicated environment, action strategy learning provides a variety of methods to help the agent decide on the best actions.", "VLN is a sequential decision-making problem and can naturally be modeled as a Markov decision process.", "Reinforcement learning (RL) methods are therefore proposed to learn better policies for VLN tasks.", "A critical challenge for RL methods is that VLN agents only receive the success signal at the end of the episode, so it is difficult to know which actions to attribute success to, and which to penalize.", "To address this ill-posed feedback issue, Wang et al. (2019) propose the RCM model to enforce cross-modal grounding both locally and globally, with a goal-oriented extrinsic reward and an instruction-fidelity intrinsic reward.", "He et al. (2021) propose to utilize the local alignment between the instruction and critical landmarks as the reward.", "Evaluation metrics such as CLS (Jain et al., 2019) or nDTW (Ilharco et al., 2019) can also provide an informative reward signal (Landi et al., 2020), and natural language may also provide suggestions for the reward (Fu et al., 2019).", "To model the dynamics in the environment, Wang et al.
(2018) leverage model-based reinforcement learning to predict the next state and improve generalization in unseen environments.", "Zhang et al. (2020a) find that recursively alternating the learning schemes of imitation and reinforcement learning improves performance.", "Exploring and gathering environmental information while navigating provides a better understanding of the state space.", "Student-forcing is a frequently used strategy, where the agent keeps navigating based on sampled actions and is supervised by the shortest-path action (Anderson et al., 2018b).", "There is a tradeoff between exploration and exploitation: with more exploration, the agent sees better performance at the cost of a longer path and a longer duration, so the model needs to determine when and how deeply to explore (Wang et al., 2020a).", "After having gathered the local information, the agent needs to decide which step to choose, or whether to backtrack (Ke et al., 2019).", "Notably, Koh et al. (2021) designed Pathdreamer, a visual world model that synthesizes visual observations of future viewpoints without actually looking ahead.", "Planning future navigation steps leads to a better action strategy.", "From the visual side, predicting waypoints (Krantz et al., 2021) or the next state and reward (Wang et al., 2018), generating future observations (Koh et al., 2021), or incorporating neighboring views (An et al., 2021) has proven effective.", "The natural language instruction also contains landmark and direction clues for planning detailed steps.", "Anderson et al. (2019b) predict the forthcoming events based on the instruction, which is then used to predict actions with a semantic spatial map.", "An intelligent agent asks for help when it is uncertain about the next action.", "Action probabilities or a separately trained model (Chi et al., 2020; Zhu et al., 2021c; Nguyen et al., 2021a) can be leveraged to decide whether to ask for help.", "Using natural language to converse with the oracle covers a wider problem scope than sending a signal.", "Both rule-based methods (Padmakumar et al., 2021) and neural-based methods (Roman et al., 2020; Nguyen et al., 2021a) have been developed to build navigation agents with dialog ability.", "Meanwhile, for tasks (Thomason et al., 2019b; Padmakumar et al., 2021) that do not provide an oracle agent to answer questions in natural language, researchers also need to build a rule-based (Padmakumar et al., 2021) or neural-based (Roman et al., 2020) oracle.", "DialFRED (Gao et al., 2022) uses a language model as an oracle to answer questions.", "Compared with the previously discussed works that focus on building a better VLN agent structure, data-centric methods focus on making the most effective use of the existing data or on creating synthetic data.", "Trajectory-Instruction Augmentation Augmented path-instruction pairs can be used in VLN directly.", "Currently, the common practice is to train a speaker module to generate instructions given a navigation path (Fried et al., 2018).", "The generated data are of varying quality (Zhao et al., 2021).", "Therefore, an alignment scorer (Huang et al., 2019) or an adversarial discriminator (Fu et al., 2020) can select high-quality pairs for augmentation.", "Environment Augmentation Generating more environment data not only helps generate more trajectories, but also alleviates the problem of overfitting to seen environments.", "Randomly masking the same visual feature across different viewpoints (Tan et al., 2019) or simply splitting the house scenes and remixing them (Liu et al., 2021) could create
new environments, which can further be used to generate more trajectory-instruction pairs (Fried et al., 2018).", "Training data may also be augmented by replacing some visual features with counterfactual ones (Parvaneh et al., 2020).", "Curriculum learning (Bengio et al., 2009) gradually increases the task's difficulty during the training", "process.", "The instruction length could be a metric for task difficulty.", "BabyWalk (Zhu et al., 2020b) keeps increasing the training samples' instruction length during the training process.", "Attributes from the trajectory may also be used to rank task difficulty.", "Zhang et al. (2021) rearrange the R2R dataset using the number of rooms each path traverses.", "They found that curriculum learning helps smooth the loss landscape and find a better local optimum.", "Different VLN tasks can benefit from each other by cross-task knowledge transfer.", "Wang et al. (2020c) propose an environment-agnostic multitask navigation model for both VLN and Navigation from Dialog History tasks (Thomason et al., 2019b).", "Chaplot et al. (2020) propose an attention module to train a multitask navigation agent to follow instructions and answer questions (Wijmans et al., 2019a).", "A trajectory instruction interpreted multiple times in different ways may help the agent better understand its objective.", "LEO (Xia et al., 2020) leverages and encodes all the instructions with a shared set of parameters to enhance textual understanding.", "LWIT (Nguyen et al., 2021b) interprets the instructions to make clear which class of objects to interact with.", "Shorter, more concise instructions provide clearer guidance for the agent compared to longer, semantically entangled instructions; thus, Hong et al. (2020b) break long instructions into shorter ones, allowing the agent to track progress and focus on each atomic instruction individually.", "Good performance in seen environments often cannot generalize to unseen environments (Hu et al., 2019; Parvaneh et al., 2020; Tan et al., 2019).", "Prior exploration methods allow the agent to observe and adapt to unseen environments, bridging the performance gap between seen and unseen environments.", "Wang et al. (2019) introduce a self-supervised imitation learning method to learn from the agent's own past good behaviors.", "The navigation path that a matching critic determines to align best with the instruction is used to update the agent.", "Tan et al. (2019) leverage the testing environments to sample and augment paths for adaptation.", "(Footnote: Thus, prior exploration methods are not directly comparable with other VLN methods.)", "Fu et al. (2020) propose environment-based prior exploration, where the agent can only explore the particular environment in which it is deployed.", "When utilizing graphs, prior exploration may construct a map or overview of the unseen environment to provide explicit guidance for navigation (Chen et al., 2021a; Zhou et al., 2021).", "This paper focuses on Vision-and-Language Navigation tasks with an emphasis on photo-realistic environments.", "2D maps may also be a useful virtual environment for navigation tasks (Vogel and Jurafsky, 2010; Chen and Mooney, 2011; Paz-Argaman and Tsarfaty, 2019).", "Synthetic environments may also be a substitute for realistic environments (MacMahon et al., 2006; Blukis et al., 2020).", "Tellex et al.
(2011) propose to instantiate a probabilistic graphical model for natural language commands in robotic navigation and mobile manipulation processes.", "In VLN, an agent needs to follow the given instruction and even ask for assistance in human language.", "An agent in Visual Navigation tasks is usually not required to understand information from the textual modality.", "Visual Navigation is the problem of navigating an agent from its current location to find the goal target.", "Researchers have achieved success in both simulated environments (Zhu et al., 2017; Mirowski, 2019) and real environments (Mirowski et al., 2018).", "In this paper, we discuss the importance of VLN agents as a part of society, how their tasks vary as a function of communication level versus task objective, and how different agents may be evaluated.", "We broadly review VLN methodologies and categorize them.", "This paper only discusses these issues broadly at an introductory level.", "In reviewing these papers, we can see the immense progress that has already been made, as well as directions in which this research topic can be expanded.", "Current methods usually do not explicitly utilize external knowledge such as objects and general house descriptions in Wikipedia.", "Incorporating knowledge also improves the interpretability and trustworthiness of embodied AI.", "Moreover, several current navigation agents learn which direction to move and with what to interact, but there is a last-mile problem of VLN: how to interact with objects.", "Anderson et al. (2018b) asked whether a robot could learn to Bring me a spoon; new research may ask how a robot can learn to Pick up a spoon.", "The environments also lack diversity: most interior terrestrial VLN data consists of American houses, but never warehouses or hospitals: the places where these agents may be of most use.", "Below we detail additional future directions: Collaborative VLN Current VLN benchmarks and methods predominantly focus on tasks where only one agent navigates, yet complicated real-world scenarios may require several robots collaborating.", "Multi-agent VLN tasks require development in swarm intelligence, information communication, and performance evaluation.", "MeetUp!", "(Ilinykh et al., 2019) is a two-player coordination game where players move in a visual environment to find each other.", "VLN studies the relationship between the human and the environment in Figure 1, yet here humans are oracles simply observing (but not acting on) the environment.", "Collaboration between humans and robots is crucial for them to work together as teams (e.g., as personal assistants or helping in construction).", "Future work may target collaborative VLN between multiple agents or between humans and agents.", "Simulation to Reality There is a performance loss when transferring to real-life robot navigation (Anderson et al., 2020).", "Real robots function in continuous space, but most simulators only allow agents to hop through a pre-defined navigation graph, which is unrealistic for three reasons (Krantz et al., 2020).", "Navigation graphs assume: (1) perfect localization, whereas in the real world it is a noisy estimate; (2) oracle navigation, whereas real robots cannot teleport to a new node; (3) known topology, whereas in reality an agent may not have access to a preset list of navigable nodes.", "Continuous implementations of realistic environments may contain patches of the images, be blurred, or have parallax errors, making them unrealistic.", "A simulation that is based on both a 3D model and
realistic imagery could improve the match between virtual sensors (in simulation) and real sensors.", "Lastly, most simulators assume a static environment changed only by the agent.", "This does not account for other dynamics such as people walking or objects moving, nor does it account for lighting conditions changing through the day.", "VLN environments with probabilistic transition functions may also narrow the gap between simulation and reality.", "Ethics & Privacy During both training and inference, VLN agents may observe and store sensitive information that can get leaked or misused.", "Effective navigation with privacy protection is crucially important.", "Relevant areas such as federated learning (Konečný et al., 2016) or differential privacy (Dwork et al., 2006) could also be studied in the VLN domain to preserve the privacy of training and inference environments.", "Multicultural VLN VLN lacks diversity in 3D environments: most outdoor VLN datasets use Google Street View recorded in major American cities, but lack data from developing countries.", "Agents trained on American data face potential generalization problems in other city or housing layouts.", "Future work should explore more diverse environments across multiple cultures and regions.", "Multilingual VLN datasets (Yan et al., 2020; Ku et al., 2020) could be good resources to study multicultural differences from the linguistic perspective." ]
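The goal-oriented metrics discussed above (SR, SPL, OSR) have simple closed forms. Below is a minimal Python sketch of how they could be computed, following the SPL definition of Anderson et al. (2018a); the Episode fields and the 3.0 m success threshold are illustrative assumptions, not values mandated by any particular benchmark.

```python
# Goal-oriented VLN metrics: Success Rate (SR), Success weighted by
# Path Length (SPL), and Oracle Success Rate (OSR).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Episode:
    path: List[Tuple[float, float, float]]  # agent positions over time
    goal: Tuple[float, float, float]        # goal position
    shortest_dist: float                    # geodesic start-to-goal distance

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def path_length(path):
    return sum(dist(p, q) for p, q in zip(path, path[1:]))

def evaluate(episodes, threshold=3.0):
    sr = spl = osr = 0.0
    for ep in episodes:
        success = dist(ep.path[-1], ep.goal) <= threshold
        sr += success
        # SPL weights success by shortest-path length over actual path length,
        # penalizing unnecessarily long trajectories.
        pl = path_length(ep.path)
        spl += success * ep.shortest_dist / max(pl, ep.shortest_dist, 1e-8)
        # OSR counts success if ANY visited node is within the threshold.
        osr += any(dist(p, ep.goal) <= threshold for p in ep.path)
    n = len(episodes)
    return {"SR": sr / n, "SPL": spl / n, "OSR": osr / n}
```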
[ "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Variational Neural Machine Translation (VNMT) is an attractive framework for modeling the generation of target translations, conditioned not only on the source sentence but also on some latent random variables.", "The latent variable modeling may introduce useful statistical dependencies that can improve translation accuracy.", "Unfortunately, learning informative latent variables is non-trivial, as the latent space can be prohibitively large, and the latent codes are prone to be ignored by many translation models at training time.", "Previous works impose strong assumptions on the distribution of the latent code and limit the choice of the NMT architecture.", "In this paper, we propose to apply the VNMT framework to the state-of-the-art Transformer and introduce a more flexible approximate posterior based on normalizing flows.", "We demonstrate the efficacy of our proposal under both in-domain and out-of-domain conditions, significantly outperforming strong baselines.", "Translation is inherently ambiguous.", "For a given source sentence, there can be multiple plausible translations due to the author's stylistic preference, domain, and other factors.", "On the one hand, the introduction of neural machine translation (NMT) has significantly advanced the field (Bahdanau et al., 2015), continually producing state-of-the-art translation accuracy.", "On the other hand, the existing framework provides no explicit mechanisms to account for translation ambiguity.", "Recently, there has been a growing interest in latent-variable NMT (LV-NMT) that seeks to incorporate latent random variables into NMT to account for the ambiguities mentioned above.", "For instance, Zhang et al. (2016) incorporated latent codes to capture underlying global semantics of source sentences into NMT, while Su et al. 
(2018) proposed fine-grained latent codes at the word level.", "The learned codes, while not straightforward to analyze linguistically, are shown empirically to improve accuracy.", "Nevertheless, the introduction of latent random variables complicates the parameter estimation of these models, as it now involves intractable inference.", "In practice, prior work resorted to imposing strong assumptions on the latent code distribution, potentially compromising accuracy.", "In this paper, we focus on improving Variational NMT (VNMT) (Zhang et al., 2016): a family of LV-NMT models that relies on the amortized variational method (Kingma and Welling, 2014) for inference.", "Our contributions are twofold.", "(1) We employ variational distributions based on normalizing flows (Rezende and Mohamed, 2015), instead of uni-modal Gaussians.", "Normalizing flows can yield complex distributions that may better match the latent code's true posterior.", "(2) We employ the Transformer architecture (Vaswani et al., 2017), including Transformer-Big, as our VNMT's generator network.", "We observed that the generator networks of most VNMT models belong to the RNN family, which is relatively less powerful as a translation model than the Transformer.", "We demonstrate the efficacy of our proposal on the German-English IWSLT'14 and English-German WMT'18 tasks, giving considerable improvements over strong non-latent Transformer baselines, and moderate improvements over Gaussian models.", "We further show that the gains generalize to an out-of-domain condition and a simulated bimodal data condition.", "Background Let x and y be a source sentence and its translation, drawn from a corpus D.", "Our model seeks to find parameters that maximize the marginal of a latent-variable model p(y, Z | x), where Z ∈ R^D is a sentence-level latent code similar to (Zhang et al., 2016).", "VNMT models sidestep the marginalization by introducing variational distributions and seek to maximize this function (i.e., the Evidence Lower Bound or ELBO): Σ_{(x,y)∈D} E_{q(Z|x,y)}[log p(y | x, Z)] − KL(q(Z | x, y) ‖ p(Z | x)), (1) where q(Z | x, y) and p(Z | x) are the variational posterior and prior distribution of the latent codes, while p(y | x, Z) is a generator that models the generation of the translation conditioned on the latent code.", "The ELBO is improved when the model learns a posterior distribution of latent codes that minimizes the reconstruction loss (the first term) while incurring a smaller KL divergence penalty between the variational posterior and the prior (the second term).", "The majority of VNMT models design their variational distributions to model unimodal distributions via isotropic Gaussians with diagonal covariance, which is the simplest form of prior and approximate posterior distribution.", "This assumption is computationally convenient because it permits a closed-form solution for computing the KL term and facilitates end-to-end gradient-based optimization via the re-parametrization trick (Rezende and Mohamed, 2015).", "However, such a simple distribution may not be expressive enough to approximate the true posterior distribution, which could be non-Gaussian, resulting in a loose gap between the ELBO and the true marginal likelihood.", "Therefore, we propose to employ more flexible posterior distributions in our VNMT model, while keeping the prior a Gaussian.", "Rezende and Mohamed (2015) proposed Normalizing Flows (NF) as a way to introduce a more flexible posterior to
the Variational Autoencoder (VAE).", "The basic idea is to draw a sample, Z_0, from a simple (e.g., Gaussian) probability distribution and to apply K invertible parametric transformation functions (f_k) called flows to transform the sample.", "The final latent code is given by Z_K = f_K(...f_2(f_1(Z_0))...), with probability density function q(Z_K | x, y). (In VAE terms, the posterior and prior distributions are referred to as the encoders, while the generator is referred to as the decoder.", "As these terms have other specific meanings in NMT, we avoid using them in this paper.)", "The density q(Z_K | x, y) is defined via the change of variable theorem as follows: q(Z_K | x, y) = q_0(Z_0 | x, y) ∏_{k=1}^{K} |det(∂f_k(Z_{k−1}; λ_k(x, y))/∂Z_{k−1})|^{−1}, where λ_k refers to the parameters of the k-th flow and λ_0 corresponds to the parameters of the base distribution.", "In practice, we can only consider transformations that are invertible and whose determinants of Jacobians (the second term) are computationally tractable.", "For our model, we consider several NFs, namely planar flows (Rezende and Mohamed, 2015), Sylvester flows (van den Berg et al., 2018), and the affine coupling layer (Dinh et al., 2017), which have been successfully applied in computer vision tasks.", "Planar flows (PF) apply this function: f_k(Z; λ_k(x, y)) = Z + u·tanh(w^T Z + b), where λ_k = {u, w ∈ R^D, b ∈ R}.", "Planar flows perform contraction or expansion in the direction perpendicular to the (w^T Z + b) hyperplane.", "Sylvester flows (SF) apply this function: f_k(Z; λ_k(x, y)) = Z + A·tanh(B·Z + b), where λ_k = {A ∈ R^{D×M}, B ∈ R^{M×D}, b ∈ R^M} and M is the number of hidden units.", "Planar flows are a special case of Sylvester flows where M = 1.", "In our experiments, we consider the orthogonal Sylvester flows (van den Berg et al., 2018), whose parameters are matrices with M orthogonal columns.", "Meanwhile, the affine coupling layer (CL) first splits Z into Z_{d1}, Z_{d2} ∈ R^{D/2} and applies the following function: f_k(Z_{d1}; λ_k(x, y)) = Z_{d1}, f_k(Z_{d2}; λ_k(x, y, Z_{d1})) = Z_{d2} ⊙ exp(s_k) + t_k, where it applies an identity transform to Z_{d1} and a scale-shift transform to Z_{d2} according to λ_k = {s_k, t_k}, which are conditioned on Z_{d1}, x, and y.", "CL is less expressive than PF and SF, but both sampling and computing the probability of arbitrary samples are easier.", "In practice, we follow (Dinh et al., 2017) and switch Z_{d1} and Z_{d2} alternately for subsequent flows.", "As we adopt the amortized inference strategy, the parameters of these NFs are data-dependent.", "In our model, they are the output of a 1-layer linear map with inputs that depend on x and y.", "Also, as the introduction of normalizing flows no longer offers a simple closed-form solution, we modify the KL term in Eq. 1", "into: E_{q(Z|x,y)}[log q(Z | x, y) − log p(Z | x)], where we estimate the expectation w.r.t.
q(Z_K | x; φ) via L Monte Carlo samples.", "We found that L = 1 is sufficient, similar to (Zhang et al., 2016).", "To address variable-length inputs, we use the average of the embeddings of the source and target tokens via a mean-pooling layer, i.e., meanpool(x) and meanpool(y), respectively.", "Transformer-based Generator We incorporate the latent code into the Transformer model by mixing the code into the output of the Transformer decoder's last layer (h_j) as follows: g_j = σ(W[h_j; Z]), h'_j = (1 − g_j) ⊙ h_j + g_j ⊙ Z, where g_j controls the latent code's contribution, and σ(·) is the sigmoid function.", "In case the dimension of the latent code (D) does not match the dimension of h_j, we apply a linear projection layer.", "Our preliminary experiments suggest that the Transformer is less likely to ignore the latent code in this approach compared to other approaches we explored, e.g., incorporating the latent code as the first generated token as used in (Zhang et al., 2016).", "Prediction Ultimately, we search for the most probable translation (y) given a source sentence (x) through the evidence lower bound.", "However, sampling latent codes from the posterior distribution is not straightforward, since the posterior is conditioned on the sentence being predicted.", "Zhang et al. (2016) suggest taking the prior's mean as the latent code.", "Unfortunately, as our prior is a Gaussian distribution, this strategy can diminish the benefit of employing a normalizing flows posterior.", "Eikema and Aziz (2018) explore two strategies, namely restricting the conditioning of the posterior to x alone (dropping y) and introducing an auxiliary distribution, r(Z | x), from which the latent codes are drawn.", "They found that the former is more accurate, with the benefit of being simpler.", "This is confirmed by our preliminary experiments.", "We opt to adopt this strategy and use the mean of the posterior as the latent code at prediction time.", "Mitigating Posterior Collapse As reported by previous work, VNMT models are prone to posterior collapse, where training fails to learn an informative latent code, as indicated by the value of the KL term vanishing to 0.", "
This phenomenon is often attributed to the strong generator (Alemi et al., 2018) employed by the models, in which case the generator's internal cells carry sufficient information to generate the translation.", "Significant research effort has been spent on weakening the generator network.", "Mitigating posterior collapse is crucial for our VNMT model as we employ the Transformer, an even stronger generator that comes with more direct connections between source and target sentences (Bahuleyan et al., 2018).", "To remedy these issues, we adopt the C-VAE (Prokhorov et al., 2019) and compute the following modified KL term: β·|KL − C|, where β is the scaling factor while C is a rate to control the KL magnitude.", "When C > 0, the models are discouraged from ignoring the latent code.", "In our experiments, we set C = 0.1", "and β = 1.", "Additionally, we apply the standard practice of word dropping in our experiments.", "Related Work VNMT comes in two flavors.", "The first variant models the conditional probability akin to a translation model, while the second one models the joint probability of the source and target sentences.", "Our model adopts the first variant, similar to (Zhang et al., 2016; Su et al., 2018; Pagnoni et al., 2018), while (Eikema and Aziz, 2018; Shah and Barber, 2018) adopt the second variant.", "The majority of VNMT models employ RNN-based generators and assume an isotropic Gaussian distribution, except for (McCarthy et al., 2019) and (Przystupa et al., 2019).", "The former employs the Transformer architecture but assumes a Gaussian posterior, while the latter employs the normalizing flows posterior (particularly planar flows) but uses an RNN-based generator.", "We combine more sophisticated normalizing flows and the more powerful Transformer architecture to produce state-of-the-art results.", "Experimental Setup We integrate our proposal into the Fairseq toolkit (Ott et al., 2019; Gehring et al., 2017a,b).", "We report results on the IWSLT'14 German-English (De-En) and the WMT'18 English-German (En-De) tasks.", "For IWSLT'14, we replicate Wu et al. (2019); Edunov et al. (2018)'s setup with 160K training sentences and a 10K joint BPE vocabulary, while for WMT'18, we replicate Edunov et al. (2018)'s setup with 5.2M training sentences and a 32K joint BPE vocabulary.", "For WMT experiments, we report the accuracy using detokenized SacreBLEU (Post, 2018) to facilitate fair comparison with other published results.", "Note that the tokenized BLEU score is often higher depending on the tokenizer, and is thus not comparable.", "We apply a KL annealing schedule and token dropout similar to (Bowman et al., 2016), where we set the KL annealing to 80K updates and drop out 20% of target tokens in the IWSLT and 10% in the WMT experiments.", "The encoder and decoder of our Transformer generator have 6 blocks each.", "The number of attention heads, embedding dimension, and inner-layer dimensions are 4, 512, 1024 for IWSLT; and 16, 1024, 4096 for WMT.", "The WMT setup is often referred to as the Transformer Big.", "To our knowledge, these architectures represent the best configurations for our tasks.", "We set the latent dimension to D = 128, which is projected using a 1-layer linear map to the embedding space.", "We report decoding results with beam=5.", "For WMT experiments, we set the length penalty to 0.6.", "For all experiments with an NF-based posterior, we employ flows of length 4, following the results of our pilot study.", "In-Domain Results We present our IWSLT results in rows 1 to 6 of Table 1.", "
The accuracy of the baseline Transformer model is reported in row (1), which matches the number reported by Wu et al. (2019).", "In row (2), we report a static Z experiment, where Z = meanpool(x).", "We design this experiment to isolate the benefits of token dropping and of utilizing the average source embedding as context.", "As shown, the static Z provides a +0.8 BLEU point gain.", "In row (3), we report the accuracy of our VNMT baseline where the approximate posterior is a Gaussian, which is +1.3 BLEU points from the baseline or +0.5 points from the static Z, suggesting the efficacy of latent-variable modeling.", "We then report the accuracy of different variants of our model in rows (4) to (6), where we replace the Gaussian posterior with a cascade of 4 PF, SF, and CL, respectively.", "For SF, we report the result with M = 8 orthogonal columns in row (5).", "As shown, these flows modestly add +0.2 to +0.3 points.", "It is worth noting that the improvement introduces only around 5% additional parameters.", "We report our WMT results that use the Transformer Big architecture in rows (10) to (15).", "For comparison, we quote the state-of-the-art result for this dataset from Edunov et al. (2018) in row (9), where the SacreBLEU score is obtained from Edunov (2019).", "As shown, our baseline result (row 10) is on par with the state-of-the-art result.", "The WMT results are consistent with the IWSLT experiments, where our models (rows 13-15) significantly outperform the baseline, even though they differ in terms of which normalizing flows perform the best.", "The gain over the VNMT baseline is slightly higher, perhaps because NF is more effective on larger datasets.", "In particular, we found that SF and PF perform better than CL, perhaps due to their simpler architecture, i.e., their posteriors are conditioned only on the source sentence, and their priors are uninformed Gaussians.", "Row (11) shows that the static Z's gain is minimal.", "In row (14), our best VNMT outperforms the state-of-the-art Transformer Big model by +0.6 BLEU while adding only 3% additional parameters.", "Simulated Bimodal Data We conjecture that the gain partly comes from NF's ability to capture non-Gaussian distributions.", "To investigate this, we artificially increase the modality of our training data, i.e., we force all source sentences to have multiple translations.", "We perform sequence-level knowledge distillation (Kim and Rush, 2016) with the baseline systems as the teachers, creating additional data referred to as distilled data.", "We then train systems on this augmented training data, i.e., original + distilled data.", "Rows (7) and (16) show that the baseline systems benefit from the distilled data.", "Rows (8) and (17) show that our VNMT models gain more benefit, resulting in +2.1 and +0.9 BLEU points over the non-latent baselines on the IWSLT and WMT tasks, respectively.", "Simulated Out-of-Domain Condition We investigate whether the in-domain improvement carries over to out-of-domain test sets.", "To simulate an out-of-domain condition, we utilize our existing setup, where the domain of the De-En IWSLT task is TED talks while the domain of the En-De WMT task is news articles.", "In particular, we invert the IWSLT De-En test set, and decode the English sentences using our baseline and best WMT En-De systems of rows (10) and (14).", "For this inverted set, the accuracy of our baseline system is 27.9, while the accuracy of our best system is 28.8, which is +0.9 points better.", "For reference, the accuracy of the Gaussian system in
row (11) is 28.2 BLEU.", "While more rigorous out-of-domain experiments are needed, this result gives a strong indication that our model is relatively robust on this out-of-domain test set.", "Translation Analysis To better understand the effect of normalizing flows, we manually inspect our WMT outputs and showcase a few examples in Table 2.", "We compare the outputs of our best model that employs normalizing flows (VNMT-NF, row 14) with the baseline non-latent Transformer (row 10) and the baseline VNMT that employs a Gaussian posterior (VNMT-G, row 12).", "As shown, our VNMT model consistently improves upon gender consistency.", "In example 1 (Source: In her book, the interior decorator presents 17 housing models for independent living in old age.), the translation of the interior decorator depends on the gender of its cataphora (her), which is feminine.", "While all systems translate the cataphora correctly to ihrem, the baseline and VNMT-G translate the", "phrase to its masculine form.", "In contrast, the translation of our VNMT-NF produces the feminine translation, respecting the gender agreement.", "In example 2, only VNMT-NF and VNMT-G produce gender-consistent translations.", "We present a Variational NMT model that outperforms a strong state-of-the-art non-latent NMT model.", "We show that the gain modestly comes from the introduction of a family of flexible distributions based on normalizing flows.", "We also demonstrate the robustness of our proposed model in an increased multimodality condition and on a simulated out-of-domain test set.", "We plan to conduct a more in-depth investigation into actual multimodality conditions with high-coverage sets of plausible translations.", "We conjecture that conditioning the posterior on the target sentences would be more beneficial.", "Also, we plan to consider more structured latent variables beyond modeling the sentence-level variation, as well as to apply our VNMT model to more language pairs." ]
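The planar flow used by this VNMT model is small enough to sketch directly. Below is a minimal PyTorch implementation of one planar flow step, z' = z + u * tanh(w^T z + b), together with the log-determinant of the Jacobian required by the change-of-variable term. Amortization (predicting u, w, b from the pooled source/target encodings via a 1-layer linear map, as the paper describes) is omitted here; treating them as free parameters is an illustrative simplification, and the invertibility constraint on u^T w is likewise left out.

```python
import torch
import torch.nn as nn

class PlanarFlow(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.u = nn.Parameter(torch.randn(dim) * 0.01)
        self.w = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):                          # z: (batch, dim)
        pre = z @ self.w + self.b                  # (batch,)
        f_z = z + self.u * torch.tanh(pre).unsqueeze(-1)
        # psi = tanh'(pre) * w;  |det J| = |1 + u^T psi|
        psi = (1 - torch.tanh(pre) ** 2).unsqueeze(-1) * self.w
        log_det = torch.log(torch.abs(1 + psi @ self.u) + 1e-8)
        return f_z, log_det

# Stacking K such flows accumulates the log-dets, giving
# log q_K(z_K) = log q_0(z_0) - sum_k log|det J_k|,
# which is exactly the Monte Carlo estimate the modified KL term needs.
```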
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "abstain", "method", "result", "objective", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "objective", "method", "abstain", "method" ]
[ "Variational autoencoders (VAEs) combine latent variables with amortized variational inference, whose optimization usually converges into a trivial local optimum termed posterior collapse , especially in text modeling.", "By tracking the optimization dynamics, we observe the encoder-decoder incompatibility that leads to poor parameterizations of the data manifold.", "We argue that the trivial local optimum may be avoided by improving the encoder and decoder parameterizations since the posterior network is part of a transition map between them.", "To this end, we propose Coupled-VAE, which couples a VAE model with a deterministic autoencoder with the same structure and improves the encoder and decoder parameterizations via encoder weight sharing and decoder signal matching.", "We apply the proposed Coupled-VAE approach to various VAE models with different regularization, posterior family, decoder structure, and optimization strategy.", "Experiments on benchmark datasets (i.e., PTB, Yelp, and Yahoo) show consistently improved results in terms of probability estimation and richness of the latent space.", "We also generalize our method to conditional language modeling and propose CoupledC VAE, which largely improves the diversity of dialogue generation on the Switchboard dataset.", "1 1 Introduction The variational autoencoder (VAE) (Kingma and Welling, 2014) is a generative model that combines neural latent variables and amortized variational inference, which is efficient in estimating and sampling from the data distribution.", "It infers a posterior distribution for each instance with a shared inference network and optimizes the evidence lower bound (ELBO) instead of the intractable marginal 1 Our code is publicly available at https://github.", "log-likelihood.", "Given its potential to learn representations from massive text data, there has been much interest in using VAE for text modeling (Zhao et al., 2017; Xu and Durrett, 2018; He et al., 2019).", "Prior work has observed that the optimization of VAE suffers from the posterior collapse problem, i.e., the posterior becomes nearly identical to the prior and the decoder degenerate into a standard language model (Bowman et al., 2016; Zhao et al., 2017).", "A widely mentioned explanation is that a strong decoder makes the collapsed posterior a good local optimum of ELBO, and existing solutions include weakened decoders (Yang et al., 2017; Semeniuta et al., 2017), modified regularization terms (Higgins et al., 2017; Wang and Wang, 2019), alternative posterior families (Rezende and Mohamed, 2015; Davidson et al., 2018), richer prior distributions (Tomczak and Welling, 2018), improved optimization strategies (He et al., 2019), and narrowed amortization gaps (Kim et al., 2018).", "In this paper, we provide a novel perspective for the posterior collapse problem.", "By comparing the optimization dynamics of VAE with deterministic autoencoders (DAE), we observe the incompatibility between a poorly optimized encoder and a decoder with too strong expressiveness.", "From the perspective of differential geometry, we show that this issue indicates poor chart maps from the data manifold to the parameterizations , which makes it difficult to learn a transition map between them.", "Since the posterior network is a part of the transition map, we argue that the posterior collapse would be mitigated with better parameterizations.", "To this end, we propose the Coupled-VAE approach, which couples the VAE model with a deterministic network with the same 
structure.", "For better encoder parameterization, we share the encoder weights between the coupled networks.", "For better decoder parameterization, we propose a signal matching loss that pushes the stochastic decoding signals to the deterministic ones.", "Notably, our approach is model-agnostic since it does not make any assumption on the regularization term, the posterior family, the decoder architecture, or the optimization strategy.", "Experiments on PTB, Yelp, and Yahoo show that our method consistently improves the performance of various VAE models in terms of probability estimation and the richness of the latent space.", "The generalization to conditional modeling, i.e., CoupledC VAE, largely improves the diversity of dialogue generation on the Switchboard dataset.", "Our contributions are as follows: We observe the encoder-decoder incompatibility in VAE and connect it to the posterior collapse problem.", "We propose the Coupled-VAE, which helps the encoder and the decoder to learn better parameterizations of the data manifold with a coupled deterministic network, via encoder weight sharing and decoder signal matching.", "Experiments on PTB, Yelp, and Yahoo show that our approach improves the performance of various VAE models in terms of probability estimation and richness of the latent space.", "We also generalize Coupled-VAE to conditional modeling and propose CoupledC VAE, which largely improves the diversity of dialogue generation on the Switchboard dataset.", "The generative process of VAE is first to sample a latent code z from the prior distribution P ( z ) and then to sample the data x from P ( x | z ; ) (Kingma and Ba, 2015).", "Since the exact marginalization of the log-likelihood is intractable, a variational family of posterior distributions Q ( z | x ; ) is adopted to derive the evidence lower bound (ELBO), i.e., log P ( x ; ) E z Q ( z | x ; ) [log P ( x | z ; )] KL[ Q ( z | x ; ) (cid:107) P ( z )] (1) For training, as shown in Figure", "1(a), the encoded text e is transformed into its posterior via a posterior network.", "A latent code is sampled and mapped to the decoding signal h .", "Finally, the decoder infers the input with the decoding signal.", "The objective can be viewed as a reconstruction loss L rec plus a regularization loss L reg (whose form varies), i.e., L = L rec + L reg (2) \u0000 Encoder( \u0000 ) f \u0000 \u0000 \u0000 Posterior( \u0000 | \u0000 ) \u0000 Decoder( \u0000 | ) \u0000 \u0000 MLP( \u0000 ) g \u0000 reg rec", "However, the optimization of the VAE objective is challenging.", "We usually observe a very small L reg and a L rec similar to a standard language model, i.e., the well-known posterior collapse problem.", "An older family of autoencoders is the deterministic autoencoder (DAE) (Rumelhart et al., 1986; Ballard, 1987).", "Figure", "1(b) shows an overview of DAE for text modeling, which is composed of a text encoder, an optional MLP, and a text decoder.", "The reconstruction loss of DAE is usually much lower than that of VAE after convergence.", "To understand the posterior collapse problem, we take a deeper look into the training dynamics of VAE.", "We investigate the following questions.", "How much backpropagated gradient does the encoder receive from reconstruction?", "How much does it receive from regularization?", "How much information does the decoder receive from the encoded text?", "To answer the first question, we study the gradient norm of the reconstruction loss w.r.t. 
the encoded text, i.e., ‖∂L_rec/∂e‖₂, which shows the", "magnitude of gradients received by the encoder parameters.", "From Figure 2(a),", "we observe that it constantly increases in DAE, while in VAE it increases marginally in the early stage and then decreases continuously.", "It shows that the reconstruction loss actively optimizes the DAE encoder, while the VAE encoder lacks backpropagated gradients after the early stage of training.", "We seek the answer to the second question by studying the gradient norm of the regularization loss w.r.t. the encoded text, i.e., ‖∂L_reg/∂e‖₂.", "For a totally collapsed posterior, i.e., Q(z | x; φ) = P(z) for each x, ‖∂L_reg/∂e‖₂ would be zero.", "Thus, ‖∂L_reg/∂e‖₂ can show how far the posterior of each instance is from the aggregate posterior or the prior.", "Figure 2(b)", "shows a constant decrease of the gradient norm in VAE from the 2.5K step until convergence, which shows that the posterior collapse is aggravated as the KL weight increases.", "For the third question, we compute the normalized gradient norm of the decoding signal w.r.t. the encoded text, i.e., ‖∂h/∂e‖_F / ‖h‖₂.", "As this term shows how much the decoding signal changes, relatively, with the perturbation of the encoded text, it reflects the amount of information passed from the encoder to the decoder.", "Figure 2(c)", "shows that for DAE, it constantly increases.", "For VAE, it at first increases even faster than DAE, slows down, and finally decreases until convergence, indicating that the VAE decoder, to some extent, ignores the encoder in the late stage of training.", "Based on the training dynamics in Section 3.1 and the observations in previous work (Bowman et al., 2016; Zhao et al., 2017), text VAE has three features, listed as follows.", "First, the encoder is poorly optimized, as shown by the low ‖∂L_rec/∂e‖₂.", "Second, the decoder degenerates into a powerful language model.", "Third, h contains less information from e in VAE than in DAE, which is indicated by the lower ‖∂h/∂e‖_F / ‖h‖₂.", "We call these features the encoder-decoder incompatibility.", "To bridge the incompatibility and posterior collapse, we start with the manifold hypothesis, which states that real-world data concentrates near a manifold with a lower dimensionality than the ambient space (Narayanan and Mitter, 2010; Bengio et al., 2013).", "In our case, we denote the manifold of text data as X ⊆ ∪_{l∈N} V^l, where V is the vocabulary.", "In the language of differential geometry, the encoded text e ∈ E ⊆ R^d and the decoding signal h ∈ H ⊆ R^d can be viewed as the parameterizations (or coordinates) of x ∈ X under two different charts (or coordinate systems).", "Formally, we denote the chart maps as φ_e: X → E and φ_h: X → H, which satisfy e = φ_e(x) and h = φ_h(x) for any x ∈ X.", "Given the two charts, the map from E to H is called the transition map φ_h ∘ φ_e^{−1}: E → H between the two charts.", "In DAE, the two chart maps and the transition map between them are learned simultaneously via the single reconstruction loss, which we rewrite as L_rec = E_{x∈X}[ℓ(x, φ_h^{−1}((φ_h ∘ φ_e^{−1})(φ_e(x))))], (3) where φ_e, φ_h ∘ φ_e^{−1}, and φ_h^{−1} are modeled as the encoder, the MLP, and the decoder (strictly speaking, in text modeling, the range of φ_h^{−1} is not X but distributions on X), as illustrated in Figure 3.
"In VAE, as discussed before, both φ_e and φ_h inadequately parameterize the data manifold.", "We argue that the inadequate parameterizations make it harder to find a smooth transition map in VAE than in DAE, as shown by the lower ‖∂h/∂e‖_F / ‖h‖₂.", "Since the posterior network is a part of the transition map, it consequently tends to map each instance to the prior (discussed in Section 3.1) rather than learning the transition map.", "Based on the above analysis, we argue that posterior collapse could be alleviated by learning chart maps (i.e., φ_e and φ_h) that better parameterize the data manifold.", "Inspired by the chart maps in DAE, we propose to couple the VAE model with a deterministic network, outlined in Figure 3.", "Modules with a subscript c are deterministic networks that share the structure with those in the stochastic network.", "Sampling is disabled in the deterministic network; e.g., in the case of a Gaussian posterior, we use the predicted mean vector for later computation.", "Please find details for other posterior families in Appendix B.", "Similar to DAE, the coupled deterministic network is optimized solely by the coupled reconstruction loss L^c_rec, which is the same autoregressive cross-entropy loss as L_rec.", "To learn a well-optimized φ_e, we share the encoder between the stochastic and the deterministic networks, which leverages the rich gradients backpropagated from L^c_rec.", "To learn a better φ_h, we propose to guide h with a well-learned chart map, i.e., the one characterized by Decoder_c.", "Thus, we introduce a signal matching loss L_match that pushes h towards h_c.", "The objective of our approach is L = L_rec + L_reg + λ_r L^c_rec + λ_m L_match (4), where λ_r and λ_m are hyperparameters, L^c_rec is the coupled reconstruction loss, and the signal matching loss L_match is essentially a distance function between h and h_c.", "To avoid heavy hyperparameter tuning, we set λ_r = 1.0 unless otherwise specified.", "We evaluate both the Euclidean distance and the Rational Quadratic kernel, i.e., L_match = ‖h − Detach(h_c)‖₂ (Eucl) or L_match = Σ_s s·C / (s·C + ‖h − Detach(h_c)‖₂²) (RQ) (5), where s ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}, C is a hyperparameter, and Detach prevents gradients from being propagated into h_c, since we would like h_c to guide h but not the opposite.", "One might question the necessity of sharing the structure of the posterior network by resorting to universal approximation (Hornik et al., 1989).", "Specifically, a common question is: why not use an MLP as Posterior_c?", "We argue that each structure has a favored distribution of H in ℝ^d, so structure sharing facilitates optimization when we are learning by gradient descent.", "For example, the latent space learned by planar flows (Rezende and Mohamed, 2015) has compression and expansion, and vMF-VAE (Xu and Durrett, 2018), which is supported on a sphere, may significantly influence the distribution of H in its ambient space ℝ^d.",
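A sketch of the combined objective in Eq. (4) with the two matching distances of Eq. (5); the function names are ours, and for the RQ case we minimize the negative kernel so that minimization pulls h towards h_c (the sign convention is our reading of the extracted Eq. (5)).

    import torch

    def match_loss(h, h_c, kind="rq", C=1.0, scales=(0.1, 0.2, 0.5, 1, 2, 5, 10)):
        # Detach h_c so that gradients flow only into the stochastic signal h
        d2 = (h - h_c.detach()).pow(2).sum(dim=-1)
        if kind == "eucl":
            return d2.sqrt().mean()                       # Euclidean distance
        k = torch.stack([s * C / (s * C + d2) for s in scales]).sum(dim=0)
        return -k.mean()                                  # Rational Quadratic kernel

    def coupled_vae_loss(l_rec, l_reg, l_rec_c, h, h_c, lam_r=1.0, lam_m=0.1):
        return l_rec + l_reg + lam_r * l_rec_c + lam_m * match_loss(h, h_c)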
"clude VAE (Kingma and Welling, 2014), -VAE (Higgins et al., 2017), vMF-VAE (Xu and Durrett, 2018; Davidson et al., 2018) with learnable , CNN-VAE (Yang et al., 2017), WAE (Tolstikhin et al., 2018), VAE with normalizing flows (VAE-NF) (Rezende and Mohamed, 2015), WAE with normalizing flows (WAE-NF), VAE with cyclic annealing schedule (CycAnn-VAE) (Fu et al., 2019), VAE with encoder pretraining and the free bits objective (PreFB-VAE) (Li et al., 2019), and Lagging-VAE (He et al., 2019).", "We also show the result of GRU-LM (Cho et al., 2014) and SA-VAE (Kim et al., 2018).", "We do not apply our method to SA-VAE since it does not follow amortized variational inference.", "Please find more details in Appendix C and previous footnotes.", "We report negative log-likelihood (NLL), KL divergence, and perplexity as the metrics for language", "modeling.", "NLL is estimated with importance sampling, KL is approximated by its Monte Carlo estimate, and perplexity is computed based on NLL.", "Please find the metric details in Appendix D. Table 1 displays the language modeling results.", "For all models, our proposed approach achieves smaller negative log-likelihood and lower perplexity, which shows the effectiveness of our method to improve the probability estimation capability of various VAE models.", "Larger KL divergence is also observed, showing that our approach helps address the posterior collapse problem.", "Language modeling results only evaluate the probability estimation ability of VAE.", "We are also interested in how rich the latent space is.", "We report the mutual information (MI) between the text x and the latent code z under Q ( z | x ) , which is approximated with Monte Carlo estimation.", "Better PTB Yelp Yahoo MI BLEU-1/2 MI BLEU-1/2 MI BLEU-1/2 VAE 10.48 23.2 / 4.4 8.28 28.7 / 5.3 15.43 21.2 / 3.6 Coupled-VAE 11.99 23.4 / 4.5 9.65 30.4 / 5.8 16.44 23.1 / 4.1 (0 . 8) -VAE 15.43 24.5 / 4.9 13.52 30.6 / 6.0 24.16 24.0 / 4.3 Coupled (0 . 8) -VAE 18.13 24.3 / 4.8 17.69 32.6 / 6.6 28.03 26.4 / 4.9 (1 . 2) -VAE 9.16 22.8 / 4.3 6.60 28.0 / 5.0 11.83 18.2 / 2.9 Coupled (1 . 2) -VAE 10.28 22.9 / 4.2 7.90 29.8 / 5.6 13.51 22.4 / 3.8 vMF-VAE 1.74 15.2 / 2.0 0.03 22.4 / 2.8 2.06 8.5 / 1.1 Coupled-vMF-VAE 2.37 16.1 / 2.3 2.60 25.1 / 4.0 3.37 10.3 / 1.4 CNN-VAE 78.49 32.0 / 7.8 17.26 32.9 / 7.1 30.18 24.9 / 5.3 Coupled-CNN-VAE 80.54 31.8 / 7.7 19.15 33.4 / 7.3 37.62 26.9 / 5.9 WAE 15.09 24.8 / 5.1 15.08 30.7 / 6.1 24.73 24.2 / 4.5 Coupled-WAE 18.51 24.7 / 5.1 18.56 32.5 / 6.6 30.08 27.7 / 5.3 VAE-NF 5.63 19.2 / 3.3 5.64 25.6 / 4.5 8.02 13.7 / 2.1 Coupled-VAE-NF 5.86 19.4 / 3.3 6.06 26.3 / 4.6 9.14 15.3 / 2.5 WAE-NF 7.18 19.7 / 3.5 7.95 26.0 / 4.6 11.43 13.8 / 2.2 Coupled-WAE-NF 8.10 20.7 / 3.7 8.53 27.2 / 5.0 12.56 14.9 / 2.5 CycAnn-VAE 1.55 16.3 / 2.3 1.18 22.6 / 3.2 3.09 8.3 / 1.1 Coupled-CycAnn-VAE 2.27 16.7 / 2.6 2.01 23.1 / 3.4 3.89 10.9 / 1.5 PreFB-VAE 20.6 25.5 / 5.7 20.3 33.1 / 6.8 26.2 27.2 / 5.2 Coupled-PreFB-VAE 23.2 25.8 / 5.8 21.0 33.3 / 6.8 27.0 27.2 / 5.3 Lagging-VAE 2.90 -0.96 -3.04 -Coupled-Lagging-VAE 3.29 -2.36 -3.06 Table 2: Mutual information (MI) and reconstruction.", "reconstruction from the encoded text is another way to show the richness of the latent space.", "For each text x , we sample ten latent codes from Q ( z | x ) and decode them with greedy search.", "We report the BLEU-1 and BLEU-2 scores between the reconstruction and the input.", "Please find the metric details in Appendix E. 
"In Table 2, we observe that our approach improves MI on all datasets, showing that it helps learn a richer latent space.", "BLEU-1 and BLEU-2 are consistently improved on Yelp and Yahoo, but not on PTB.", "Given that text samples in PTB are significantly shorter than those in Yelp and Yahoo, we conjecture that it is easier for the decoder to reconstruct on PTB by exploiting its autoregressive expressiveness, even without a rich latent space.", "We investigate the effect of key hyperparameters.", "Results are shown in Table 3.", "[Table 3: NLL (KL), PPL, MI, and BLEU-1/2 on PTB and Yelp for the two distance functions (Eucl and RQ) and different values of λ_m and λ_r.]", "Note that the lowest NLL does not guarantee the best values of the other metrics, which shows the necessity of using multiple metrics for a more comprehensive evaluation.", "For the distance function, we observe that the Euclidean distance (denoted as Eucl in Table 3) is more sensitive to λ_m than the Rational Quadratic kernel (denoted as RQ in Table 3).", "The first and the third block in Table 3 show that, with larger λ_m, the model achieves higher KL divergence, MI, and reconstruction metrics.", "Our interpretation is that by pushing the stochastic decoding signals closer to the deterministic ones, we get latent codes with richer text information.", "We leave the analysis of λ_m = 0.0 to Section 5.6.", "The second block in Table 3 shows the role of λ_r, which we interpret as follows.", "When λ_r is too small (e.g., 0.5), the learned parameterizations are still inadequate for a smooth transition map; when λ_r is too large (e.g., 5.0), it distracts the optimization too far away from the original objective (i.e., L_rec + L_reg).", "Note that λ_r = 0.0 is equivalent to removing the coupled reconstruction loss L^c_rec from Eq. (4).", "In Section 5.5 we observe a richer latent space (i.e., larger MI and BLEU scores) with larger λ_m.", "However, a richer latent space does not guarantee a better probability estimation result.", "Thus, in this part, we study how signal matching affects probability estimation.", "We study three models of different posterior families (i.e., Coupled-VAE, Coupled-VAE-NF, and Coupled-vMF-VAE).", "Results are shown in Table 4, where we do not report the KL, MI, and BLEU scores because they have been shown to improve with larger λ_m in Table 3.", "We observe that the effects of signal matching on probability estimation vary across posterior families.", "We study the three gradient norms defined in Section 3.1 on the test sets, displayed in Table 5 (for Coupled-VAE, λ_m = 0.1).",

Table 5: Gradient norms defined in Section 3.1 on each test set.

                        ‖∂L_rec/∂e‖₂  ‖∂L^c_rec/∂e‖₂  ‖∂(L_rec+L^c_rec)/∂e‖₂  ‖∂L_reg/∂e‖₂  ‖∂h/∂e‖_F/‖h‖₂
 PTB    DAE                  1719.8        -                 -                     -             3.14
        VAE                   112.5        -                 -                    19.4           2.05
        Coupled-VAE           148.5     2109.6            2320.2                  27.7           2.12
 Yelp   DAE                  2443.6        -                 -                     -             2.55
        VAE                    59.7        -                 -                    18.8           1.62
        Coupled-VAE            84.8     3640.8            3764.7                  25.0           2.25
 Yahoo  DAE                  4104.6        -                 -                     -             3.39
        VAE                   257.9        -                 -                    52.8           2.92
        Coupled-VAE           335.3     5105.0            5615.0                  65.0           3.91

"Notably, ‖∂L^c_rec/∂e‖₂ in Coupled-VAE is even larger than ‖∂L_rec/∂e‖₂ in DAE.", "This has two indications.", "First, the encoder indeed encodes rich information about the text.", "Second, compared with DAE, Coupled-VAE generalizes better to the test sets, which we conjecture is due to the regularization on the posterior.", "Coupled-VAE also has a larger ‖∂L_reg/∂e‖₂ compared with VAE, which, based on the argument in Section 3.1, indicates that, in Coupled-VAE, the posterior of each instance is not similar to the prior.", "We also observe a larger ‖∂h/∂e‖_F / ‖h‖₂ in Coupled-VAE, which indicates a better transition map between the two parameterizations in Coupled-VAE than in VAE.", "We also track the gradient norms of Coupled-VAE (λ_m = 10.0 for a clearer comparison), plotted along with VAE and DAE in Figure 2.",
"The curve for Coupled-VAE in Figure 2(a) stands for ‖∂(L_rec + L^c_rec)/∂e‖₂.", "We observe that Coupled-VAE receives constantly increasing backpropagated gradients from the reconstruction.", "In contrast to VAE, ‖∂L_reg/∂e‖₂ in Coupled-VAE does not decrease significantly as the KL weight increases.", "The decrease of ‖∂h/∂e‖_F / ‖h‖₂, which VAE suffers from, is not observed in Coupled-VAE.", "Plots on more datasets are in Appendix F.", "5.8 Sample Diversity.", "We evaluate the diversity of the samples from the prior distribution.", "We sample 3200 texts from the prior distribution and report the Dist-1 and Dist-2 metrics (Li et al., 2016), which are the ratios of distinct unigrams and bigrams over all generated unigrams and bigrams.", "Distinct-1 and Distinct-2 in Table 6 show that texts sampled from Coupled-VAE (λ_m = 10.0) are more diverse than those from VAE.", "Given limited space, we put several samples in Appendix G for qualitative analysis.", "A property of VAE is to match interpolation in the latent space with smooth transitions in the data space (Bowman et al., 2016).", "In Table 7, we show the interpolations of VAE and Coupled-VAE on PTB.", "They show that, compared with VAE, Coupled-VAE has smoother transitions of subjects (both sides → it) and verbs (are expected → have been → has been → has), indicating that the linguistic information is more smoothly encoded in the latent space of Coupled-VAE.", "To generalize our approach to conditional language modeling, we propose Coupled-CVAE.", "A graphical overview is displayed in Figure 4.", "Specifically, the (coupled) posterior network and the (coupled) decoder are additionally conditioned.", "The objective of Coupled-CVAE is identical to Eq. (4).", "We compare Coupled-CVAE with a GRU encoder-decoder (Cho et al., 2014) and CVAE (Zhao et al., 2017) for dialogue generation.", "We use the Switchboard dataset (Godfrey and Holliman, 1993), whose training/validation/test splits are 203K/5K/5K, and whose vocabulary size is 13K.", "For probability estimation, we report the NLL, KL, and PPL based on the gold responses.", "Since the key motivation for using CVAE in Zhao et al. (2017) is the diversity of responses, we sample one response for each post and report the Distinct-1 and Distinct-2 metrics over all samples.", "Please find more details in Appendix I.", "Table 8 shows that Coupled-CVAE greatly increases the diversity of dialogue modeling, while it only slightly harms the probability estimation capability.", "This indicates that Coupled-CVAE better captures the one-to-many nature of conversations than CVAE and the GRU encoder-decoder.", "We also observe that diversity improves with increasing λ_m, which shows that λ_m can control diversity by specifying the richness of the latent space.",
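A sketch of the Distinct-n computation behind Tables 6 and 8; whitespace tokenization is our simplifying assumption.

    def distinct_n(texts, n):
        # Ratio of distinct n-grams over all generated n-grams (Li et al., 2016)
        ngrams, total = set(), 0
        for text in texts:
            tokens = text.split()
            for i in range(len(tokens) - n + 1):
                ngrams.add(tuple(tokens[i:i + n]))
                total += 1
        return len(ngrams) / total if total else 0.0

    # Dist-1 / Dist-2 over 3200 prior samples:
    # dist1, dist2 = distinct_n(samples, 1), distinct_n(samples, 2)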
"Bowman et al. (2016) identify the posterior collapse problem of text VAE and propose KL annealing and word drop to handle it.", "Zhao et al. (2017) propose the bag-of-words loss to mitigate this issue.", "Later work on this problem focuses on less powerful decoders (Yang et al., 2017; Semeniuta et al., 2017), modified regularization objectives (Higgins et al., 2017; Bahuleyan et al., 2019; Wang and Wang, 2019), alternative posterior families (Rezende and Mohamed, 2015; Xu and Durrett, 2018; Davidson et al., 2018; Xiao et al., 2018), richer prior distributions (Tomczak and Welling, 2018), improved optimization (He et al., 2019) or KL annealing strategies (Fu et al., 2019), the use of skip connections (Dieng et al., 2019), hierarchical or autoregressive posterior distributions (Park et al., 2018; Du et al., 2018), and narrowing the amortization gap (Hjelm et al., 2016; Kim et al., 2018; Marino et al., 2018).", "We provide the encoder-decoder incompatibility as a new perspective on the posterior collapse problem.", "Empirically, our approach can be combined with the above ones to alleviate the problem further.", "A model to be noted is β-VAE (Higgins et al., 2017), in which reconstruction and regularization are modeled as a hyperparameterized trade-off, i.e., improving one term compromises the other.", "Different from β-VAE, we adopt the idea of multi-task learning, i.e., the coupled reconstruction task helps improve the encoder chart map, and the signal matching task helps improve the decoder chart map.", "Both our analysis in Section 3.2 and the empirical results show that the modeling of the posterior distribution can be improved (and not necessarily compromised) with the additional tasks.", "Ghosh et al. (2020) propose to substitute stochasticity with explicit and implicit regularizations, which is easier to train and empirically improves the quality of generated outputs.", "Different from their work, we still strictly follow the generative nature (i.e., data density estimation) of VAE, and the deterministic network in our approach serves as an auxiliary to aid optimization.", "Encoder pretraining (Li et al., 2019) initializes the text encoder and the posterior network with an autoencoding objective.",
"Li et al. (2019) show that encoder pretraining by itself does not improve the performance of VAE, which indicates that initialization is not a strong enough inductive bias to learn a meaningful latent space.", "Given the discrete nature of text data, we highlight the two-level representation learning for text modeling: 1) the encoder and decoder parameterizations via autoencoding, and 2) a transition map between the parameterizations.", "Notably, the transition map has large freedom.", "In our case, the transition map decides the amount and type of information encoded in the variational posterior, and there are other possible instances of the transition map, e.g., flow-based models (Dinh et al., 2015).", "In this paper, we observe the encoder-decoder incompatibility of VAE for text modeling.", "We bridge the incompatibility and the posterior collapse problem by viewing the encoder and the decoder as two inadequately learned chart maps from the data manifold to the parameterizations, and the posterior network as a part of the transition map between them.", "We couple the VAE model with a deterministic network and improve the parameterizations via encoder weight sharing and decoder signal matching.", "Our approach is model-agnostic and can be applied to a wide range of models in the VAE family.", "Experiments on benchmark datasets, i.e., PTB, Yelp, and Yahoo, show that our approach improves various VAE models in terms of probability estimation and the richness of the latent space.", "We also generalize Coupled-VAE to conditional language modeling and propose Coupled-CVAE.", "Results on Switchboard show that Coupled-CVAE largely improves diversity in dialogue generation.", "We would like to thank the anonymous reviewers for their thorough and helpful comments." ]
[ "abstain", "result", "result", "objective", "objective", "abstain", "objective", "abstain", "other", "abstain", "abstain", "abstain", "other", "objective", "result", "result", "method", "objective", "abstain", "objective", "abstain", "result", "abstain", "objective", "objective", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "method", "objective", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "other", "other", "objective", "method", "other", "abstain", "abstain", "other", "method", "other", "other", "abstain", "other", "method", "result", "abstain", "result", "method", "result", "objective", "abstain", "other" ]
[ "Online textsacross genres, registers, domains, and stylesare riddled with human stereotypes, expressed in overt or subtle ways.", "Word embeddings, trained on these texts, perpetuate and amplify these stereotypes, and propagate biases to machine learning models that use word embeddings as features.", "In this work, we propose a method to debias word embeddings in multiclass settings such as race and religion, extending the work of (Boluk-basi et al., 2016) from the binary setting, such as binary gender.", "Next, we propose a novel methodology for the evaluation of multiclass debiasing.", "We demonstrate that our multiclass debiasing is robust and maintains the efficacy in standard NLP tasks.", "In addition to possessing informative features useful for a variety of NLP tasks, word embeddings reflect and propagate social biases present in training corpora (Caliskan et al., 2017; Garg et al., 2018).", "Machine learning systems that use embeddings can further amplify biases (Barocas and Selbst, 2016; Zhao et al., 2017), discriminating against users, particularly those from disadvantaged social groups.", "(Bolukbasi et al., 2016) introduced a method to debias embeddings by removing components that lie in stereotype-related embedding subspaces.", "They demonstrate the effectiveness of the approach by removing gender bias from word2vec embeddings (Mikolov et al., 2013), preserving the utility of embeddings and potentially alleviating biases in downstream tasks.", "However, this method was only for binary labels (e.g., male/female), whereas most real-world demographic attributes, * Equal contributions Work done while at CMU and The Microsoft AI Development Acceleration Program Gender Biased Analogies man doctor woman nurse woman receptionist man supervisor woman secretary man principal Racially Biased Analogies black criminal caucasian police asian doctor caucasian dad caucasian leader black led Religiously Biased Analogies muslim terrorist christian civilians jewish philanthropist christian stooge christian unemployed jewish pensioners Table 1: Examples of gender, racial, and religious biases in analogies generated from word embeddings trained on the Reddit data from users from the USA.", "including gender, race, religion, are not binary but continuous or categorical, with more than two categories.", "In this work, we show a generalization of Bolukbasi et", "al.'s (2016) which enables multiclass debiasing, while preserving utility of embeddings ( 3).", "We train word2vec embeddings using the Reddit L2 corpus (Rabinovich et al., 2018) and apply multiclass debiasing using lexicons from studies on bias in NLP and social science ( 4.2).", "We introduce a novel metric for evaluation of bias in collections of word embeddings ( 5).", "Finally, we validate that the utility of debiased embeddings in the tasks of part-of-speech (POS) tagging, named entity recognition (NER), and POS chunking is on par with off-the-shelf embeddings.", "As defined by (Bolukbasi et al., 2016), debiasing word embeddings in a binary setting requires identifying the bias subspace of the embeddings.", "Components lying in that subspace are then removed from each embedding.", "(Bolukbasi et al., 2016) define the gender subspace using defining sets of words, where the words in each set represent different ends of the bias.", "For example, in the case of gender, one defining set might be the gendered pronouns { he , she } and another set might be the gendered nouns { man , woman } .", "The gender subspace is then computed from 
"Following the identification of the gender subspace, one can apply hard or soft debiasing (Bolukbasi et al., 2016) to completely or partially remove the subspace components from the embeddings.", "Hard debiasing (also called Neutralize and Equalize) involves two steps.", "First, bias components are removed from words that are not gendered and should not contain gender bias (e.g., doctor, nurse); second, gendered word embeddings are centered and their bias components are equalized.", "For example, in the binary case, man and woman should have bias components in opposite directions, but of the same magnitude.", "Intuitively, this ensures that any neutral word is equidistant to any pair of biased words with respect to the bias subspace.", "More formally, to neutralize, given a bias subspace B spanned by the vectors {b₁, b₂, ..., b_k}, we compute the component of each embedding w in this subspace: w_B = Σ_{i=1}^k ⟨w, b_i⟩ b_i (1).", "We then remove this component from words that should be bias-neutral and normalize to get the debiased embedding: w′ = (w − w_B) / ‖w − w_B‖ (2).", "To equalize the embeddings of the words in an equality set E, let μ = (1/|E|) Σ_{w∈E} w be the mean embedding of the words in the set and μ_B be its component in the bias subspace, as calculated in Equation 1.", "Then, for w ∈ E, w′ = (μ − μ_B) + √(1 − ‖μ − μ_B‖²) · (w_B − μ_B) / ‖w_B − μ_B‖ (3).", "Note that in both Equations 2 and 3, the new embedding has unit length.", "Soft debiasing involves learning a projection of the embedding matrix that preserves the inner products between biased and debiased embeddings while minimizing the projection onto the bias subspace of embeddings that should be neutral.", "Given embeddings W and N, which are the embeddings of the whole vocabulary and of the subset of bias-neutral words respectively, and the bias subspace B obtained in Section 2.1, soft debiasing seeks a linear transformation A that minimizes the following objective: ‖(AW)ᵀ(AW) − WᵀW‖²_F + λ‖(AN)ᵀ(AB)‖²_F (4).", "Minimizing the first term preserves the inner products after the linear transformation A, and minimizing the second term minimizes the projection of the neutral embeddings onto the bias subspace B.", "λ ∈ ℝ is a tunable parameter that balances the two objectives.",
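A sketch of the hard-debiasing steps in Eqs. (1)-(3), with the bias subspace given as an orthonormal basis; the matrix shapes and function names are our assumptions.

    import numpy as np

    def neutralize(w, B):
        # B: [k, d] orthonormal basis of the bias subspace; w: [d]
        w_B = B.T @ (B @ w)                          # Eq. (1): component inside the subspace
        return (w - w_B) / np.linalg.norm(w - w_B)   # Eq. (2): remove it and renormalize

    def equalize(E, B):
        # E: [m, d] embeddings of one equality set
        mu = E.mean(axis=0)
        mu_B = B.T @ (B @ mu)
        nu = mu - mu_B
        scale = np.sqrt(1.0 - np.linalg.norm(nu) ** 2)
        out = []
        for w in E:
            w_B = B.T @ (B @ w)
            out.append(nu + scale * (w_B - mu_B) / np.linalg.norm(w_B - mu_B))  # Eq. (3)
        return np.stack(out)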
"We now discuss our proposed extension of word embedding debiasing to the multiclass setting.", "As in the binary setting, debiasing consists of two steps: identifying the bias subspace and removing this component from the set of embeddings.", "The core contribution of our work is in identifying the bias subspace in a multiclass setting; if we can identify the bias subspace, then prior work can be used for multiclass debiasing.", "Past work has shown that it is possible to linearly separate multiple social classes based on components of word embeddings (Garg et al., 2018).", "Based on this, we hypothesize that there exists some component of these embeddings which can capture multiclass bias.",

Table 2: The mean average cosine similarity (MAC, defined in Section 3.2) and associated p-values for debiasing methods for gender, racial, and religious bias.

 Gender debiasing           MAC    p-value
   Biased                   0.623  N/A
   Hard Debiased            1.000  1.582e-14
   Soft Debiased (λ = 0.2)  0.747  1.711e-12
 Race debiasing             MAC    p-value
   Biased                   0.892  N/A
   Hard Debiased            1.009  7.235e-04
   Soft Debiased (λ = 0.2)  0.985  6.217e-05
 Religion debiasing         MAC    p-value
   Biased                   0.859  N/A
   Hard Debiased            1.004  3.006e-07
   Soft Debiased (λ = 0.2)  0.894  0.007

"While a multiclass problem is inherently not a linearly separable problem, a one-versus-rest classifier is.", "Following from this, the computation of a multiclass bias subspace does not have any linear constraints, though it does come with a loss of resolution.", "As a result, we can compute the principal components required to compute the bias subspace by simply adding an additional term for each additional bias class to each defining set.", "Formally, given defining sets of word embeddings D₁, D₂, ..., D_n, let the mean of defining set i be μ_i = (1/|D_i|) Σ_{w∈D_i} w, where w denotes the embedding of word w.", "Then the bias subspace B is given by the first k components of the following principal component analysis (PCA) evaluation: PCA(∪_{i=1}^n ∪_{w∈D_i} (w − μ_i)) (5).", "The number of components k can be determined empirically by inspecting the eigenvalues of the PCA, or by using a threshold.", "Also, note that the defining sets do not have to be of the same size.", "We discuss the robustness of this method later.", "Following the identification of the bias subspace, we apply the hard (Neutralize and Equalize) and soft debiasing methods presented by Bolukbasi et al. (2016) and discussed in Section 2.2 to completely or partially remove the subspace components from the embeddings.", "For equalization, we take the defining sets to be the equality sets as well.",
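A sketch of the multiclass bias-subspace computation in Eq. (5); the use of scikit-learn and the function name are our assumptions.

    import numpy as np
    from sklearn.decomposition import PCA

    def bias_subspace(defining_sets, k):
        # defining_sets: list of [m_i, d] arrays, one per defining set D_i
        diffs = [D - D.mean(axis=0, keepdims=True) for D in defining_sets]
        pca = PCA(n_components=k).fit(np.concatenate(diffs, axis=0))
        return pca.components_      # [k, d] basis of the bias subspace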
"We propose a new metric for the evaluation of bias in collections of words, which is simply the mean average cosine similarity (MAC).", "This approach is motivated by the WEAT evaluation method proposed by Caliskan et al. (2017), but modified for a multiclass setting.", "To compute this metric, the following data is required: a set of target word embeddings T containing terms that inherently contain some form of social bias (e.g., {church, synagogue, mosque}), and a set A which contains sets of attributes A₁, A₂, ..., A_N containing word embeddings that should not be associated with any word embeddings contained in the set T (e.g., {violent, liberal, conservative}).", "We first define S(t, A_j) = (1/|A_j|) Σ_{a∈A_j} cos(t, a) (6), where the cosine distance is cos(u, v) = 1 − (u · v) / (‖u‖₂ ‖v‖₂) (7).", "Finally, we define MAC(T, A) = (1/(|T| |A|)) Σ_{t_i∈T} Σ_{A_j∈A} S(t_i, A_j) (8).", "We also perform a paired t-test on the distribution of average cosine distances used to calculate the MAC.", "Thus, we can quantify the effect of debiasing on the word embeddings in T and the sets in A.",
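A sketch of the MAC computation in Eqs. (6)-(8); returning the per-pair scores as well makes the paired t-test straightforward (the function names are ours).

    import numpy as np

    def cos_dist(u, v):
        return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))   # Eq. (7)

    def mac(T, A):
        # T: [t, d] target embeddings; A: list of [n_j, d] attribute-set embeddings
        scores = [np.mean([cos_dist(t, a) for a in A_j])                 # Eq. (6)
                  for t in T for A_j in A]
        return float(np.mean(scores)), scores                            # Eq. (8)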
"To measure the utility of the debiased word embeddings, we use the tasks of NER, POS tagging, and POS chunking.", "This is to ensure that the debiasing procedure has not destroyed the utility of the word embeddings.", "We evaluate on test sentences that contain at least one word affected by debiasing.", "Additionally, we measure the change in performance after replacing the biased embedding matrix with a debiased one, and after retraining the model on debiased embeddings.", "In this section, we discuss the different data sources we used for our initial word embeddings, the social bias data used for evaluating bias, and the linguistic data used for evaluating the debiasing process.", "We used the L2-Reddit corpus (Rabinovich et al., 2018), a collection of Reddit posts and comments by both native and non-native English speakers.", "The native countries of post authors are determined based on their posts in country- and region-specific subreddits (such as r/Europe and r/UnitedKingdom), and other metadata such as user flairs, which serve as self-identification of the user's country of origin.", "In this work, we exclusively explore data collected from the United States.", "This was done to leverage the extensive studies of social bias conducted in the United States.", "To obtain the initial biased word embeddings, we trained word2vec embeddings (Mikolov et al., 2013) using approximately 56 million sentences.", "We used the following vocabularies and studies to compile lexicons for bias detection and removal (the lexicon used can be found here: https://docs.google.com/spreadsheets/d/1BQBFLUvB9bnuifxikjrcJNqLA0dx9ZeAWk4kdu_OCUM).", "For gender, we used vocabularies created by Bolukbasi et al. (2016) and Caliskan et al. (2017).", "For race, we consulted a number of different sources for each race: Caucasians (Chung-Herrera and Lankau, 2005; Goad, 1998); African Americans (Punyanunt-Carter, 2008; Brown Givens and Monahan, 2005; Chung-Herrera and Lankau, 2005; Hakanen, 1995; Welch, 2007; Kawai, 2005); and Asian Americans (Leong and Hayes, 1990; Lin et al., 2005; Chung-Herrera and Lankau, 2005; Osajima, 2005; Garg et al., 2018).", "Finally, for religion, we used the following sources and labels: Christians (Rios et al., 2015; Zuckerman, 2009; Unnever et al., 2005); Jews (Dundes, 1971; Fetzer, 2000); and Muslims (Shryock, 2010; Alsultany, 2012; Shaheen, 1997).", "We evaluate biased and debiased word embeddings on several downstream tasks.", "Specifically, we use the CoNLL 2003 shared task (Tjong Kim Sang and De Meulder, 2003), which provides evaluation data for NER, POS tagging, and POS chunking.", "In this section, we review the results of our experiments and discuss what those results mean in the context of this work.", "We can observe bias in word embeddings in many different ways.", "However, for the purposes of demonstrating that bias exists in these word embeddings, we use the analogy task that was used to demonstrate bias by Bolukbasi et al. (2016).", "We observe that bias is present in generated analogies by viewing them directly.", "A small subset of these analogies is shown in Table 1 to highlight our findings.", "We perform our debiasing in the same manner as described in Section 3.1 and calculate the MAC scores and p-values to measure the effects of debiasing.", "Results are presented in Table 2.", "Does multiclass debiasing decrease bias?", "We see that this debiasing procedure categorically moves MAC scores closer to 1.0.", "This indicates an increase in cosine distance.", "Further, the associated p-values indicate that these changes are statistically significant.", "This demonstrates that our approach to multiclass debiasing decreases bias.", "The effects of debiasing on downstream tasks are shown in Table 3.",
"Debiasing can either help or harm performance.", "For POS tagging, there is almost always a decrease in performance.", "However, for NER and POS chunking, there is a consistent increase.", "We conclude that these models have learned to depend on some bias subspaces differently.", "Note that many of the performance changes are of questionable statistical significance.", "Does multiclass debiasing preserve semantic utility?", "We argue that the minor changes in Table 3 support the preservation of semantic utility in the multiclass setting, especially compared to gender debiasing, which is known to preserve utility (Bolukbasi et al., 2016).", "Is the calculated bias subspace robust?", "The bias subspace is at least robust enough to support the above debiasing operations.", "This is shown by the statistically significant changes in MAC scores.", "Calculating the multiclass bias subspace using our proposed approach has drawbacks.", "For example, in the binary gender case, the extremes of the bias subspace reflect extreme male and female terms.", "However, this is not possible when projecting multiple classes into a linear space.", "Thus, while we can calculate the magnitude of the bias components, we cannot measure the extremes of each class.", "Additionally, the methods presented here rely on words that represent biases (defining sets) and words that should or should not contain biases (equality sets).", "These lists are based on data collected specifically from the US.", "Thus, they may not translate to other countries or cultures.", "Further, some of these vocabulary terms, while peer reviewed, may be subjective and may not fully capture the bias subspace.", "Recent work by Gonen and Goldberg (2019) suggests that debiasing methods based on bias component removal are insufficient to completely remove bias from the embeddings, since embeddings with similar biases are still clustered together after bias component removal.", "Following Gonen and Goldberg's (2019) procedure, we plot the number of neighbors of a particular bias class as a function of the original bias, before and after debiasing, in Figures 1 and 2 in the Appendix.", "In line with Gonen and Goldberg's (2019) findings, simply removing the bias component is insufficient to remove multiclass cluster bias.", "However, increasing the size of the bias subspace reduces the correlation of the two variables (Table 4 in the Appendix).", "We showed that word embeddings trained on www.reddit.com data contain multiclass biases.", "We presented a novel metric for evaluating debiasing procedures for word embeddings.", "We robustly removed multiclass bias using a generalization of existing techniques.", "Finally, we showed that this multiclass generalization preserves the utility of embeddings for different NLP tasks.", "This research was supported by Grant No. IIS1812327 from the United States National Science Foundation (NSF).", "We also acknowledge several people who contributed to this work: Benjamin Pall, for his valuable early support of this work, and Elise Romberger, who helped edit this work prior to its final submission.", "Finally, we are greatly appreciative of the anonymous reviewers for their time and constructive comments." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "method", "objective", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "method", "method", "abstain", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "objective", "method", "result", "other", "other", "other", "other" ]
[ "The importance of explaining the outcome of a machine learning model, especially a black-box model, is widely acknowledged.", "Recent approaches explain an outcome by identifying the contributions of input features to this outcome.", "In environments involving large black-box models or complex inputs, this leads to computationally demanding algorithms.", "Further, these algorithms often suffer from low stability, with explanations varying significantly across similar examples.", "In this paper, we propose a Learning to Explain (L2E) approach that learns the behaviour of an underlying explanation algorithm simultaneously from all training examples.", "Once the explanation algorithm is distilled into an explainer network, it can be used to explain new instances.", "Our experiments on three classification tasks, which compare our approach to six explanation algorithms, show that L2E is between 5 and 7 .", "5 10 4 times faster than these algorithms, while generating more stable explanations, and having comparable faithfulness to the black-box model.", "Explaining the mechanisms and reasoning behind the outcome of complex machine learning models, such as deep neural networks (DNNs), is crucial.", "Such explanations can shed light on the potential flaws and biases within these powerful and widely applicable models, e.g., in medical diagnosis (Caru-ana et al., 2015) and judicial systems (Rich, 2016).", "Existing explainability methods mostly produce explanations, or rationales (DeYoung et al., 2020), which identify the attributions of features in an input example, e.g., are they contributing positively or negatively to the prediction of an outcome.", "For text classifiers, this means identifying words or phrases in an input document that account for a Novell's Microsoft attack completes Linux conversion: Novell Inc. has completed its conversion to Linux by launching an attack on Microsoft Corp., claiming that the company has stifled software innovation and that the market will abandon Microsoft Windows at some point in the future.", "y xxx = 99% Sci/Tech; y xxx (cid:114) A = 14%; y xxx (cid:114) L2E = 0.7% Microsoft expands Windows update Release: Microsoft Corp. 
Figure 1: Two similar examples from the News dataset. y_x denotes the black-box's confidence in its prediction for document x; y_{x∖A} and y_{x∖L2E} denote its confidence after masking the words deemed important by the baseline explainer A and by L2E, respectively.
 Example 1: "Novell's Microsoft attack completes Linux conversion: Novell Inc. has completed its conversion to Linux by launching an attack on Microsoft Corp., claiming that the company has stifled software innovation and that the market will abandon Microsoft Windows at some point in the future." (y_x = 99% Sci/Tech; y_{x∖A} = 14%; y_{x∖L2E} = 0.7%)
 Example 2: "Microsoft expands Windows update release: Microsoft Corp. is starting to ramp up distribution of its massive security update for the Windows XP operating system, but analysts say they still expect the company to move at a relatively slow pace to avoid widespread glitches." (y_x = 98% Sci/Tech; y_{x∖A} = 66%; y_{x∖L2E} = 0.4%)

"Current approaches are typically computationally demanding, requiring expensive operations, such as consulting a black-box model multiple times (Zeiler and Fergus, 2014), or generating samples to learn an approximate but explainable transparent model (Ribeiro et al., 2016).", "This computational demand reduces the utility of these explanation algorithms, especially for large black-box models, long documents and real-time scenarios (Kim et al., 2018).", "Further, these algorithms generate explanations for different examples independently.", "This may lead to the generation of different explanations for similar examples, which is undesirable.", "For example, a black-box predicts with similar confidence (99% and 98%) that the topic of the two semantically similar documents in Figure 1 is Sci/Tech.", "However, even though the words 'Microsoft' and 'Windows' appear in both documents, the baseline explainer A deems 'Windows' to be important for the top document, and 'Microsoft' for the bottom document (that is, masking these words results in a significant drop in the black-box's confidence).", "In this paper, we present a learning to explain (L2E) approach that efficiently learns the commonalities of the explanation process across different examples.", "This, in turn, leads to explanations that exhibit stability, i.e., important words are chosen consistently, without loss of faithfulness to the underlying black-box (note that this approach does not aim to improve the transparency (Lipton, 2018) of the black-box model).", "Given a set of examples paired with their explanations produced by an existing method, e.g., LIME (Ribeiro et al., 2016), our approach uses a DNN to learn the explanation algorithm.", "DNNs are Turing complete (Perez et al., 2019; Montufar et al., 2014); therefore, given enough training data and learning capacity, they should be able to learn the existing explanation algorithms.", "This is akin to Knowledge Distillation (Hinton et al., 2015), where a teacher, or in our case a teacher algorithm, distils knowledge into a student network.", "Our contributions are: (i) the L2E framework, which is general and can successfully learn to produce explanations from several teacher explainers; (ii) two learning formulations, i.e., Ranking and Sequence Labelling, which enable L2E to circumvent the high variance of non-discrete teacher explanations via discretization; (iii) an experimental setup that compares L2E against six popular explanation algorithms, and a comprehensive evaluation of the stability and faithfulness of L2E on three text classification tasks; and (iv) a methodology that employs human rationales as proxies for the ground-truth explanations of a black-box model.", "The core of this method is a modified training protocol whereby the model makes neutral predictions if human rationales are absent.", "We consider two main approaches to explanation generation: algorithmic and model-based.", "Algorithmic Approaches.", "These approaches can be broadly categorized into gradient-based, attention-based and perturbation-based methods.", "Gradient-based methods (Simonyan et al., 2013; Sundararajan et al., 2017; Shrikumar et al., 2017; Erion et al., 2019) and backpropagation-based methods (Bach et al., 2015) require access to the black-box, and are mostly applied to models with differentiable functions.",
"Further, they may be sensitive to randomized model initializations or permuted data labels (Adebayo et al., 2018), which is undesirable.", "These methods can be computationally heavy in the case of complex black-box models (Wu and Ong, 2021), e.g., BERT (Devlin et al., 2018).", "Attention-based methods (Wiegreffe and Pinter, 2019) can only be applied to Transformer-based models (Vaswani et al., 2017), and their effectiveness is questionable (Jain and Wallace, 2019; Serrano and Smith, 2019).", "Perturbation-based methods approximate feature importance by observing changes in a model's outcome after a feature is changed.", "They either consider changes in performance as an indicator of feature importance directly (Martens and Provost, 2014; Zeiler and Fergus, 2014; Schwab and Karlen, 2019), or they employ a higher-order approximation of the decision boundary (Ribeiro et al., 2016; Lundberg and Lee, 2017).", "Perturbation-based methods are typically computationally inefficient for explaining high-dimensional data, and they suffer from high variance due to perturbation randomness (Slack et al., 2020; Chen et al., 2019).", "Model-based Approaches.", "These approaches train the explainer with an objective function to improve efficiency at test time.", "The work closest to ours is by Schwab and Karlen (2019), who train an explainer using a causality-based explanation algorithm.", "However, these approaches do not learn from arbitrary algorithms or discretize feature weights; the high variance of continuous weights may impair the ability to capture the commonalities in an explanation algorithm.", "Jain et al. (2020) discretize the weights produced by an existing method, but they use these weights to build a faithful classifier for an underlying black-box model, rather than using them to explain the model directly.", "Other works train a classifier and an explainer jointly in order to incorporate explainability directly into the classifier (Lei et al., 2016; Camburu et al., 2018).", "Unlike these approaches, we do not change the classifier or require an expensive process to collect human rationales, as done by Camburu et al. (2018).", "Lastly, a few works use information-theoretic objectives to train an explainer directly from the underlying classifier (Chen et al., 2018; Bang et al., 2019).", "These explainers require careful training to select a low number of important features (Paranjape et al., 2020); hence, some input features do not receive attributions.", "Goodness of Explanations.", "Researchers have quantified the goodness of an explanation in different ways, such as brevity, alignment with human rationales, contrastiveness, and stability.", "Minimal (brief) explanations are generated in (Martens and Provost, 2014; Ribeiro et al., 2018; Alvarez-Melis et al., 2019; Bang et al., 2019).", "Explanations aligned with human rationales are produced in (Sen et al., 2020; Atanasova et al., 2020), and contrastive explanations are generated in (Miller, 2018; Alvarez-Melis et al., 2019).", "According to Atanasova et al. (2020), only a few algorithmic explanation methods produce stable explanations (Robnik-Sikonja and Bohanec, 2018), e.g., LIME (Ribeiro et al., 2016).",
"To the best of our knowledge, we are the first to explore the stability of explanations in model-based approaches.", "L2E can be applied to any Natural Language Processing task to which an underlying feature-based explanation algorithm can be applied, such as Natural Language Inference and Question Answering (Wang et al., 2020).", "In this paper, we focus on explaining text classification models.", "Our setup requires two inputs: (i) a black-box text classification model y = f(x), which assigns document x to a label y ∈ Y, where Y is the label set; and (ii) an explanation algorithm A(x, y, f) → w, which generates an explanation w ∈ ℝ^{|x|} for the class of document x obtained by the black-box f(x).", "A can be any off-the-shelf explanation algorithm, and w_i can be thought of as the importance weight of x_i, the i-th token of a document.", "The main idea of L2E is to train a separate explanation model g_φ(x) to predict the explanation generated by A(·) for f(·) (Figures 2a and 2b).", "Intuitively, our approach distils the explanation algorithm A into the explanation model g_φ.", "As confirmed by our experiments (§4.5), this has several benefits.", "Firstly, it leads to stable explanations, as g_φ can capture A's common patterns when generating explanations for different documents.", "Secondly, it speeds up the explanation generation process compared to many existing explanation algorithms, which rely on computationally heavy operations, such as consulting the black-box model multiple times, e.g., Occlusion (Zeiler and Fergus, 2014), or sampling, e.g., LIME (Ribeiro et al., 2016).", "Our approach, which learns a model with the explanations of all training data, takes advantage of the computations done by A, and generates more stable explanations faster.",

Algorithm 1 Learning to Explain (L2E)
  1: D: a training set of documents
  2: f: the original deep NN model
  3: g_φ: the explainer deep NN model
  4: A: the underlying explanation method
  5: procedure TrainExplainer(D, f)
  6:   Z ← ∅
  7:   for each input x ∈ D do
  8:     y ← f(x)
  9:     w ← A(x, y, f)
 10:     Z ← Z ∪ {(x, y, w)}
 11:   end for
 12:   initialize φ randomly
 13:   t ← 0
 14:   while a stopping condition is not met do
 15:     randomly pick (x_t, y_t, w_t) ∈ Z
 16:     φ ← φ − η ∇_φ L(g_φ(x_t, y_t), w_t)
 17:     t ← t + 1
 18:   end while
 19:   return the explanation model g_φ
 20: end procedure

"Our approach to training the explanation model g_φ is summarized in Algorithm 1.", "First, the algorithm generates training data in the form of triplets (x, y, w) (lines 7–11), and then it trains the explanation model using supervised learning (lines 14–18).", "At test time, the trained model is deployed to generate explanations for unseen documents.",
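A compact sketch of Algorithm 1 in PyTorch; explanation_loss stands for either of the two loss formulations defined next, and the remaining names are illustrative assumptions.

    import torch

    def train_explainer(g, docs, f, A, explanation_loss, epochs=3, lr=1e-4):
        # Lines 7-11: build the (x, y, w) training triplets with the teacher explainer
        data = []
        for x in docs:
            y = f(x)
            data.append((x, y, A(x, y, f)))
        opt = torch.optim.SGD(g.parameters(), lr=lr)
        # Lines 14-18: supervised training of the explainer network
        for _ in range(epochs):
            for x, y, w in data:
                loss = explanation_loss(g(x, y), w)
                opt.zero_grad()
                loss.backward()
                opt.step()
        return g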
document) are sufficient for people to understand/perform a classification task (Zaidan et al., 2007).", "So, instead of a regression formulation, we consider two supervised learning formulations for discretized outputs: Ranking and Sequence Labeling.", "Ranking Formulation.", "In this formulation, the explanation model aims to learn the ranking of the document tokens from their importance weights.", "That is, we consider the ordering of the token weights induced by www , and train the explanation This is a great movie.", "model g such that it induces the same ordering.", "Specifically, the loss function is as follows: L ( g ( xxx, y ) ,www ) = | xxx | 1 (cid:88) i =1 | xxx | (cid:88) j = i +1 log e v k e v i + e v j where v i ( v j ) is the i th ( j th ) component of the importance vector vvv = g ( xxx, y ) predicted by the explanation model, and k = arg max k (cid:48) { i,j } | w k (cid:48) | .", "In other words, each pair of token weights is compared, and the parameters are learnt such that a token with a high importance weight under A also gets a high score under g .", "Sequence Labeling Formulation.", "Here, explanation generation is treated as a sequence labeling problem, where the continuous importance weights are discretized according to the heuristic h , whereby the importance weights are partitioned along two dimensions, high/low and posi-tive/neutral/negative, according to the mean value of the positive/negative weights from the baseline explanation method A .", "Thus, the labels are recoded to { high negative, low negative, neutral, low positive, high positive } .", "The explanation model g is then trained to predict the label of the tokens according to the following loss function: L ( g ( xxx, y ) ,www ) = | xxx | (cid:88) i =1 log Pr ( h ( w i ) | g ,i ( xxx, y )) where g ,i ( xxx, y ) is the predicted distribution over the labels of the i th token of the document, and h ( w i ) is the discrete label produced using the dis-cretization heuristic h .", "Owing to the quadratic complexity of the Ranking formulation, compared to the linear complexity of Sequence Labeling, we recommend using Ranking when the input is short, and a fine-grained order of feature attributions is required.", "Otherwise, the Sequence Labeling formulation is a better option.", "We conduct experiments on three classification tasks; each task has a different black-box classifier chosen based on the best accuracy on the selected dataset as reported in the literature.", "2 Dataset statistics are reported in Appendix A. 
Topic Classification.", "The AG corpus (Zhang et al., 2015) comprises news articles on multiple topics.", "We separate 10% of the training documents for the dev set.", "The black-box classifier is a fine-tuned BERT model (Devlin et al., 2018) with 12 hidden layers and 12 attention heads.", "It achieves a 92.6% test accuracy.", "Sentiment Analysis.", "The SST dataset (Socher et al., 2013) comprises movie reviews with positive and negative sentiments.", "The black-box classifier is a distilled BERT model (Sanh et al., 2019) with 6 layers and 12 attention heads from Hugging Face (Wolf et al., 2019).", "It achieves 90% test accuracy.", "Linguistic Acceptability.", "The CoLA dataset (Warstadt et al., 2019) contains sentences that are deemed acceptable or unacceptable in terms of their grammatical correctness.", "The black-box classifier is a fine-tuned ALBERT model (Lan et al., 2020) with 12 attention heads and 12 layers.", "It achieves a 74% test accuracy.", "2 All black-box models are open-sourced by TextAt-tack (Morris et al., 2020) unless otherwise stated.", "We use six baselines for our experimental setup: Occlusion (Zeiler and Fergus, 2014; Schwab and Karlen, 2019), Gradient (Simonyan et al., 2013), LRP (Bach et al., 2015), LIME (Ribeiro et al., 2016), Kernel SHAP (Lundberg and Lee, 2017) and Deep SHAP (Shrikumar et al., 2017; Lundberg and Lee, 2017).", "The detailed setup of these baselines is provided in Appendix B. 4.3 Explanation Models ( g ) We use a Transformer encoder (Vaswani et al., 2017) with 4 blocks and 4 attention heads as g .", "3 All models are trained with a Stochastic Gradient Descent optimizer and a fixed learning rate ( 1 e 4 ) until convergence.", "To balance the different statuses of model convergence, we train all models with three random parameter initializations and report the average values of their performance metrics.", "We condition the explainer model g on the label y predicted by the underlying black-box model f by appending y to the start and the end of the input document before passing it to g (Figure 2a).", "Thus, g can leverage the predicted label in the attention computation.", "For the sequence labeling formulation, we also introduce a softmax layer on top to produce the labeling distribution over the discrete labels for each token, as detailed in Figure 2b.", "Faithfulness.", "A standard approach to evaluate the faithfulness of an explanation to a black-box classification model is to measure the degree of agreement between the prediction given the full document and the prediction given the explanation (Ribeiro et al., 2016).", "However, the aim of L2E is to approximate an existing explanation method A , which constitutes a layer of separation from the original black-box f .", "Hence, we provide two faithfulness evaluations for our approach when the ground-truth explanation is unavailable: Prediction based.", "We measure the agreement between:", "(a) the predictions of the black-box model f when the explanations generated by g are given as input, and", "(b) f 's predictions when A 's explanations are given as input (instead of using the full document); 4 3 We use the fairseq framework (Ott et al., 2019) for all our implementations of g .", "Our source code is available at https://github.com/situsnow/L2E.", "4 We do not evaluate the faithfulness of L2E to A in terms Confidence based.", "We adopt the log-odds ( xxx ) metric used by Schwab and Karlen (2019), which measures the difference in the confidence of the f black-box model in a prediction before and 
"Formally, Δ(x) = log-odds(Pr(y | f(x))) − log-odds(Pr(y | f(x̄))), where y is the predicted output of f(x), log-odds(Pr) = log [Pr / (1 − Pr)], and x̄ is a version of the input x where the tokens in the explanation are masked out.", "We expect a high value of Δ(x) if we mask positively important words in x, and a low value if we mask unimportant or negatively important words.", "Stability.", "We employ Intersection over Union (IoU) to measure explanation stability across similar instances.", "Specifically, for each test instance x, we select its nearest neighbors N(x) according to one of two pairwise document similarity metrics: semantic similarity (the cosine of their BERT representations) and lexical similarity (the ratio of overlapping n-grams).", "Details appear in Appendix C.", "IoU(x, N(x)) then measures the consistency of the explanations of x and those of its neighbours: IoU(x, N(x)) = (1/|N(x)|) Σ_{x′∈N(x)} [Σ_{ℓ∈L, ℓ≠neutral} |v^ℓ_x ∩ v^ℓ_{x′}|] / [Σ_{ℓ∈L, ℓ≠neutral} |v^ℓ_x ∪ v^ℓ_{x′}|] (1), where L is the discretized label set in the Sequence Labeling formulation or the top K words in the Ranking formulation, and v^ℓ_x is the set of tokens with label ℓ in the predicted explanation g(x, y).", "We report the average of IoU(x, N(x)) across the documents in the test set.",
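Sketches of the two metrics; the per-label sets-of-tokens representation for IoU is our assumption about the data layout.

    import math

    def log_odds_delta(p_full, p_masked):
        # Delta(x): confidence drop after masking the explanation words
        lo = lambda p: math.log(p / (1.0 - p))
        return lo(p_full) - lo(p_masked)

    def iou_stability(expl_x, neighbour_expls, labels):
        # expl_x: {label: set(tokens)}; neighbour_expls: list of the same for N(x)
        vals = []
        for expl_n in neighbour_expls:
            inter = sum(len(expl_x[l] & expl_n[l]) for l in labels if l != "neutral")
            union = sum(len(expl_x[l] | expl_n[l]) for l in labels if l != "neutral")
            vals.append(inter / union if union else 1.0)
        return sum(vals) / len(vals)        # Eq. (1)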
"We start by investigating the faithfulness of an explanation model to the black-box model f .", "Once faithfulness has been established, we investigate stability and speed compared to the underlying explanation methods A .", "We also include a Random baseline, which displays the performance obtained by randomly selecting the same number K of words as we select from the explanations produced by L2E and A in each row of the table, and averaging it over the six comparisons.", "Faithfulness.", "For the Ranking formulation of L2E, we select the top 30% of the important words in each test sample. 5", "5 We select 30% to ensure sufficient important words are selected in each dataset given their average document length.", "We use the same percentage in the Stability evaluation.", "For the Sequence Labeling formulation, we select the same number of positive/negative words identified by L2E and A .", "Table 1 shows the Prediction-based agreement between the black-box model f and our method L2E, between f and the underlying explainer A , and between L2E and A .", "We see that the explanations generated by L2E are equally predictive of the output class as those generated by A in both the Ranking and the Sequence Labeling formulations.", "We also note that the L2E version that learns with the Ranking formulation is often less faithful, though not significantly, to the black-box model f than A , compared to the version that learns with the Sequence Labeling formulation. 6", "6 Statistical significance ( p < 0.05 ) was measured by performing the Wilcoxon Signed-Rank Test (Woolson, 2007) followed by a sequential Holm-Bonferroni correction (Holm, 1979; Abdi, 2010) for all pairs of comparisons in a table.", "For example, the percentage agreement of L2E-Ranking is lower than that of Occlusion for the three datasets, while the agreement of L2E-SequenceLabeling is higher than that of Occlusion for these datasets.", "Interestingly, when the baseline explanation algorithm does not perform well, e.g., Kernel SHAP on SST, L2E is still able to find words that are predictive of the output of f .", "In such circumstances, the agreement between L2E and A is quite low (the Both column is 58% and 51% for Ranking and Sequence Labeling, respectively).", "The low performance of Kernel SHAP may be attributed to insufficient samples ( 10^3 in this case) in the kernel computation for SST, while L2E could still utilize all the samples during training.", "Table 2 presents the log-odds results for positive explanation words in the Sequence Labeling formulation.", "Similar results are observed for negative explanation words in the same formulation, and for the top important words in the Ranking formulation.", "These results appear in Appendix D.", "They are obtained by randomly selecting 100 documents in the test set, and masking the same number of important words in each document based on the explanations generated by L2E and by A .", "We observe that some baselines have inconsistent faithfulness for different datasets.", "For example, LRP and Deep SHAP perform worse than Kernel SHAP for the News dataset, but better for SST.", "We also note that, when one baseline performs worse than the other baselines, e.g., Kernel SHAP for SST, our method L2E still performs significantly better than that baseline.", "This result demonstrates that our model can learn important words that yield more faithful explanations than those learned by the teacher explainer.", "Interestingly, none of the results for the CoLA dataset, from the baseline A or L2E, significantly outperforms the Random baseline.", "This flags a drawback of evaluating explanation faithfulness on short documents.", "Stability.", "For each test document, we consider the top-3 similar documents in the test set, and report the average IoU as explained in 4.4.", "Table 3 shows the results obtained using semantic similarity for the baseline A and L2E.", "Similar results with lexical similarity appear in Appendix C.
From Table 3, we see that, in most cases, our method statistically significantly outperforms the baseline for all three datasets.", "For both formulations, Ranking and Sequence Labeling, L2E achieves a higher stability than the baseline A , even in cases where A 's IoU is comparable to that of the Random baseline, e.g., Gradient for SST and CoLA.", "These results show that learning the explanation process across different examples, as done by L2E, can capture more commonalities (higher stability) than generating explanations individually (baselines).", "Overall, the LIME baseline performs consistently better than most baselines in terms of faithfulness and stability across the three datasets.", "Therefore, L2E also performs better when it learns from LIME than when it learns from other baselines.", "Computational Efficiency.", "We now compare the efficiency of L2E against that of the baseline explanation algorithms A when generating explanations for test documents.", "In our experiments, the black-box is a transformer-based model comprising L layers, H attention heads and D embedding dimensions.", "The complexity of this model when predicting a document of size N is then O( L N D ( D + N + H )) (Gu et al., 2020).", "Various factors contribute to the computational demands of existing explanation algorithms (details in Appendix B), and make the complexity of these algorithms grow with the size of the black-box model.", "[Figure 3: inference time, on a log scale, of L2E, Gradient, LRP, LIME, Occlusion, Kernel SHAP and Deep SHAP.]", "These factors include the size of the input document (Occlusion), the sample size (LIME, Kernel SHAP and Deep SHAP), etc.", "In contrast, L2E is a distillation of any explanation algorithm, employing a smaller architecture than the black-box, e.g., fewer layers and attention heads, and lower embedding dimensions.", "Figure 3 shows the inference time of L2E-SequenceLabeling compared to that of the baseline explainers for the IMDB-R dataset. 7", "7 All timing information is collected with the same hardware configuration: Intel Xeon E5-2680 v3, NVIDIA Tesla K80, 32 GB RAM.", "We only show the results obtained with Sequence Labeling, since the inference time of L2E models is independent of the learning formulation.", "As seen in Figure 3, L2E requires statistically significantly less time than any of the six baseline explanation algorithms for IMDB-R.", "Similar patterns were observed for the other three datasets (Appendix E).", "Finally, L2E only needs a forward pass through the explainer DNN.", "Compared with Gradient and LRP, which require only one backpropagation through the black-box DNN, L2E is respectively 5 and 10 times faster for all datasets (all black-box sizes appear in 4.1 and Appendix F).", "Evaluation of explanation methods for DNNs is challenging, as ground-truth explanations are often unavailable.", "In this section, we propose to address this issue using the IMDB-R dataset (Zaidan et al., 2007), which contains movie reviews x together with their sentiment y , as well as rationales r annotated by people for the sentiment label.", "Our use of rationales for evaluating explanations is related to that in (Osman et al., 2020), where synthetic data are generated from a priori fixed rationales.", "Specifically, we generate new data by assigning a neutral label to an example where the human rationales are masked.", "We then use both the original data (without masking) and the new data to train the black-box model, where the training protocol forces the classifier to make a neutral prediction when the human rationales are removed from the review.", "More formally, we maximize the following training objective: Σ_{( x , r , y ) ∈ D} [ log Pr( y = f ( x )) + log Pr( NEUTRAL = f ( x ∖ r )) ], where x ∖ r denotes the input x with the rationale words r masked out, NEUTRAL is an extra label, 8 and D is the training data.",
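As a rough illustration, this objective can be maximized by minimizing its negation with two standard cross-entropy terms; `model`, the inputs and `neutral_id` are placeholders rather than the paper's actual training code:

```python
import torch
import torch.nn.functional as F

def rationale_aware_loss(model, x, x_minus_r, y, neutral_id):
    """Negation of the objective above: predict y on the full review and
    the extra NEUTRAL label once the rationale words are masked out.
    `model` is assumed to return logits over the label set (incl. NEUTRAL)."""
    loss_full = F.cross_entropy(model(x), y)                  # -log Pr(y = f(x))
    neutral = torch.full_like(y, neutral_id)
    loss_masked = F.cross_entropy(model(x_minus_r), neutral)  # -log Pr(NEUTRAL = f(x \ r))
    return loss_full + loss_masked
```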
"Our classifier achieves an accuracy of 83.83% on the training set, 79.68% on the validation set and 74.5% on the test set.", "Due to the large document sizes (Table 6 in Appendix A) and the quadratic time complexity of the Ranking formulation as a function of document size, we only train L2E with the Sequence Labeling formulation; we use lexical similarity to measure IoU, due to the time-consuming computation of semantic similarity with BERT.", "Details about the dataset, the classifier and the explainer's architecture appear in Appendix F.", "The faithfulness and stability of the explanation methods are evaluated as follows.", "We select the top-K important words generated by an explanation method and compute the precision, recall and F1 against the human-annotated rationales.", "It is worth noting that our L2E explainer is not supervised by human rationales directly.", "Instead, we use the same experimental setup as in Section 4.5 to ensure the L2E explainer is learning from the baseline algorithms rather than the human rationales.", "Table 4 displays the average values over all test instances.", "As noted by Carton et al. (2020), the rationales in the original dataset are not exhaustively identified by human annotators.", "We therefore expect to observe a lower precision than recall, since the black-box model might still be able to utilize words that were not annotated, in addition to the words annotated by a human.", "The results in Table 4 align with this hypothesis.", "For instance, except for LRP on the positive reviews and Kernel SHAP on both kinds of reviews, all baselines and the corresponding L2E have higher recall than precision.", "Furthermore, L2E outperforms the corresponding baseline A significantly in most cases for both positive and negative reviews, except when comparing with LIME's precision.", "This observation indicates that learning the explanations of multiple examples together, as done by L2E, achieves high faithfulness to human rationales, as well as to the black-box model.", "Stability.", "Table 5 displays stability computed in three ways: (1) no filtering (which extracts important words only, as in Table 3), (2) filtering non-annotated words, and (3) filtering stop-words.", "[Table 5 (excerpt): IoU under no filter / filtering non-annotated words / filtering stop-words; the Human row reads 5.83 ± 0.27, 5.83 ± 0.27 and 3.06 ± 0.27, respectively.]", "For the two filtering measures, prior to filtering, we ensure the same number of important words is selected from the explanation produced by baseline A and L2E.", "Equation 1 is then used to compute the IoU value.", "To ensure a fair comparison, we select the same number of words in L2E and a comparable baseline A before filtering.", "Similarly to the results in 4.5, as seen in Table 5, L2E yields more stable explanations than the corresponding baselines.", "The best stability, obtained with L2E (58.6 ± 0.27) by filtering non-annotated words when learning from Occlusion, is comparable to that of the human rationales.", "This is due to the high recall (92 and 82 for positive and negative reviews, respectively, in Table 4) of the explanations produced by L2E, which indicates they have high overlap with human rationales.", "Further, when measuring the IoU values, the L2E explanations of similar examples have the same intersection with the human rationales, but a lower union.",
"This result indicates that people favour stable rationales in similar documents, and reinforces our findings regarding the greater consistency of the explanations produced by L2E compared to the baselines.", "LRP has been proven to have explanation continuity (Montavon et al., 2018), where the explanations of two nearly equivalent instances are also equivalent.", "However, we do not observe such a pattern in our experiments.", "We hypothesize that using perturbed instances as neighbours, as done by Montavon et al. (2018), does not necessarily follow the same distribution as the data.", "Instead, we posit that finding similar examples within a dataset, as done in our experiments, is a better proxy for stability evaluation.", "We have presented a Learning to Explain (L2E) approach to learn the commonalities of the explanation generation processes across different examples.", "We have further proposed Ranking and Sequence Labeling formulations to effectively learn the explainer model by discretizing the feature weights produced by existing explanation algorithms.", "Our experimental results show that our method can generate more stable explanations (i.e., ones that do not vary much across similar documents) than those generated by the explainer baselines, while maintaining the same level of faithfulness to the underlying black-box model as the baseline algorithms.", "Moreover, our L2E approach produces explanations between 5 and 7.5 x 10^4 times faster than the six baselines, making it suitable for long documents and very large black-box models.", "Our L2E approach trains an explainer, itself a black-box, to mimic the behaviour of an explanation method for an existing black-box model.", "A key challenge lies in the variation in the convergence status of such an explainer for different initializations.", "In order to mitigate this problem, we evaluate the performance of our explainer by averaging over three different initializations.", "The L2E approach opens up the possibility of distilling multiple explanation algorithms into one model.", "Although we focused on the stability, faithfulness and efficiency aspects of explanation generation, there are further desirable properties, e.g., transparency, comprehensibility and novelty (Robnik-Sikonja and Bohanec, 2018).", "Devising model-based explanation methods and evaluating them against these desiderata are interesting directions for future research.", "This research was supported in part by grant DP190100006 from the Australian Research Council.", "The first author was partly supported by CSIRO/Data61.", "The computational resources for this work were supported by the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) (www.massive.org.au).", "We would like to thank the anonymous reviewers for their insightful comments." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "method", "objective", "result", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "other", "other", "other", "other" ]
[ "Computer-aided translation (CAT), the use of software to assist a human translator in the translation process, has been proven to be useful in enhancing the productivity of human translators.", "Autocompletion, which suggests translation results according to the text pieces provided by human translators, is a core function of CAT.", "There are two limitations in previous research in this line.", "First, most research works on this topic focus on sentence-level autocompletion (i.e., generating the whole translation as a sentence based on human input), but word-level autocompletion is under-explored so far.", "Second, almost no public benchmarks are available for the autocompletion task of CAT.", "This might be among the reasons why research progress in CAT is much slower compared to automatic MT. In this paper, we propose the task of general word-level autocompletion (GWLAN) from a real-world CAT scenario, and construct the first public benchmark 1 to facilitate research in this topic.", "In addition, we propose an effective method for GWLAN and compare it with several strong baselines.", "Experiments demonstrate that our proposed method can give significantly more accurate predictions than the baseline methods on our benchmark datasets.", "Machine translation (MT) has witnessed great advancements with the emergence of neural machine translation (NMT) (Sutskever et al., 2014; Bah-danau et al., 2015; Wu et al., 2016; Gehring et al., 2017; Vaswani et al., 2017), which is able to produce much higher quality translation results than statistical machine translation (SMT) models (Koehn et al., 2003; Chiang, 2005; Koehn,", "Wir haben die Meinung von zwei Fachrzten eingeholt.", "2009).", "In spite of this, MT systems cannot replace human translators, especially in the scenarios with rigorous translation quality requirements (e.g., translating product manuals, patent documents, government policies, and other official doc-uments).", "Therefore, how to leverage the pros of MT systems to help human translators, namely, Computer-aided translation (CAT), attracts the attention of researchers (Barrachina et al., 2009; Green et al., 2014; Knowles and Koehn, 2016; Santy et al., 2019).", "Among all CAT technologies (such as translation memory, terminology management, sample sentence search, etc.), autocompletion plays an important role in a CAT system in enhancing translation efficiency.", "Autocompletion suggests translation results according to the text pieces provided by human translators.", "We note two limitations in previous research on the topic of autocompletion for CAT.", "First, most of previous studies aim to save human efforts by sentence-level autocompletion (Figure 1", "a).", "Nevertheless, word-level autocompletion (Figure 1 b and", "c) has not been systematically studied.", "Second, almost no public benchmarks are available for the autocompletion task of CAT.", "Although some achievements have been made, research progress in CAT is more sluggish than that in automatic MT. 
The lack of benchmarks has hindered researchers from making continuous progress in this area.", "In this work, we propose a General Word-Level AutocompletioN (GWLAN) task, and construct a benchmark with automatic evaluation to facilitate further research progress in CAT.", "Specifically, the GWLAN task aims to complete the target word for human translators based on a source sentence, the translation context, as well as human typed characters.", "Compared with previous work, GWLAN considers the four most general types of translation context: prefix, suffix, zero context, and bidirectional context.", "Besides, as in most real-world scenarios, in the GWLAN task we only know the relative position between the input words and the spans of the translation context.", "We construct a benchmark for the task, with the goal of supporting automatic evaluation and ensuring a convenient and fair comparison among different methods.", "The benchmark is built by extracting triples of source sentences, translation contexts, and human typed characters from standard parallel datasets.", "Accuracy is adopted as the evaluation metric in the benchmark.", "To address the variety of context types and the weak position information issue, we propose a neural model to complete a word in different types of context, as well as a joint training strategy to optimize its parameters.", "Our model can learn the representation of potential target words in translation and then choose the most possible word based on the human input.", "Our contributions are two-fold: We propose the task of general word-level autocompletion for CAT, and construct the first public benchmark to facilitate research in this topic.", "We propose a joint training strategy to optimize the model parameters on different types of contexts together. 2", "2 This approach has been implemented into a human-machine interactive translation system TranSmart (Huang et al., 2021) at www.transmart.qq.com .", "2 Related Work.", "Computer-aided translation (CAT) is a widely used practice when using MT technology in the industry.", "As MT systems advanced and improved, various efficient ways of interaction in CAT have emerged (Vasconcellos and Leon, 1985; Green et al., 2014; Hokamp and Liu, 2017; Weng et al., 2019; Wang et al., 2020).", "Among those different methods, autocompletion is the most closely related to our work.", "Therefore, we will first describe previous works on both sentence-level and word-level autocompletion, and then show the relation to other tasks and scenarios.", "Sentence-level Autocompletion.", "Most previous work on autocompletion for CAT focuses on sentence-level completion.", "A common use case in this line is interactive machine translation (IMT) (Green et al., 2014; Cheng et al., 2016; Peris et al., 2017; Knowles and Koehn, 2016; Santy et al., 2019).", "IMT systems utilize MT systems to complete the rest of a translation after human translators edit a prefix of the translation (Alabau et al., 2014; Zhao et al., 2020).", "For most IMT systems, the core to achieving this completion is prefix-constrained decoding (Wuebker et al., 2016).", "Another sentence-level autocompletion method, lexically constrained decoding (LCD) (Hokamp and Liu, 2017; Post and Vilar, 2018), has recently attracted a lot of attention (Hasler et al., 2018; Susanto et al., 2020; Kajiwara, 2019).", "Compared with IMT, LCD relaxes the constraints provided by human translators from prefixes to general forms: LCD completes a translation based on some unordered words (i.e., lexical constraints), which are not necessarily continuous (Hokamp and Liu, 2017; Hu et al., 2019; Dinu et al., 2019; Song et al., 2019).",
"Although it does not need additional training, its inference is typically less efficient compared with the standard NMT.", "Therefore, other works propose more efficient methods (Li et al., 2020; Song et al., 2019) that use lexical constraints in a soft manner rather than in a hard manner as in LCD.", "Word-level Autocompletion.", "Word-level autocompletion for CAT is less studied than sentence-level autocompletion.", "Langlais et al. (2000); Santy et al. (2019) consider completing a target word based on human typed characters and a translation prefix.", "But they require the target word to be the next word after the translation prefix, which limits the applicability of their approach.", "In contrast, the word-level autocompletion proposed in our work is more general and can be applied to real-world scenarios such as post-editing (Vasconcellos and Leon, 1985; Green et al., 2013) and LCD, where human translators need to input some words (corrections or constraints).", "Huang et al. (2015) propose a method to predict a target word based on human typed characters; however, this method only uses the source-side information and does not consider the translation context, leading to limited performance compared with our work.", "Others.", "Our work may also be related to previous works on input method editors (IME) (Huang et al., 2018; Lee et al., 2007).", "However, they are in the monolingual setting and not capable of using the useful multilingual information.", "In this section, we first describe why we need word-level autocompletion in real-world CAT scenarios.", "We then present the details of the GWLAN task and the construction of the benchmark.", "Why GWLAN?", "Word-level autocompletion is beneficial for improving input efficiency (Langlais et al., 2000).", "Previous works assume that the translation context should be a prefix and that the desired word is next to the prefix, as shown in Figure 1 (b), where the context is 'We asked two' and the desired word is 'specialists'.", "However, in some real-world CAT scenarios such as post-editing and lexically constrained decoding, the translation context may be discontinuous and the input words (corrections or lexical constraints) are not necessarily conjunct to the translation context.", "As shown in Figure 1 (c), the context is 'We * * their opinion' and the human typed characters 'sp' are conjunct to neither 'We' nor 'their' in the context.", "Therefore, existing methods cannot perform well in such a general scenario.", "This motivates us to propose a general word-level autocompletion task for CAT.", "Suppose x = (x_1, x_2, ..., x_m) is a source sequence, s = (s_1, s_2, ..., s_k) is a sequence of human typed characters, and a translation context is denoted by c = (c_l, c_r), where c_l = (c_{l,1}, c_{l,2}, ..., c_{l,i}) and c_r = (c_{r,1}, c_{r,2}, ..., c_{r,j}).", "The translation pieces c_l and c_r are on the left and right hand side of s , respectively.", "Formally, given a source sequence x , a typed character sequence s and a context c , the general word-level autocompletion (GWLAN) task aims to predict a target word w which is to be placed between c_l and c_r to constitute a partial translation.", "Note that in the partial translation consisting of c_l , w and c_r , w is not necessarily consecutive to c_{l,i} or c_{r,1}.", "For example, in Figure 1 (c), c_l = (We, *), c_r = (*, their, opinion, .) and s = (s, p); the GWLAN task is expected to predict w = specialists to constitute the partial translation 'We * specialists * their opinion.', where * represents zero, one, or more words (i.e., the two words before and after it are not necessarily consecutive).",
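For concreteness, a GWLAN instance could be represented as follows, anticipating the four context types defined next; this is an illustrative sketch, and the field names are ours rather than the benchmark's released format:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GwlanInstance:
    """One GWLAN item: predict `target` from the source sentence, the
    typed characters and a (possibly empty) two-sided context."""
    source: List[str]          # x = (x_1, ..., x_m)
    typed_chars: str           # s, e.g. "sp"
    left_context: List[str]    # c_l, may be empty
    right_context: List[str]   # c_r, may be empty
    target: str                # w, e.g. "specialists"

    def context_type(self) -> str:
        if not self.left_context and not self.right_context:
            return "zero-context"
        if not self.right_context:
            return "prefix"
        if not self.left_context:
            return "suffix"
        return "bi-context"
```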
"To make our task more general in real-world scenarios, we assume that the left context c_l and the right context c_r can be empty, which leads to the following four types of context: Zero-context: both c_l and c_r are empty; Suffix: c_l is empty; Prefix: c_r is empty; Bi-context: neither c_l nor c_r is empty.", "With the tuple ( x , s , c ), the GWLAN task is to predict the human-desired word w .", "Relation to most similar tasks.", "Some similar techniques have been explored in CAT.", "Green et al. (2014) and Knowles and Koehn (2016) studied an autocompletion scenario called translation prediction (TP), which provides suggestions for the next word (or phrase) given a prefix.", "Besides the strict assumption about the translation context (i.e., a prefix), another difference from GWLAN is that the information of human typed characters is ignored in their setting.", "There also exist some works that consider the human typed sequences (Huang et al., 2015; Santy et al., 2019), but they only consider a specific type of translation context.", "Huang et al. (2015) propose to complete a target word based on the zero-context assumption.", "Despite its flexibility, this method is unable to exploit translation contexts to improve the autocompletion performance.", "The word-level autocompletion methods in Langlais et al. (2000); Santy et al. (2019) have the same assumption as TP, which impedes the use of their methods in scenarios like post-editing and lexically constrained decoding, where human inputs are not necessarily conjunct to the variety of translation contexts.", "To set up a benchmark, we first need to create a large-scale dataset including tuples ( x , s , c , w ) for training and evaluating GWLAN models.", "Ideally, we could hire professional translators to manually annotate such a dataset, but this is too costly in practice.", "Therefore, in this work, we propose to automatically construct the dataset from the parallel datasets originally used in automatic machine translation tasks.", "The procedure for constructing our data is the same for the train, validation, and test sets, and we construct a dataset for each type of translation context.", "Assume we are given a parallel dataset {( x^i , y^i )}, where y^i is the reference translation of x^i .", "Then, we can automatically construct the data c^i and s^i by randomly sampling from y^i .", "We first sample a word w = y^i_k and then demonstrate how to extract c^i for the different translation contexts: Zero-context: both c_l and c_r are empty; Suffix: randomly sample a translation piece c_r = y_{p_{r,1}:p_{r,2}} from y , where k < p_{r,1} < p_{r,2} ( c_l is empty here); Prefix: randomly sample a translation piece c_l = y_{p_{l,1}:p_{l,2}} from y , where p_{l,1} < p_{l,2} < k ( c_r is empty here); Bi-context: sample c_l as in prefix, and sample c_r as in suffix.", "Then we have to simulate the human typed characters s based on w .", "For languages like English and German, we sample a position p from the character sequence and take the human input to be s = w_{1:p}, where 1 ≤ p < L_w.", "For languages like Chinese, the human input is the phonetic symbols of the word, since the word cannot be directly typed into the computer.", "Therefore, we have to convert w to phonetic symbols, which are characters in the Latin alphabet, and sample s from the phonetic symbols as we do for English.",
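The construction procedure can be sketched as below; this simplified version samples spans uniformly and omits the heuristics against trivial instances applied later, so the bounds and names are assumptions made for illustration:

```python
import random

def simulate_instance(y, context_type, max_ctx_len=4):
    """Sample a target w = y_k from reference y, then left/right context
    spans and a typed prefix s (for alphabetic languages)."""
    k = random.randrange(len(y))
    w = y[k]
    c_l, c_r = [], []
    if context_type in ("prefix", "bi-context") and k > 0:
        p1 = random.randrange(0, k)                       # p_{l,1} <= p_{l,2} < k
        p2 = random.randrange(p1, min(k, p1 + max_ctx_len))
        c_l = y[p1:p2 + 1]
    if context_type in ("suffix", "bi-context") and k < len(y) - 1:
        p1 = random.randrange(k + 1, len(y))              # k < p_{r,1} <= p_{r,2}
        p2 = random.randrange(p1, min(len(y), p1 + max_ctx_len))
        c_r = y[p1:p2 + 1]
    p = random.randrange(1, max(2, len(w)))               # 1 <= p < L_w
    s = w[:p]                                             # one-char words would need filtering
    return c_l, c_r, s, w
```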
"Given a tuple ( x , c , s ), our approach decomposes the whole word autocompletion process into two parts: model the distribution of the target word w based on the source sequence x and the translation context c , and find the most possible word w based on the distribution and the human typed sequence s .", "Therefore, in the following subsections, we first propose a word prediction model (WPM) to define the distribution p( w | x , c ) of the target word w (4.1).", "Then we can treat the human input sequence s as a soft constraint or a hard constraint to complete s and obtain the target word w (4.2).", "Finally, we present two strategies for training and inference (4.3).", "The purpose of WPM is to model the distribution p( w | x , c ).", "More concretely, we will use a single placeholder [MASK] to represent the unknown target word w , and use the representation of [MASK] learned by WPM to predict it.", "Formally, given the source sequence x and the translation context c = ( c_l , c_r ), the probability of the target word w is: P( w | x , c_l , c_r ; θ) = softmax(φ( h ))[ w ], (2) where h is the representation of [MASK], φ is a linear network that projects the hidden representation h to a vector with the dimension of the target vocabulary size V , and softmax( d )[ w ] takes the component corresponding to w after the softmax operation over a vector d ∈ R^V.", "[Figure 3: input of the bidirectional masked attention module; the token embeddings of [SOS], the, aircraft, [MASK], rapidly and [EOS] are summed with position embeddings E_0 to E_5 and serve as the query/input and key/value.]", "Inspired by the attention-based architectures (Vaswani et al., 2017; Devlin et al., 2019), 3 we use a dual-encoder architecture to learn the representation h based on the source sequence x and the translation context c .", "3 Because the use of attention-based models has become ubiquitous recently, we omit an exhaustive background description of the model and refer readers to Vaswani et al. (2017) and Devlin et al. (2019).",
"Our model has a source encoder and a cross-lingual encoder.", "The source encoder of WPM is the same as the Transformer encoder and is used to encode the source sequence x .", "As shown in Figure 2, the output of the source encoder is later passed to the cross-lingual encoder.", "The cross-lingual encoder is similar to the Transformer decoder; the only difference is that we replace the auto-regressive attention (ARA) layer by a bidirectional masked attention (BMA) module, because the ARA layer cannot use the leftward information flow (i.e., c_r ).", "Specifically, the BMA module is built from a multi-layer self-attention network.", "As shown in Figure 3, in each layer of BMA, each token in the attention query can attend to all words in the translation contexts c_l and c_r .", "In addition, the input consists of three parts, the [MASK] token and the translation contexts c_l and c_r , as illustrated in Figure 3.", "Note that the position embeddings E are only used to represent the relative position relationship between tokens.", "Taking the sentence in Figure 3 as an example, E_3 does not precisely specify the position of the target word w but roughly indicates that w is on the right-hand side of c_l and on the left-hand side of c_r .", "Finally, the representation of [MASK] as learnt by BMA is passed to the Add & Norm layer, as shown in Figure 2.", "After learning the representation h of the [MASK] token, there are two ways to use the human input sequence s to determine the human-desired word.", "First, we can learn the representation of s and use it as a soft constraint while predicting the word w .", "Taking the sentence in Figure 3 as an example, supposing the human typed sequence is s = des, we can use an RNN network to learn the representation of des and concatenate it with h to predict the word descending.", "Alternatively, we can use des as a hard constraint: P_s[ w ] = P( w | x , c ; θ) / Z if w starts with s , and P_s[ w ] = 0 otherwise,", "where P( · | · ) is the probability distribution defined in Eq. (2) and Z is a normalization term independent of w .", "Then we pick w = arg max_w P_s[ w ] as the most possible word.", "In our preliminary experiments, the performances of these two methods were comparable, and there was no significant gain when we used them together.", "One main reason is that the model can already learn the starts-with behaviour precisely in the soft-constraint method.", "Therefore, we propose to use the human inputs as hard constraints in our later experiments, because of this method's efficiency and simplicity.",
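A minimal sketch of the hard-constraint selection, assuming `probs` is the vocabulary distribution of Eq. (2) as a 1-D tensor aligned with `vocab`:

```python
import torch

def complete_word(probs, vocab, s):
    """Zero out words not starting with the typed characters s,
    renormalize by Z, and return the most probable remaining word."""
    mask = torch.tensor([w.startswith(s) for w in vocab], dtype=torch.bool)
    constrained = torch.where(mask, probs, torch.zeros_like(probs))
    z = constrained.sum()                     # normalization term Z
    if z == 0:                                # no vocabulary word matches s
        return None
    return vocab[int(torch.argmax(constrained / z))]
```

Note that dividing by Z does not change the argmax; the normalization only matters if the constrained distribution itself is needed downstream.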
"Suppose D denotes the training data for GWLAN, i.e., a set of tuples ( x , c , s , w ).", "Since there are four different types of context in D , as presented in 3, we can split D into four subsets D_zero, D_prefix, D_suffix and D_bi.", "To yield good performance on those four types of translation context, we also propose two training strategies.", "The inference strategy differs accordingly.", "Strategy 1: One Context Type, One Model.", "For this strategy, we train one model for each type of translation context.", "Specifically, for each type of context t ∈ {zero, prefix, suffix, bi}, we independently train one model θ_t by minimizing the following loss L(D_t; θ): L(D_t; θ) = -(1 / |D_t|) Σ_{( x , c , s , w ) ∈ D_t} log P( w | x , c ; θ), (3) where P( w | x , c ; θ) is the WPM model defined in Eq. 2, |D_t| is the size of the training dataset D_t, and t can be any type of translation context.", "In this way, we actually obtain four models in total after training.", "In the inference process, for each testing instance ( x , c_l , c_r , s ), we decide its context type t in terms of c_l and c_r and then use θ_t to predict the word w .", "Strategy 2: Joint Model.", "The separate training strategy is straightforward.", "However, it may also leave the models stuck in local optima.", "To address this issue, we also propose a joint training strategy, which has the ability to pull the model out of a local optimum once the parameters overfit one particular translation context.", "Therefore, using the joint training strategy, we train a single model for all types of translation context by minimizing the following objective: L(D; θ) = L(D_zero; θ) + L(D_prefix; θ) + L(D_suffix; θ) + L(D_bi; θ), where each L(D_t; θ) is as defined in Eq. 3.", "In this way, we actually obtain a single model after training.", "In the inference process, for each testing instance ( x , c_l , c_r , s ) we always use θ to predict the target word w .",
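The joint strategy amounts to summing the per-context losses of Eq. (3); the following is a sketch with assumed batch and model interfaces, not the authors' implementation:

```python
import torch.nn.functional as F

def joint_loss(model, batches):
    """L(D) = L(D_zero) + L(D_prefix) + L(D_suffix) + L(D_bi), where each
    term is the negative log-likelihood of Eq. (3). `batches` is assumed
    to map a context type to (x, c, w) tensors, and `model(x, c)` to
    return logits over the target vocabulary."""
    total = 0.0
    for t in ("zero", "prefix", "suffix", "bi"):
        x, c, w = batches[t]
        logits = model(x, c)                        # defines P(w | x, c; theta)
        total = total + F.cross_entropy(logits, w)  # mean of -log P over D_t
    return total
```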
(2015).", "# Systems Zh En En Zh De En En De NIST05 NIST06 NIST05 NIST06 NT13 NT14 NT13 NT14 1 TRANSTABLE 41.40 39.78 28.00 26.99 37.43 36.64 32.99 31.12 2 TRANS-PE 34.51 35.50 32.23 34.88 34.45 33.02 31.51 30.65 3 TRANS-NPE 35.97 36.78 34.31 36.19 36.69 36.01 33.25 31.30 4 WPM-SEP 54.15 55.04 53.30 53.67 56.93 55.67 54.54 51.46 5 WPM-JOINT 55.54 55.85 53.64 54.25 57.84 56.75 56.91 52.68", "TRANS-PE: We train a vanilla NMT model using the Transformer-base model.", "During the inference process, we use the context on the left hand side of human input as the model input, and return the most possible words based on the probability of valid words selected out by the human input.", "This baseline is inspired by Langlais et al. (2000); Santy et al. (2019).", "TRANS-NPE: As another baseline, we also train an NMT model based on Transformer, but without position encoding on the target side.", "While testing, we use the averaged hidden vectors of all the target words outputted by the last decoder layer to predict the potential candidates.", "Table 1 shows the main results of our methods and three baselines on the test sets of Chinese-English and German-English datasets.", "It is clear from the results that our methods WPM-SEP and WPM-JOINT significantly outperform the three baseline methods.", "Results on Row 4 and Row 5 of Table 1 also show that the WPM-JOINT method, which uses a joint training strategy to optimize a single model, achieves better overall performance than WPM-SEP , which trains four models for different translation contexts respectively.", "In-depth analysis about the two training strategies is presented in the next section.", "The method TRANS-PE, which assumes the human input is the next word of the given context, behaves poorly under the more general setting.", "As the results of TRANS-NPE show, when we use the same model as TRANS-PE and relax the constraint of position by removing the position encoding, the accuracy of the model improves.", "One interesting finding is that the TRANSTABLE method, which is only capable of leveraging the zero-context, achieves good results on the Chinese-English task when the target language is English.", "However, when the target language is Chinese, the performance of TRANSTABLE drops significantly.", "In this section, we presents more detailed results on the four translation contexts and analyze the features of GWLAN.", "These analyses can help us to better understand the task and propose effective approaches in the future.", "Separate Training VS. 
"Separate Training vs. Joint Training.", "Compared with WPM-SEP, WPM-JOINT shows two advantages.", "On one hand, even though there is only one model, WPM-JOINT yields better performance than WPM-SEP, enabling simpler deployment.", "This may be because training on multiple related tasks forces the model to learn more expressive representations, avoiding over-fitting.", "On the other hand, the variance of the results on different translation contexts is smaller for WPM-JOINT, which can provide a more steady autocompletion service.", "From the viewpoint of joint training, the lower variance may be caused by WPM-JOINT spending more effort on minimizing the loss with maximal risk (i.e., zero-context), although it may sometimes slightly sacrifice the task with minimal risk (i.e., bi-context).", "The results of WPM-SEP and WPM-JOINT also share some common patterns.", "Firstly, the performances of the two methods on the prefix and suffix translation contexts are nearly the same.", "Although the prefix and suffix may play different roles in the SVO language structure, they have little impact on the autocompletion accuracy of our method.", "Moreover, among the results on the four translation contexts, the performance on bi-context is better than on prefix and suffix, which in turn are better than on zero-context.", "This finding shows that more context information can help to reduce the uncertainty about the human-desired words.", "Comparison with baselines.", "The TRANS-PE method used in previous works is more sensitive to the position of the human input.", "The statistics show that the averaged distances in the original sentence between the predicted words and the translation contexts vary for different translation contexts: they are 7.4, 6.5, 14.1, and 3.2 for prefix, suffix, zero-context, and bi-context, respectively.", "When the desired words are much closer to the context, TRANS-PE can achieve better performance.", "Moreover, TRANS-PE can achieve more than 80 accuracy points when the predicted word is the next word after the given prefix; however, its performance drops significantly when the word is not necessarily conjunct to the prefix.", "We can also see that TRANS-NPE, which removes the position information of target words, achieves better overall performance compared with TRANS-PE.", "In contrast, the performance of TRANSTABLE is less affected by the position of the predicted words, as demonstrated by the low variances on both tasks in Table 2.",
"The results of TRANSTABLE also surprised us: it achieves more than 41 accuracy points on the Zh→En task.", "This observation shows the importance of alignment and the potential of statistical models.", "Compared with the results on the Zh→En task, the overall accuracy on the En→Zh task is much lower, likely because the number of valid words left after filtering by the human input is much larger for Chinese than for English.", "Therefore, it is easier for TRANSTABLE to determine the human-desired words in English.", "In this work, the translation contexts are simulated using the references.", "However, in real-world scenarios, translation contexts may not be perfect, i.e., some words in the translation contexts may be incorrect.", "In this section, we evaluate the robustness of our model on noisy contexts.", "We first use the translation table constructed by TRANSTABLE to find some target words that share the same source words with the original target words, and then use those found words as noise tokens.", "The robustness results are shown in Figure 4.", "[Figure 4: Robustness analysis. The x-axis represents the percentage of words that have been replaced by noise tokens in NIST02. The model used for this analysis is WPM-JOINT, trained on the Zh→En task without noisy translation contexts.]", "For all translation context types except zero-context, the performance drops slowly as the percentage of noise tokens increases.", "However, even with 80% of the words in the context replaced by noise, the performance of WPM-JOINT outperforms the zero-context case, which shows that our WPM-JOINT method is noise tolerant.", "In this work, we formalize the task as a classification problem.", "However, a generation formalization also deserves to be explored in the future.", "For example, generation may happen in two circumstances: word-level completion based on subwords, and phrase-level completion.", "In the first case, although the autocompletion service provided to human translators is word-level, the internal system can generate a sequence of subwords (Sennrich et al., 2015) that satisfies the human typed characters, and provide human translators with the merged subwords.", "This subword sequence generation can significantly alleviate the OOV issue in word-level autocompletion.", "In the phrase-level autocompletion case, if we can predict more than one desired word, the translation efficiency and experience may be improved further.", "We would like to leave this as future work.", "It is also worth noting that we did not conduct human studies in this work.", "We think the evidence in previous work can already prove the effectiveness of word-level autocompletion when assisting human translators.", "For example, TransType (Langlais et al., 2000) is a simple rule-based tool that only considers the prefix context, but the majority of translators said that TransType improved their typing speed a lot.",
"Huang et al. (2015) hired 12 professional translators and systematically evaluated their word autocompletion tool based on zero-context.", "Experiments show that the more keystrokes are reduced, the more time can be saved for translators.", "Since the prediction accuracy is highly correlated with the keystrokes, we think higher accuracy will make translators more productive.", "That is the main reason that we use accuracy to automatically evaluate the model performance.", "Besides, the automatic evaluation metric also makes the GWLAN task easier to follow.", "We propose a General Word-Level AutocompletioN (GWLAN) task for computer-aided translation (CAT).", "In our setting, we relax the strict constraints on the translation contexts made in previous work, and abstract the four most general translation contexts used in real-world CAT scenarios.", "We propose two approaches to address the variety of context types and the weak position information issue in GWLAN.", "To support automatic evaluation and to ensure a convenient and fair comparison among different methods, we construct a benchmark for the task.", "Experiments on this benchmark show that our method outperforms baseline methods by a large margin on four datasets.", "We believe that this benchmark, once released, will push forward future research in CAT.", "We would like to thank the three anonymous reviewers for their invaluable discussions on this work.", "The corresponding author is Lemao Liu." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "method", "objective", "objective", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "method", "method", "method", "objective", "objective", "objective", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "other", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "method", "method", "abstain", "method", "method", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "method", "result", "method", "other", "other" ]
[ "This paper presents a multilingual study of word meaning representations in context.", "We assess the ability of both static and contextualized models to adequately represent different lexical-semantic relations, such as homonymy and synonymy.", "To do so, we created a new multilingual dataset that allows us to perform a controlled evaluation of several factors such as the impact of the surrounding context or the overlap between words, conveying the same or different senses.", "A systematic assessment on four scenarios shows that the best monolingual models based on Transformers can adequately disambiguate homonyms in context.", "However, as they rely heavily on context, these models fail at representing words with different senses when occurring in similar sentences.", "Experiments are performed in Galician, Portuguese, English, and Spanish, and both the dataset (with more than 3,000 evaluation items) and new models are freely released with this study.", "Contrary to static vector models, which represent the different senses of a word in a single vector (Erk, 2012; Mikolov et al., 2013), contextualized models generate representations at token-level (Pe-ters et al., 2018; Devlin et al., 2019), thus being an interesting approach to model word meaning in context.", "In this regard, several studies have shown that clusters produced by some contextualized word embeddings (CWEs) are related to different senses of the same word (Reif et al., 2019; Wiedemann et al., 2019), or that similar senses can be aligned in cross-lingual experiments (Schuster et al., 2019).", "However, more systematic evaluations of polysemy (i.e., word forms that have different related meanings depending on the context (Apres-jan, 1974)), have shown that even though CWEs present some correlations with human judgments (Nair et al., 2020), they fail to predict the similarity of the various senses of a polysemous word (Haber and Poesio, 2020).", "As classical datasets to evaluate the capabilities of vector representations consist of single words without context (Finkelstein et al., 2001) or heavily constrained expressions (Kintsch, 2001; Mitchell and Lapata, 2008), new resources with annotations of words in free contexts have been created, including both graded similarities (Huang et al., 2012; Armendariz et al., 2020) or binary classification of word senses (Pilehvar and Camacho-Collados, 2019; Raganato et al., 2020).", "However, as these datasets largely include instances of polysemy, they are difficult to solve even for humans (in fact, the highest reported human upper bound is about 80%) as the nuances between different senses depend on non-linguistic factors such as the annotator procedure or the target task (Tuggy, 1993; Kilgarriff, 1997; Hanks, 2000; Erk, 2010).", "In this paper, we rely on a more objective and simple task to assess how contextualized approaches (both neural network models and contextualized methods of distributional semantics) represent word meanings in context.", "In particular, we observe whether vector models can identify unrelated meanings represented by the same word form (homonymy) and the same sense conveyed by different words (synonymy).", "In contrast to polysemy, there is a strong consensus concerning the representation of homonymous senses in the lexicon, and it has been shown that homonyms are cognitively processed differently than polysemous words (Klepousniotou et al., 2012; MacGregor et al., 2015).", "In this regard, exploratory experiments in English suggest that some CWEs correctly 
model homonymy, approximating the contextualized vectors of a homonym to those of its paraphrases (Lake and Murphy, 2020), and showing stronger correlation with human judgments to those of polysemous words (Nair et al., 2020).", "However, as homonyms convey unrelated meanings depending on the context, it is not clear whether the good performance of CWEs actually derives from the contextualization process or simply from the use of explicit lexical cues present in the sentences.", "Taking the above into account, we have created a new multilingual dataset (in Galician, Portuguese, English, and Spanish) with more than 3,000 evaluation items.", "It allows for carrying out more than 10 experiments and controlling factors such as the surrounding context, the word overlap, and the sense conveyed by different word forms.", "We use this resource to perform a systematic evaluation of contextualized word meaning representations.", "We compare different strategies using both static embeddings and current models based on deep artificial neural networks.", "The results suggest that the best monolingual models based on Transformers (Vaswani et al., 2017) can identify homonyms having different meanings adequately.", "However, as they strongly rely on the surrounding context, words with different meanings are represented very closely when they occur in similar sentences.", "Apart from the empirical conclusions and the dataset, this paper also contributes with new BERT and fastText models for Galician.", "1 Section 2 presents previous studies about word meaning representation.", "Then, Section 3 introduces the new dataset used in this paper.", "In Section 4 we describe the models and methods to obtain the vector representations.", "Finally, the experiments and results are discussed in Section 5, while Section 6 draws some conclusions of our study.", "A variety of approaches has been implemented to compute word meaning in context by means of standard methods of distributional semantics (Schtze, 1998; Kintsch, 2001; McDonald and Brew, 2004; Erk and Pad, 2008).", "As compositional distributional models construct sentence representations from their constituents vectors, they take into account contextualization effects on meaning (Mitchell and Lapata, 2008; Baroni and Zam-parelli, 2010; Baroni, 2013).", "However, these approaches often have scalability problems as their representations grow exponentially with the size of the sentences.", "Therefore, the datasets used to 1 Dataset, models, and code are available at https:// github.com/marcospln/homonymy_acl21/ .", "evaluate them are composed of highly restricted phrases (Grefenstette and Sadrzadeh, 2011).", "The rise of artificial neural networks on natural language processing popularized the use of vector representations, and the remarkable performance of neural language models (Melamud et al., 2016; Peters et al., 2018) led to a productive line of research exploring to what extent these models represent linguistic knowledge (Rogers et al., 2020).", "However, few of these works have focused on lexical semantics, and most of the relevant results in this field come from evaluations in downstream tasks.", "In this regard, Wiedemann et al. (2019) found that clusters of BERT embeddings (Devlin et al., 2019) seem to be related to word senses, while Schuster et al. (2019) observed that clusters of polysemous words correspond to different senses in a cross-lingual alignment of vector representations.", "Probing LSTMs on lexical substitution tasks, Aina et al. 
(2019) showed that these architectures rely on the lexical information from the input embeddings, and that the hidden states are biased towards contextual information.", "In an exploration of the geometric representations of BERT, Reif et al. (2019) found that different senses of a word tend to appear separated in the vector space, while several clusters seem to correspond to similar senses.", "Recently, Vulić et al. (2020) evaluated the performance of BERT models on several lexical-semantic tasks in various languages, including semantic similarity and word analogy.", "The results show that using special tokens ([CLS] or [SEP]) hurts the quality of the representations, and that these tend to improve across layers until saturation.", "As this study uses datasets of single words (without context), type-level representations are obtained by averaging the contextualized vectors over various sentences.", "There are several resources to evaluate word meaning in free contexts, such as the Stanford Contextual Word Similarity (Huang et al., 2012) and CoSimLex (Armendariz et al., 2020), both representing word similarity on a graded scale, or the Word-in-Context (WiC) datasets, focused on binary classifications (i.e., each evaluation item contains two sentences with the same word form, having the same or different senses) (Pilehvar and Camacho-Collados, 2019; Raganato et al., 2020).", "These datasets include not only instances of homonymy but mostly polysemous words.", "In this regard, studies on polysemy using Transformers have obtained diverse results: Haber and Poesio (2020) found that BERT embeddings correlate better with human ratings of co-predication than with similarity between word senses, thus suggesting that these representations encode more contextual information than word sense knowledge.", "Nevertheless, the results of Nair et al. 
(2020) indicate that BERT representations are correlated with human scores of polysemy.", "An exploratory experiment of the latter study also shows that BERT discriminates between polysemy and homonymy, which is also suggested by other pilot evaluations reported by Lake and Murphy (2020) and Yu and Ettinger (2020).", "Our study follows this research line, pursuing objective and unambiguous lexical criteria such as the representation of homonyms and synonyms.", "In this context, there is a broad consensus in the psycholinguistics literature regarding the representation of homonyms as different entries in the lexicon (in contrast to polysemy, for which there is a long discussion on whether the senses of polysemous words are stored as a single core representation or as independent entries (Hogeweg and Vicente, 2020)).", "In fact, several studies have shown that homonyms are cognitively processed differently from polysemous words (Klepousniotou et al., 2012; Rabagliati and Snedeker, 2013).", "In contrast to the different senses of polysemous words, which are simultaneously activated, the meanings of homonyms are in conflict during processing, with the irrelevant ones being deactivated by the context (MacGregor et al., 2015).", "To analyze how vector models represent homonymy and synonymy in context, we have built a new multilingual resource with a strong inter-annotator agreement, presented below.", "This section briefly describes some aspects of lexical semantics relevant to our study, and then presents the new dataset used in the paper.", "Homonymy and homography: Homonymy is a well-known type of lexical ambiguity that can be described as the relation between distinct and unrelated meanings represented by the same word form, such as match, meaning for instance 'sports game' or 'stick for lighting fire'.", "In contrast to polysemy (where one lexeme conveys different related senses depending on the context, e.g., newspaper as an organization or as a set of printed pages), it is often assumed that homonyms are different lexemes that have the same lexical form (Cruse, 1986), and therefore they are stored as independent entries in the lexicon (Pustejovsky, 1998).", "There are two main criteria for homonymy identification: diachronically, homonyms are lexical items that have different etymologies but are accidentally represented by the same word form, while a synchronic perspective stresses unrelatedness in meaning.", "Even if both approaches tend to identify similar sets of homonyms, there may be ambiguous cases that are diachronically but not synchronically related (e.g., two meanings of banco, 'bench' and 'financial institution', in Portuguese or Spanish could be considered polysemous as they derive from the same origin, but as this is a purely historical association, most speakers are not aware of the common origin of both senses).", "In this study, we follow the synchronic perspective, and consider homonymous meanings to be those that are clearly unrelated (e.g., they unambiguously refer to completely different concepts) regardless of their origin.", "It is worth mentioning that as we are dealing with written text, we are actually analyzing homographs (different lexemes with the same spelling) instead of homonyms.", "Thus, we discard instances of phonologically identical words which are written differently, such as the Spanish hola 'hello' and ola 'wave', both representing the phonological form /ola/.", "Similarly, we include words with the same spelling representing different phonological forms, 
e.g., the Galician-Portuguese sede, which corresponds to both /sede/ 'thirst' and /sɛde/ 'headquarters'.", "In this paper, homonymous senses are those unrelated meanings conveyed by the same (homonym) word form.", "For instance, coach may have two homonymous senses ('bus' and 'trainer'), which can be conveyed by other words (synonyms) in different contexts (e.g., by bus or trainer).", "Structure of the dataset: We have created a new resource to investigate how vector models represent word meanings in context.", "In particular, we want to observe whether they capture", "(i) different senses conveyed by the same word form (homonymy), and", "(ii) equivalent senses expressed by different words (synonymy).", "The resource contains controlled sentences so that it allows us to observe how the context and word overlap affect word representations.", "In fact, several dictionaries organize them in a single entry: https://dicionario.priberam.org/banco, https://dle.rae.es/banco.", "To represent each sense both in the same and in different contexts, we have included five sentences for each meaning (see Table 1 for examples): three sentences containing the target word, a synonym, and a word with a different sense, all of them in the same context (sentences 1 to 3), and two additional sentences with the target word and a synonym, representing the same sense (sentences 4 and 5, respectively).", "Thus, for each sense we have four sentences (1, 2, 4, 5) with a word conveying the same sense (both in the same and in different contexts) and another sentence (3) with a different word in the same context as sentences 1 and 2.", "From this structure, we can create datasets of sentence triples, where the target words of two of them convey the same sense, and the third one has a different meaning.", "Thus, we can generate up to 48 triples for each pair of senses (24 in each direction: sense 1 vs. sense 2, and vice-versa).",
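For illustration, the triple counts follow from simple combinatorics. Below is a minimal sketch, assuming (as the text suggests) that the two same-sense items are drawn from the four same-sense sentences of one sense (1, 2, 4, 5) and the contrasting item from the equivalent four sentences of the other sense; function and variable names are ours.

```python
from itertools import combinations

def triples_for_sense_pair(sense_a, sense_b):
    """Enumerate evaluation triples (a, b, c): a and b share a sense, c differs.

    sense_a, sense_b: the four same-sense items of each sense.
    C(4, 2) = 6 same-sense pairs times 4 contrasting items gives
    24 triples per direction, i.e., up to 48 for a pair of senses.
    """
    triples = []
    for a, b in combinations(sense_a, 2):   # 6 same-sense pairs
        for c in sense_b:                   # 4 contrasting items
            triples.append((a, b, c))
    return triples

# One direction: sense 1 vs. sense 2 (24 triples); swap arguments for the rest.
s1 = ["1:1", "1:2", "1:4", "1:5"]
s2 = ["2:1", "2:2", "2:4", "2:5"]
assert len(triples_for_sense_pair(s1, s2)) == 24
```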
"These datasets allow us to evaluate several semantic relations at the lexical level, including homonymy, synonymy, and various combinations of homonymous senses.", "Interestingly, we can control for the impact of the context (e.g., are contextualized models able to distinguish between different senses occurring in the same context, or do they incorporate excessive contextual information into the word vectors?), the word overlap (e.g., can a model identify different senses of the same word form depending on the context, or does it strongly depend on lexical cues?), or the POS-tag (e.g., are homonyms with different POS-tags easily disambiguated?).", "Construction of the dataset: We compiled data for four languages: Galician, Portuguese, Spanish, and English.", "We tried to select sentences compatible with the different varieties of the same language (e.g., with the same meaning in UK and US English, or in Castilian and Mexican Spanish).", "Galician is generally considered a variety of a single (Galician-)Portuguese language.", "However, the two are divided in this resource, as Galician has recently been standardized using a Spanish-based orthography that formally separates it from Portuguese (Samartim, 2012).", "However, we gave priority to the European varieties when necessary (e.g., regarding spelling variants).", "The dataset was built using the following procedure: First, language experts (one per language) compiled lists of homonyms using dedicated resources for language learning, together with WordNet and other lexicographic data (Miller, 1995; Montraveta and Vázquez, 2010; Guinovart, 2011; Rademaker et al., 2014).", "Only clear and unambiguous homonyms were retained (i.e., those at the extreme of the homonymy-polysemy-vagueness scale (Tuggy, 1993)).", "These homonyms were then enriched with frequency data from large corpora: Wikipedia and SLI GalWeb (Agerri et al., 2018) for Galician, and a combination of Wikipedia and Europarl for English, Spanish and Portuguese (Koehn, 2005).", "From these lists, each linguist selected the most frequent homonyms, annotating them as ambiguous at the type or token level (absolute homonymy and partial homonymy in Lyons' terms (Lyons, 1995)).", "As a substantial part were noun-verb pairs, only a few of these were included.", "For each homonym, the language experts selected from corpora two sentences (1 and 4) in which the target words were not ambiguous.", "They then selected a synonym that could be used in sentence 1 without compromising grammaticality (thus generating sentence 2), and compiled an additional sentence for it (5), trying to avoid further lexical ambiguities in this process.", "Sentences were selected, adapted, and simplified using GDEX-inspired constraints (Kilgarriff et al., 2008) (i.e., avoiding high punctuation ratios, unnecessary subordinate clauses, etc.), which resulted in the creation of new sentences.", "In most cases, this synonym is the same as that of sentence 2, but this is not always the case.", "Besides, in some cases we could not find words conveying the same sense, for which we do not have sentences 2 and 5.", "For each homonym, the linguists selected a word with a different meaning (for sentence 3), trying to maximize the following criteria:", "(i) to refer unambiguously to a different concept, and to preserve", "(ii) semantic felicity and", "(iii) grammaticality.", "The size of the final datasets varies depending on the initial lists and on the ease of finding synonyms in context.", 
"Results: Apart from the sentence triples explained above, the dataset structure allows us to create evaluation sets with different formats, such as sentence pairs to perform binary classifications as in the WiC datasets.", "Table 2 shows the number of homonyms, senses, and sentences of the multilingual resource, together with the size of the evaluation datasets in different formats.", "As the original resource was created by one annotator per language, we ensured its quality as follows: We randomly extracted sets of 50 sentence pairs and gave them to other annotators (5 for Galician, and 1 for each of the other three varieties, all of them native speakers of the target language).", "We then computed the Cohen's inter-annotator agreement (Cohen, 1960) between the original resource and the outcome of this second annotation (see the right column of Table 2).", "We obtained a micro-average = 0 .", "94 across languages, a result which supports the task's objectivity.", "Nevertheless, it is worth noting that few sentences have been carefully modified after this analysis, as it has shown that several misclassifications were due to the use of an ambiguous synonym.", "Thus, it is likely that the final resource has higher agreement values.", "This section introduces the models and procedures to obtain vector representations followed by the evaluation method.", "We have used static embeddings and CWEs based on Transformers, comparing different ways of obtaining the vector representations in both cases:", "Static embeddings: We have used skip-gram fastText models of 300 dimensions (Bojanowski et al., 2017).", "6 For English and Spanish, we have used the official vectors trained on Wikipedia.", "For Portuguese, we have used the model provided by Hartmann et al. 
"Contextualized embeddings: We have evaluated multilingual and monolingual models.", "Multilingual models: We have used the official multilingual BERT (mBERT, cased, 12 layers) (Devlin et al., 2019), XLM-RoBERTa (Base, 12 layers) (Conneau et al., 2020), and DistilBERT (DistilmBERT, 6 layers) (Sanh et al., 2019).", "Monolingual models: For English, we have used the official BERT-Base model (uncased).", "For Portuguese and Spanish, BERTimbau (Souza et al., 2020) and BETO (Cañete et al., 2020) (both cased).", "For Galician, we trained two BERT models (with 6 and 12 layers; see Appendix C).", "Static models: These are the methods used to obtain the representations from the static models: Word vector (WV): Embedding of the target", "word (homonymous senses with the same word form will have the same representation).", "In preliminary experiments we also used word2vec and GloVe models, obtaining slightly lower results than fastText.", "These Portuguese and Galician models obtained better results (0.06 on average) than the official ones.", "To make a fair comparison we prioritized base models (12 layers), but we also report results for large (24 layers) and 6-layer models when available.", "Syntax (Syn): Up to four different representations obtained by adding the vector of the target word to those of its syntactic heads and dependents.", "This method is based on the assumption that the syntactic context of a word characterizes its meaning, providing relevant information for its contextualized representation (e.g., in 'He swims to the bank', bank may be disambiguated by combining its vector with that of swim).", "Appendix D describes how heads and dependents are selected.", "For the contextualized models, we use: Sentence vector (Sent): Vector of the sentence built by averaging all words (except for the special tokens [CLS] and [SEP]), each of them represented by the standard approach of concatenating the last 4 layers (Devlin et al., 2019).", "Word vector (WV): Embedding of the target word, combining the vectors of the last 4 layers.", "We have evaluated two operations: vector concatenation (Cat) and addition (Sum).", "Word vector across layers (Lay): Vector of the target word on each layer.", "This method allows us to explore the contextualization effects at each layer.", "Vectors of words split into several sub-words are obtained by averaging the embeddings of their components.", "Similarly, MWE vectors are the average of the individual vectors of their components, both for static and for contextualized embeddings.", "Given a sentence triple where two of the target words (a and b) have the same sense and the third (c) a different one, we evaluate a model as follows (in a similar way as other studies (Kintsch, 2001; Lake and Murphy, 2020)): First, we obtain", "three cosine similarities between the vector representations: $sim_1 = \cos(a, b)$; $sim_2 = \cos(a, c)$; $sim_3 = \cos(b, c)$.", "Then, an instance is labeled as correct if those words conveying the same sense (a and b) are closer together than the third one (c).", "In other words, $sim_1 > sim_2$ and $sim_1 > sim_3$; otherwise, the instance is considered incorrect.",
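For reference, this labeling criterion is only a few lines of numpy (the vector extraction for a, b, and c is assumed to happen upstream with any of the methods above):

```python
import numpy as np

def cos(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def label_triple(a, b, c):
    """a and b convey the same sense; c conveys a different one."""
    sim1, sim2, sim3 = cos(a, b), cos(a, c), cos(b, c)
    return "correct" if (sim1 > sim2 and sim1 > sim3) else "incorrect"
```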
"Homonymy (Exp1): The same word form in three different contexts, two of them with the same sense (e.g., coach in sentences [1:1, 1:4, 2:1] 10 in Table 1).", "This test evaluates if a model correctly captures the sense of a unique word form in context.", "Hypothesis: Static embeddings will fail as they produce the same vector in the three cases, while models that adequately incorporate contextual cues should correctly identify the outlier sense.", "Synonyms of homonymous senses (Exp2): A word is compared with its synonym and with the synonym of its homonym, all three in different contexts (e.g., coach = bus (cid:54) = trainer in [1:1, 1:5, 2:2]).", "This test assesses if there is a bias towards one of the homonymous senses, e.g., the most frequent one (MacGregor et al., 2015).", "Hypothesis: Models with this type of bias may fail, so as in Exp1, they should also appropriately incorporate contextual information to represent these examples.", "10 First and second digits refer to the sense and sentence ids.", "different contexts (e.g., coach = bus (cid:54) = coach in [1:1, 1:5, 2:1]).", "Here we evaluate whether a model adequately represents both", "(i) synonymy in context two word forms with the same sense in different contexts and", "(ii) homonymy one of the former word forms having a different meaning.", "Hypothesis: Models relying primarily on lexical knowledge are likely to represent homonyms closer than synonyms (giving rise to an incorrect output), but those integrating contextual information will be able to model the three representations correctly.", "Synonymy (Exp4): Two synonyms vs. a different word (and sense), all of them in the same context (e.g., [2:1, 2:2, 2:3]).", "It assesses to what extent the context affects word representations of different word forms.", "Hypothesis: Static embeddings may pass this test as they tend to represent type-level synonyms closely in the vector space.", "Highly contextualized models might be puzzled as different meanings (from different words) occur in the same context, so that the models should have an adequate trade-off between lexical and contextual knowledge.", "Table 3 displays the number of sentence triples for each experiment as well as the total number of triples of the dataset.", "To focus on the semantic knowledge encoded in the vectors rather than on the morphosyntactic information, we have evaluated only those triples in which the target words of the three sentences have the same POS-tag (num-bers on the right).", "11 Besides, we have also carried out an evaluation on the full dataset.", "Table 4 contains a summary of the results of each experiment in the four languages.", "For reasons of clarity, we include only fastText embeddings and the best contextualized model (BERT).", "Results for all models and languages can be seen in Appendix A. 
BERT models have the best performance overall, both on the full dataset and on the selected experiments, except for Exp4 (in which the three sentences share the context), where the static models outperform the contextualized representations.", "In Exp1 and Exp2, where the context plays a crucial role, fastText models correctly labeled between 50% and 60% of the examples.", "Results vary depending on the language and vector type, with better results for Sent and Syn.", "On average, BERT-base models achieved 0.24 higher results (Add) when tested on all the instances (including different POS-tags) of the four experiments.", "For BERT, the best accuracy surpasses 0.98 (Exp1 in English), with an average across languages of 0.78; here, word vectors outperform sentence representations.", "These high results, and the fact that WVs work better in general than Sent, may be indicators that Transformers are properly incorporating contextual knowledge.", "Solving Exp3 requires dealing with both contextual effects and homonymy (as two words have the same form but different meanings), so static embeddings hardly achieve 0.5 accuracy (Sent, with lower results for both WV and Syn).", "BERT's performance is also lower than in Exp1 and Exp2, with an average of 0.67 and Sent beating WVs in most cases, indicating that the word vectors are not adequately representing the target senses.", "Finally, fastText obtains better results than BERT on Exp4 (where the three instances have the same context), reaching 0.81 in Spanish with an average across languages of 0.64 (always with WVs).", "BERT's best performance is 0.41 (in two languages) with an average of 0.42, suggesting that very similar contexts may confound the model.", "To shed light on the contextualization process of Transformers, we have analyzed their performance across layers.", "Figure 1 shows the accuracy curves (vs. the macro-average Sent and WV vectors of the contextualized and static embeddings) for five Transformer models on Galician, the language with the largest dataset (see Appendix A for equivalent figures for the other languages).", "In Exp1 to Exp3 the best accuracies are obtained at upper layers, showing that word vectors appropriately incorporate contextual information.", "This is especially true for the monolingual BERT versions, as the multilingual models' representations show higher variation.", "Except for Galician, Exp1 has better results than Exp2, as the former primarily deals with context while the latter combines contextualization with lexical effects.", "In Exp3 the curves take longer to rise, as initial layers rely more on lexical than on contextual information.", "Furthermore, except for English (which reaches about 0.8), the performance is low even in the best hidden layers (about 0.4).", "In Exp4 (with the same context in the three sentences), contextualized models cannot correctly represent the word senses, being surpassed in most cases by the static embeddings.", "Finally, we have observed how Transformer representations vary across the vector space.", "Figure 2 shows the UMAP visualizations (McInnes et al., 2018) of the target word representations.", "In Figure 2a, these representations become progressively disambiguated across layers, producing a suitable representation since layer 7.", "However, Figure 2b shows how the model is not able to adequately represent match close to its", "synonym game, as the vectors seem to incorporate excessive information (or at least limited lexical knowledge) from the context.", "Additional visualizations in Galician can be found in Appendix B. 
In sum, the experiments performed in this study allow us to observe how different models generate contextual representations.", "In general, our results confirm previous findings which state that Transformer models increasingly incorporate contextual information across layers.", "However, we have also found that this process may deteriorate the representation of individual words by incorporating excessive contextual information, as suggested by Haber and Poesio (2020).", "This paper has presented a systematic study of word meaning representation in context.", "Besides static word embeddings, we have assessed the ability of state-of-the-art monolingual and multilingual models based on the Transformer architecture to identify unambiguous cases of homonymy and synonymy.", "To do so, we have presented a new dataset in four linguistic varieties that allows for controlled evaluations of vector representations.", "The results of our study show that, in most cases, the best contextualized models adequately identify homonyms conveying different senses in various contexts.", "However, as they strongly rely on the surrounding contexts, they misrepresent words having different senses in similar sentences.", "In future work, we plan to enrich the dataset with multiword expressions of different degrees of idiomaticity and to include less transparent but still unambiguous contexts of homonymy.", "Finally, we also plan to systematically explore how multilingual models represent homonymy and synonymy in cross-lingual scenarios.", "We would like to thank the anonymous reviewers for their valuable comments, and NVIDIA Corporation for the donation of a Titan Xp GPU.", "This research is funded by a Ramón y Cajal grant (RYC2019-028473-I) and by the Galician Government (ERDF 2014-2020: Call ED431G 2019/04)." ]
[ "method", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "objective", "result", "abstain", "abstain", "abstain", "objective", "abstain", "method", "method", "abstain", "abstain", "objective", "abstain", "objective", "result", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "method", "method", "objective", "result", "abstain", "abstain", "objective", "other", "other" ]
[ "Abstract Contrastive learning has achieved impressive success in generation tasks to militate the exposure bias problem and discriminatively exploit the different quality of references.", "Existing works mostly focus on contrastive learning on the instance-level without discriminating the contribution of each word, while keywords are the gist of the text and dominant the constrained mapping relationships.", "Hence, in this work, we propose a hierarchical contrastive learning mechanism, which can unify hybrid granularities semantic meaning in the input text.", "Concretely, we first propose a keyword graph via contrastive correlations of positive-negative pairs to iteratively polish the keyword representations.", "Then, we construct intra-contrasts within instance-level and keyword-level, where we assume words are sampled nodes from a sentence distribution.", "Finally, to bridge the gap between independent contrast levels and tackle the common contrast vanishing problem, we propose an inter-contrast mechanism that measures the discrepancy between contrastive keyword nodes respectively to the instance distribution.", "Experiments demonstrate that our model outperforms competitive baselines on paraphrasing, dialogue generation, and storytelling tasks.", "Generation tasks such as storytelling, paraphrasing, and dialogue generation aim at learning a certain correlation between text pairs that maps an arbitrary-length input to another arbitrary-length output.", "Traditional methods are mostly trained with teacher forcing and lead to an exposure bias problem (Schmidt, 2019).", "Incorporating the generation method with contrastive learning achieved impressive performance on tackling such issues, which takes an extra consideration of synthetic negative samples contrastively (Lee et al., 2021).", "Existing contrastive mechanisms are mainly focused on the instance level (Lee et al., 2021; Cai et al., 2020).", "However, word-level information is also of great importance.", "Take the case shown in the upper part of Figure 1 for example, the keyword covers the gist of the input text and determines the embedding space of the text.", "The text representation will be significantly affected if adding a slight perturbation on the keyword, i.e., changing cosmology to astrophysics.", "In addition, as shown on the bottom part, under some circumstances, it is too easy for the model to do the classification since the semantic gap between contrastive pairs is huge.", "Thus, the model fails to distinguish the actual discrepancy, which causes a contrast vanishing 4432 problem at both instance-level and keyword-level.", "Based on the above motivation, in this paper, we propose a hierarchical contrastive learning method built on top of the classic CVAE structure.", "We choose CVAE due to its ability in modeling global properties such as syntactic, semantic, and discourse coherence (Li et al., 2015; Yu et al., 2020).", "We first learn different granularity representations through two independent contrast, i.e., instance-level and keyword-level.", "Specifically, we use the universal and classic TextRank (Mihalcea and Tarau, 2004) method to extract keywords from each text, which contain the most important information and need to be highlighted.", "On the instance-level, we treat the keyword in the input text as an additional condition for a better prior semantic distribution.", "Then, we utilize KullbackLeibler divergence (Kullback and Leibler, 1951) to reduce the distance between prior distribution and positive 
posterior distribution, and to increase the distance to the negative posterior distribution.", "At the keyword level, we propose a keyword graph via contrastive correlations of positive-negative pairs to learn informative and accurate keyword representations.", "By treating the keyword in the output text as an anchor, the imposter keyword is produced from the neighboring nodes of the anchor keyword and forms the keyword-level contrast, where the similarity between the imposter keyword and the anchor keyword is lower than that of the positive keyword.", "To unify the individual intra-contrasts and tackle the contrast vanishing problem in independent contrastive granularities, we leverage an inter-contrast, the Mahalanobis contrast, to investigate the contrastive enhancement between the instance distribution and the keyword representation based on the Mahalanobis distance (De Maesschalck et al., 2000), a measure of the distance between a point and a distribution.", "Concretely, we ensure that the distance from the anchor instance distribution to the ground-truth keyword vector is smaller than that to the imposter keyword vector.", "The Mahalanobis contrast plays an intermediate role that joins the contrasts of different granularities by incorporating the distribution of the instance with the representation of its crucial part, and makes up a more comprehensive keyword-driven hierarchical contrastive mechanism, so as to ameliorate the generated results.", "We empirically show that our model outperforms CVAE and other baselines significantly on three generation tasks: paraphrasing, dialogue generation, and storytelling.", "Our contributions can be summarized as follows: To the best of our knowledge, we are the first to propose an inter-level contrastive learning method, which unifies instance-level and keyword-level contrasts in the CVAE framework.", "We propose three contrastive learning measurements: KL divergence for semantic distributions, cosine distance for points, and Mahalanobis distance for points with a distribution.", "We introduce a global keyword graph to obtain polished keyword representations and construct imposter keywords for contrastive learning.", "Contrastive learning is used to learn representations by teaching the model which data points are similar or not.", "Due to its excellent performance in self-supervised and semi-supervised learning, it has been widely used in natural language processing (NLP).", "Firstly, Mikolov et al. (2013) proposed to predict neighboring words from context with noise-contrastive estimation.", "Then, based on word representations, contrastive learning for sentences has been utilized to learn semantic representations.", "Lee et al. (2021) generated positive and negative examples by adding perturbations to the hidden states.", "Cai et al. (2020) augmented contrastive dialogue learning with group-wise dual sampling.", "Moreover, contrastive learning has also been utilized in caption generation (Mao et al., 2016), summarization (Liu and Liu, 2021) and machine translation (Yang et al., 2019).", "Our work differs from previous works in focusing on hierarchical contrastive learning with hybrid granularities.", "The Mahalanobis distance is a measure of the distance between a point and a distribution (De Maesschalck et al., 2000).", "The distance is zero if the point lies at the mean of the distribution.", "Recently, the Mahalanobis distance has been popularly applied to NLP tasks (Tran et al., 2019).", "Podolskiy et al. 
(2021) showed that while the Transformer is capable of constructing homogeneous representations of in-domain utterances, the Mahalanobis distance captures geometrical disparity from out-of-domain utterances.", "Further, Ren et al. (2021) considered that the raw density from deep generative models may fail at out-of-domain detection and proposed to fix this using a likelihood ratio between two generative models as a confidence score.", "The variational autoencoder (VAE) was proposed by Kingma and Welling (2013), and has been widely used in various tasks such as headline generation (Li et al., 2021), dialogue generation (Serban et al., 2017) and story generation (Yu et al., 2020).", "Based on VAE, a more advanced model, the Conditional VAE (CVAE), was proposed to generate diverse images conditioned on certain attributes, and was also applied to generate diverse outputs in NLP tasks (Zhao et al., 2017; Qiu et al., 2019).", "Existing works concentrate on generating diverse outputs, and we take one step further to utilize the prior and posterior latent distributions to compare positive and negative samples, which helps to learn more accurate semantic information.", "VAE: The variational autoencoder (VAE) is a typical encoder-decoder structural model with certain types of latent variables.", "Given an input x, VAE models the latent variable z through the prior distribution $p(z)$, and the observed data x is reconstructed by the generative distribution $p(x|z)$, which is the likelihood function that generates x conditioned on z.", "Since z is unknown, it should be estimated according to the given data x as $p(z|x)$.", "As the posterior density $p(z|x) = p(x|z)p(z)/p(x)$ is intractable, VAE introduces a recognition posterior distribution $q(z|x)$ approximating the true posterior $p(z|x)$.", "Thus, VAE is trained by optimizing the lower bound on the marginal likelihood of data x as: $\log p(x) \ge \mathbb{E}_{z \sim q(z|x)}[\log p(x|z)] - D_{KL}(q(z|x) \,\|\, p(z))$, (1) where $D_{KL}$ is the Kullback–Leibler divergence.", "CVAE: The conditional variational autoencoder (CVAE) is the supervised version of VAE with an additional output variable.", "Given a dataset $\{x_i, y_i\}_{i=1}^{N}$ consisting of N samples, CVAE is trained to maximize the conditional log-likelihood, and the variational lower bound of the model is written as follows: $\log p(y|x) \ge \mathbb{E}_{z \sim q(z|x,y)}[\log p(y|x,z)] - D_{KL}(q(z|x,y) \,\|\, p(z|x))$. (2)", "Assuming the latent variable obeys a Gaussian distribution, the first term on the right-hand side can be approximated by drawing samples $\{z_i\}_{i=1}^{N}$ from the recognition posterior distribution $q(z|x,y)$, where $z \sim \mathcal{N}(\mu, \sigma^2 I)$, and then the objective of the CVAE with a Gaussian distribution can be written as:", "$\mathcal{L}_{cvae}(x, y; \theta, \phi) = -\frac{1}{N}\sum_{i=1}^{N} \log p(y|x, z_i) + D_{KL}(q(z|x,y) \,\|\, p(z|x))$, (3)", "where $z_i = g(x, y, \epsilon_i)$, $\epsilon_i \sim \mathcal{N}(0, I)$.", "The distribution $q(z|x,y)$ is reparameterized with a differentiable function g, which enables the model to be trained via stochastic gradient descent.", "Inspired by Wu et al. (2019), we add the keyword u as an additional condition to the prior distribution to control the generation process, which turns the $p(z|x)$ in Equation 3 into $p(z|x,u)$.",
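A minimal PyTorch sketch of this objective, assuming prior and posterior networks that output the mean and log-variance of diagonal Gaussians; all names and shapes are illustrative, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # Closed-form KL(q || p) between two diagonal Gaussians.
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0,
        dim=-1,
    )

def sample_z(mu, logvar):
    # Reparameterization: z = mu + sigma * eps, eps ~ N(0, I).
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

def cvae_loss(logits, target, mu_q, logvar_q, mu_p, logvar_p):
    """Eq. (3): reconstruction NLL plus KL(q(z|x,y) || p(z|x,u)).

    logits: decoder outputs of shape (batch, seq_len, vocab);
    target: gold token ids of shape (batch, seq_len).
    """
    rec = F.cross_entropy(logits.transpose(1, 2), target, reduction="mean")
    kl = gaussian_kl(mu_q, logvar_q, mu_p, logvar_p).mean()
    return rec + kl
```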
"In this section, we introduce our hierarchical contrastive learning method, which is comprised of three parts: instance-level contrast based on KL divergence (Sec. 3.2.1), keyword-level contrast based on a keyword graph (Sec. 3.2.2), and the inter-contrast, i.e., the Mahalanobis contrast (Sec. 3.2.3).", "Figure 2: The architecture of hierarchical contrastive learning, which consists of three parts: (1) keyword-level contrast from the keyword graph; (2) instance-level contrast based on KL divergence for semantic distributions; and (3) Mahalanobis contrast between the instance level and the keyword level.", "To tackle the exposure bias problem and discriminatively exploit the different quality of references, instance-level contrastive learning is introduced to learn the discrepancies of targets.", "Specifically, in addition to the observed input data x and the positive output $y^+$, a negative output $y^-$ is added to construct a contrastive pair $\{(x, y^+), (x, y^-)\}$.", "In this case, the prior distribution $p(z|x)$ is learned from a prior network, which is denoted as $f_{\theta}(x)$.", "The approximate posteriors $q(z|x,y^+)$ and $q(z|x,y^-)$ are learned from a posterior network and represented as $f_{\phi}(x,y^+)$ and $f_{\phi}(x,y^-)$, respectively.", "The objective here is to make the distance between the prior distribution and the positive posterior distribution smaller than that to the negative posterior distribution.", "Thus, the instance-level contrastive loss function can be written as: $\mathcal{L}_{ins} = -\mathbb{E}_f\big[\log\big(1 - e^{h(f_{\phi}(x,y^+), f_{\theta}(x))/\tau} / \sum_{y \in Y} e^{h(f_{\phi}(x,y), f_{\theta}(x))/\tau}\big)\big]$, where $y \in Y$ can be the positive sample $y^+$ or the negative sample $y^-$, and $\tau$ is a temperature parameter to control the push and pull force.", "The function $h(\cdot)$ denotes the distance between elements, which is set as the Kullback–Leibler divergence (Kullback and Leibler, 1951) in the instance-level contrast, $D_{KL}(f_{\phi}(x,y) \,\|\, f_{\theta}(x))$, to measure the difference between two distributions.",
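Under the reconstruction above, this loss can be sketched as follows, assuming one negative sample per instance; dist_pos and dist_neg would be the KL divergences between the posterior distributions and the prior, and the function name is ours.

```python
import torch

def distance_contrast(dist_pos, dist_neg, tau=1.0):
    """Distance-based contrast used for L_ins (and, later, L_ma).

    dist_pos: h(f(x, y+), f(x)), shape (batch,); dist_neg: h(f(x, y-), f(x)).
    The softmax share of the positive distance should stay small, so we
    maximize log(1 - share), i.e., minimize -log1p(-share).
    """
    scores = torch.stack([dist_pos, dist_neg], dim=-1) / tau
    share_pos = torch.softmax(scores, dim=-1)[..., 0]
    return -torch.log1p(-share_pos).mean()
```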
"Since the instance-level contrast focuses on learning high-level information and fails to discriminate the contribution of each word, we incorporate it with a keyword-level contrast to pay more attention to the specific keywords.", "Keyword Graph: Given an input-output text pair (x, y), keywords $k_x$, $k_y$ can be extracted from x and y, respectively.", "For an input text $x_i$ with keyword $k_{x,i}$, input texts that contain the same keyword are gathered into a cluster $C_i = \{x_j\}_{j=1}^{n}$, $k_{x,j} \in x_j$, where n is the number of texts in $C_i$.", "Each text $x_j \in C_i$ has a positive-negative output text pair $\{(y^+_j, y^-_j)\}$ containing a positive output keyword $k^+_{y,j}$ and a negative one $k^-_{y,j}$, respectively.", "Thus, spreading to the entire cluster $C_i$, for the output text $y_i$, there exist positive relations $r^+_{i,j}$ between its keyword $k_{y,i}$ and each of the surrounding positive keywords $\{k^+_{y,j}\}_{j=1}^{n}$.", "Likewise, negative relations $r^-_{i,j}$ correlate the output keyword $k_{y,i}$ and the surrounding negative ones $\{k^-_{y,j}\}_{j=1}^{n}$.", "In this way, a keyword graph $G_k$ is constructed.", "Each node representation $h^0_i$ is initialized as the average BERT embedding (Devlin et al., 2018) of the texts in the cluster $C_i$ with the same corresponding keyword $k_{x,i}$.", "Then, the relation edge $r^0_{ij}$ that connects node i and node j is learned via a feed-forward layer: $r^0_{ij} = \mathrm{FFN}([h^0_i; h^0_j])$.", "Then, the representations of nodes and relation edges are iteratively updated with their connected nodes via the graph attention (GAT) layer and the feed-forward (FFN) layer.", "In the t-th iteration, we first update each edge representation by paying attention to the connected nodes, denoted as: $\beta^t = \mathrm{softmax}\big((r^t_{ij} W_p)(h^t W_h)^\top / \sqrt{d}\big)$, (4) $p^t_{ij} = \beta^t_i h^t_i + \beta^t_j h^t_j$, (5) $r^{t+1}_{ij} = \mathrm{FFN}(r^t_{ij} + p^t_{ij})$, (6) where $h^t$ can be $h^t_i$ or $h^t_j$.", "Then, based on the obtained edge representation $r^{t+1}_{ij}$, we update the node representations considering both the related nodes and relation edges via the graph attention layer, $\mathrm{GAT}(h^t_i, h^t_j, r^t_{ij})$, which is designed as: $e^t_{ij} = (h^t_i W_q)(h^t_j W_k + r^{t+1}_{ij} W_r)^\top / \sqrt{d}$, (7) $\alpha^t_{ij} = \exp(e^t_{ij}) / \sum_{l \in N_i} \exp(e^t_{il})$, (8) $u^t_i = \sum_{j \in N_i} \alpha^t_{ij} (h^t_j W_v + r^{t+1}_{ij})$, (9) where $W_q$, $W_k$, $W_r$ and $W_v$ are all learnable parameters, and $\alpha^t_{ij}$ is the attention weight between $h^t_i$ and $h^t_j$.", "Besides, to avoid gradient vanishing after several iterations, a residual connection is added to the output $u^t_i$, and the updated node representation $h^{t+1}_i$ is obtained.", "In this way, the new representation of each keyword node consists of the relation dependency information from its neighbor nodes $N_i$.", "We take the node representations from the last iteration as the final keyword representations, denoted as u for brevity.", "Keyword-level Contrast: The keyword-level contrastive learning arises from input keywords against positive output keywords and negative imposter keywords.", "The input keyword $u_{in}$ is extracted from the input text as an anchor, and the output keyword $u_{out}$ is extracted from the ground-truth output text.", "The imposter keyword is calculated from the negative neighbours of the output keyword $u_{out}$, written as $u_{imp} = \sum_i W_i u_i$, where $u_i$ is the representation of a keyword node obtained by the keyword graph learning procedure described above.", "In this way, with the help of the neighbour nodes in the graph, we can obtain a more indistinguishable and difficult negative sample.", "The loss of the keyword-level contrastive learning can thus be written as: $\mathcal{L}_{keyword} = -\mathbb{E}\big[\log\big(e^{h(u_{in}, u_{out})/\tau} / \sum_{u \in U} e^{h(u_{in}, u)/\tau}\big)\big]$, (10) where $u \in U$ denotes the positive output keyword $u_{out}$ or the imposter keyword $u_{imp}$.", "In the keyword-level contrast, $h(\cdot)$ utilizes cosine similarity to calculate the distance between points.",
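To make the iterative graph update of Eqs. (4)-(9) above concrete, here is a compact, non-batched PyTorch sketch; the dimensions and the FFN shape are illustrative, and the residual connection follows the description in the text.

```python
import torch
import torch.nn as nn

class EdgeNodeUpdate(nn.Module):
    """One iteration of the relation-aware graph attention sketched above."""
    def __init__(self, d):
        super().__init__()
        self.d = d
        self.W_p, self.W_h = nn.Linear(d, d, bias=False), nn.Linear(d, d, bias=False)
        self.W_q, self.W_k = nn.Linear(d, d, bias=False), nn.Linear(d, d, bias=False)
        self.W_r, self.W_v = nn.Linear(d, d, bias=False), nn.Linear(d, d, bias=False)
        self.edge_ffn = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, h, r, neighbors):
        # h: (n, d) node states; r: (n, n, d) edge states;
        # neighbors: dict {i: list of j} adjacency.
        n, d = h.shape
        r_new = r.clone()
        for i in range(n):
            for j in neighbors[i]:
                # Eqs. (4)-(6): the edge attends to its two endpoint nodes.
                pair = torch.stack([h[i], h[j]])                      # (2, d)
                beta = torch.softmax(
                    self.W_p(r[i, j]) @ self.W_h(pair).T / d ** 0.5, dim=-1)
                p_ij = beta[0] * h[i] + beta[1] * h[j]
                r_new[i, j] = self.edge_ffn(r[i, j] + p_ij)
        h_new = h.clone()
        for i in range(n):
            js = neighbors[i]
            # Eqs. (7)-(9): relation-aware attention over the neighbors.
            e = torch.stack([
                self.W_q(h[i]) @ (self.W_k(h[j]) + self.W_r(r_new[i, j]))
                for j in js]) / d ** 0.5
            alpha = torch.softmax(e, dim=0)
            u_i = sum(a * (self.W_v(h[j]) + r_new[i, j])
                      for a, j in zip(alpha, js))
            h_new[i] = h[i] + u_i  # residual connection on the output
        return h_new, r_new
```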
"Note that there exists a space gap between the instance-level contrast and the keyword-level contrast, which disturbs the completeness of this hierarchical contrastive architecture.", "Besides, the contrastive values vanish when the distance metric can hardly measure the actual discrepancy between positives and negatives merely within the instance distributions or within the keyword representations.", "To mitigate such problems, we design a Mahalanobis contrastive mechanism to correlate the instance distribution and the keyword representation, where the objective is to minimize the margin between the output keyword $u_{out}$ and the posterior semantic distribution $q(z|x,y)$, i.e., $f_{\phi}(x,y)$, and to maximize the margin between the imposter keyword $u_{imp}$ and the posterior distribution $f_{\phi}(x,y)$: $\mathcal{L}_{ma} = -\mathbb{E}_f\big[\log\big(1 - e^{h(f_{\phi}(x,y), u_{out})/\tau} / \sum_{u \in U} e^{h(f_{\phi}(x,y), u)/\tau}\big)\big]$, (11) where $u \in U$ can be the positive output keyword $u_{out}$ or the negative imposter keyword $u_{imp}$.", "In the Mahalanobis contrast, $h(\cdot)$ utilizes the Mahalanobis distance (De Maesschalck et al., 2000) to measure the similarity from a keyword point to the instance distribution.", "In the univariate Gaussian case, $z \sim p(z|x,y) = \mathcal{N}(\mu, \sigma^2)$, and then $h(f_{\phi}(x,y), u) \triangleq D_{MA}(p(z|x,y) \,\|\, u) = (u - \mu)^\top (\sigma^2 I)^{-1} (u - \mu)$.", "Finally, we equip the CVAE model with the proposed hierarchical contrastive learning framework to unify hybrid granularities by adding $\mathcal{L}_{ins}$, $\mathcal{L}_{keyword}$ and $\mathcal{L}_{ma}$ to the reconstruction loss of Equation 3.",
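A small sketch of this inter-contrast under the diagonal-Gaussian assumption; as with the instance-level loss, it reuses the (1 - softmax share) form reconstructed above, and the function names are ours.

```python
import torch

def mahalanobis_distance(u, mu, sigma2):
    """D_MA between a keyword vector u and a diagonal Gaussian N(mu, sigma2*I)."""
    return torch.sum((u - mu) ** 2 / sigma2, dim=-1)

def mahalanobis_contrast(mu, sigma2, u_out, u_imp, tau=1.0):
    # Pull the ground-truth keyword toward the posterior distribution and
    # push the imposter keyword away, via the (1 - softmax share) form.
    d_pos = mahalanobis_distance(u_out, mu, sigma2)
    d_neg = mahalanobis_distance(u_imp, mu, sigma2)
    share_pos = torch.softmax(torch.stack([d_pos, d_neg], -1) / tau, -1)[..., 0]
    return -torch.log1p(-share_pos).mean()
```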
"We conduct experiments on three public datasets (QQP, Douban, and RocStories) for the paraphrasing, dialogue generation, and storytelling tasks, respectively.", "The details of the datasets are as follows: Dialogue (Douban): Douban (Cai et al., 2020) consists of Chinese daily conversations between pairs of speakers, collected from a popular social network website, the Douban group.", "The dataset contains 218,039/10,000/10,000 context-response pairs for training/validation/test, with an average of 3.94 turns per context and 38.32 characters per utterance.", "We concatenate historical dialogues to turn them into a single-turn dialogue training corpus.", "Paraphrasing (QQP): QQP (Iyer et al., 2017; Wang et al., 2019) is a dataset published by the community question-answering website Quora on whether a pair of questions is semantically consistent.", "To adapt it to the contrastive learning task, we only keep question pairs that have positive and negative rewritings for the same input.", "Thus, there remain 44,949 samples in the dataset, which are split into training/validation/test sets of 40,441/2,254/2,254 samples.", "Storytelling (RocStories): RocStories consists of 98,163 high-quality hand-crafted stories, which capture causal and temporal commonsense relations of daily events (Mostafazadeh et al., 2016).", "Each story paragraph contains 5 sentences with an average of 43 words.", "Following the previous work of Yu et al. (2021), we split the dataset 8:1:1 for training, validation, and test.", "For the above three datasets, in order to construct the different levels of contrastive learning, we performed the same keyword-extraction preprocessing.", "We utilize the TextRank model (Mihalcea and Tarau, 2004) to extract keywords from each input and output sample, respectively.", "Besides, the vocabulary size of all datasets follows the BERT (Devlin et al., 2018) setting.", "Our experiments are implemented in TensorFlow (Abadi et al., 2016) on an NVIDIA Tesla P100 GPU.", "For our model and all baselines, we follow the same settings as described below.", "We pad or cut the input to 100, 20, and 100 words for dialogue generation, paraphrasing, and storytelling, respectively.", "The truncation length is decided based on the observation that there is no significant improvement when increasing the input length.", "The minimum decoding step is 5, and the maximum step is 20 for all tasks.", "Experiments were performed with a batch size of 256, and we use the Adam optimizer (Kingma and Ba, 2015) as our optimization algorithm.", "During the test stage, the beam-search size is set to 4 for all methods, and the checkpoint with the smallest validation loss is chosen.", "Note that for better performance, our model is built based on BERT, and the decoding process is the same as in the Transformer (Vaswani et al., 2017).", "Finally, due to the limitation of time and memory, small settings are used for the pre-training baselines.", "We compare our method against several traditional generation models, pretrained-based generation models, and contrastive learning models.", "Traditional generation models: (1) CVAE (Zhao et al., 2017) generates sentences based on latent variables sampled from the latent semantic distribution.", "(2) Seq2Seq (Sutskever et al., 2014) is a sequence-to-sequence framework combined with an attention mechanism and a pointer network.", "(3) Transformer (Vaswani et al., 2017) is an abstractive method based solely on attention mechanisms.", "Pretrained-based generation models: (4) Seq2Seq-DU (Feng et al., 2021) is concerned with dialogue state tracking in a task-oriented dialogue system.", "(5) DialoGPT (Zhang et al., 2020) proposes a large, tunable neural conversational response generation model trained on more conversation-like exchanges.", "(6) BERT-GEN (Devlin et al., 2018) augments Seq2Seq with BERT as the encoder.", "(7) T5 (Raffel et al., 2020) introduces a unified framework that converts all text-based language problems into a text-to-text format.", "Contrastive learning methods: (8) Group-wise (Cai et al., 2020) augments contrastive dialogue learning with group-wise dual sampling.", "(9) T5-CLAPS (Lee et al., 2021) generates negative and positive samples for contrastive learning by adding small and large perturbations, respectively.", "To evaluate the performance of our model against the baselines, we adopt the following metrics, widely used in existing studies.", "BLEU: We utilize the BLEU score (Papineni et al., 2002) to measure word overlap between the generated text and the ground truth.", "Specifically, following the conventional setting of Gu et al. (2019), we adopt BLEU-1 to BLEU-4 scores under the smoothing techniques (smoothing 7).", "Embedding: To evaluate our model more comprehensively, we also capture the semantic matching degrees between the bag-of-words (BOW) embeddings of the generated text and the reference (Gu et al., 2019).", "In particular, we adopt three metrics: 1) Extrema, the cosine similarity between the largest extreme values among the word embeddings in the two texts; 2) Average, the cosine similarity between the averaged word embeddings of the generated text and the reference; 3) Greedy, greedily matching words in the two texts based on cosine similarities.",
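As a rough illustration of this evaluation setup, here is a sketch of BLEU with NLTK's smoothing method 7 and a one-direction greedy embedding match; the helper names are ours and the paper's exact evaluation scripts are not reproduced.

```python
import numpy as np
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth7 = SmoothingFunction().method7

def bleu_scores(reference, hypothesis):
    # BLEU-1..4 (token lists) with smoothing technique 7, as described above.
    weights = [(1, 0, 0, 0), (0.5, 0.5, 0, 0),
               (1 / 3, 1 / 3, 1 / 3, 0), (0.25, 0.25, 0.25, 0.25)]
    return [sentence_bleu([reference], hypothesis, w, smooth7) for w in weights]

def greedy_match(ref_vecs, hyp_vecs):
    # Greedy metric: each hypothesis word is matched to its most similar
    # reference word; the max similarities are averaged (one direction shown).
    sims = hyp_vecs @ ref_vecs.T / (
        np.linalg.norm(hyp_vecs, axis=1, keepdims=True)
        * np.linalg.norm(ref_vecs, axis=1))
    return float(sims.max(axis=1).mean())
```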
"Automatic Evaluation: The experimental results of all methods are summarized as follows.", "The upper part of the results table shows the effects of traditional generation methods such as Seq2Seq and Transformer, and the lower part shows the latest pretrained-based methods including DialoGPT and T5.", "Overall, pretrained-based methods generally outperform traditional methods, and this also proves the effectiveness of pre-trained language models on the generation tasks.", "Secondly, we can find that the performance is significantly improved after adding contrastive learning.", "Finally, our method outperforms T5-CLAPS by 2.7% and 3.6% on QQP, by 20.3% and 24.9% on Douban, and by 3.9% and 6.3% on RocStories in terms of BLEU-1 and BLEU-2, respectively, which proves the superiority of our model.", "Human Evaluation: We also assessed system performance by eliciting human judgments on 100 randomly selected test instances from the QQP dataset.", "Three annotators are asked to rate paraphrasing questions generated by T5-CLAPS, DialoGPT, Seq2Seq-DU, and our model according to Fluency (Flu), Meaningfulness (Mean), and Differential (Diff).", "The rating score ranges from 1 to 3, with 3 being the best.", "Table 3 lists the average scores of each model, showing that our model outperforms the other baselines on all metrics, which indicates that our model successfully generates more readable paraphrasing questions.", "The kappa statistics are 0.53, 0.61, and 0.56 for fluency, meaningfulness, and differential, respectively, which indicates moderate agreement between annotators.", "We conduct ablation tests to assess the importance of the keyword graph architecture (w/o graph), the keyword (w/o keyword), and the Mahalanobis contrast (w/o MA contrast); the results are shown in Table 2.", "Concretely, after removing the keywords (w/o keyword), i.e., using only the instance-level contrast, the performance of our model drops greatly, by about 10.4%, which illustrates the desirability of considering the contributions of words in a sentence.", "On this basis, adding keyword contrastive learning while removing the keyword graph improves the model, but it is still 2.1% lower than our full model.", "This shows that keywords are indeed conducive to capturing important information, and it also illustrates the significance of the keyword graph.", "Finally, the experiment removing the Mahalanobis contrastive loss indicates that granularity-independent contrasts alone are not sufficient, and that the Mahalanobis contrast plays a critical intermediate role.", "To study the hierarchical contrastive learning, we visualize the vectors of the keyword, the input text, and the positive and negative output texts on randomly sampled cases from the QQP dataset, as shown in Figure 3.", "For visualization purposes, we reduce the dimension of the latent vectors with t-SNE (van der Maaten and Hinton, 2008).",
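A minimal sketch of such a visualization with scikit-learn's t-SNE; the vectors here are placeholders, whereas in practice they would be the model's latent vectors for the sampled cases.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Rows: keyword, input, positive outputs, negative outputs (placeholder data).
vectors = np.random.randn(8, 128)
labels = ["keyword", "input", "pos", "pos", "pos", "neg", "neg", "neg"]

points = TSNE(n_components=2, perplexity=3, init="pca").fit_transform(vectors)
for (x, y), name in zip(points, labels):
    plt.scatter(x, y)
    plt.annotate(name, (x, y))
plt.savefig("tsne_case.png")
```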
"It can be observed that the input sentence representation is located close to the keyword, which shows that the keyword, as the most important information in the sentence, determines the semantic distribution.", "Moreover, in contrastive learning, it can be seen that after training, the position of the input sentence is close to the positive samples and far away from the negative samples.", "This suggests that contrastive learning can correct the semantic distribution.", "We finally investigate the influence of sampling different keywords.", "As shown in Table 4, for an input question, we provide keywords extracted by TextRank and randomly-selected keywords as the condition to control the semantic distribution, and examine the quality of the generated text.", "As the most important information unit, different keywords lead to different semantic distributions and will result in different generated texts.", "The more properly the keywords are selected, the more accurately the sentences will be generated.", "When utilizing the keywords extracted by TextRank as the condition, the information 'belly fat' is focused on during the generation of paraphrasing questions, and the generated sentences are more accurate.", "On the contrary, after adding the randomly-selected keyword 'disposable', the generated question emphasizes 'one-off exercise', which brings in incorrect information.", "We also compare our model with several baselines in Table 4. Most baselines can generate fluent questions in this case.", "However, they focus on 'lose weight' and miss the significant information 'belly fat'.", "Based on the above analysis, we can observe that keywords can emphasize and protect the highlighted information in sentences, and affect the semantic distribution as a condition.", "In this paper, we propose a hierarchical contrastive learning mechanism, which consists of intra-contrasts within the instance level and keyword level and an inter-contrast with the Mahalanobis contrast.", "The experimental results yield significant outperformance over baselines when applied in the CVAE framework.", "In the future, we aim to extend the contrastive learning mechanism to different basic models, and will explore contrastive learning methods based on external knowledge.", "We would like to thank the anonymous reviewers for their constructive comments.", "This work was supported by the National Key Research and Development Program of China (No. 2020AAA0106600), the National Natural Science Foundation of China (NSFC Grant No. 62122089, No. 61832017 & No. 61876196), and Beijing Outstanding Young Scientist Program No.", "BJJWZYJH012019100020098.", "This work was also supported by Alibaba Group through the Alibaba Research Intern Program.", "In this paper, we propose an inter-level contrastive learning method, which unifies instance-level and keyword-level contrasts in the CVAE framework.", "The positive impact lies in that it can help improve the capability of generation models on paraphrasing, dialogue generation, and storytelling tasks.", "The negative impact may be that the generation process of the system is not fully controllable, so it is possible to generate inaccurate or unreasonable content in some extreme cases.", "Hence, extra processing steps might be needed if this method were to be used in scenarios where high accuracy is required." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "objective", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain" ]
[ "We propose a new automatic evaluation metric for machine translation.", "Our proposed metric is obtained by adjusting the Earth Mover's Distance (EMD) to the evaluation task.", "The EMD measure is used to obtain the distance between two probability distributions consisting of some signatures having a feature and a weight.", "We use word embeddings, sentence-level tf (cid:1) idf , and cosine similarity between two word embeddings, respectively, as the features, weight, and the distance between two features.", "Results show that our proposed metric can evaluate machine translation based on word meaning.", "Moreover, for distance, cosine similarity and word position information are used to address word-order differences.", "We designate this metric as W ord E mbedding-based automatic MT evaluation using W ord P osition I nformation (WEWPI).", "A meta-evaluation using WMT16 metrics shared task set indicates that our WE WPI achieves the highest correlation with human judgment among several representative metrics.", "Recent advances in neural machine translation (NMT) (Sutskever et al., 2014; Luong et al., 2015) are remarkable.", "Results based on human evaluation have demonstrated that NMT outperforms statistical machine translations significantly (Chiang, 2005; Tufis and Ceausu, 2009).", "The NMT achieved especially high performance in terms of fluency.", "However, it tends to generate more omission errors than statistical machine translations generate.", "Unfortunately, it is difficult for automatic evaluation metrics to evaluate outputs with omission errors because those errors are not included as non-match words between the translation and reference.", "For such cases, the word embedding-based automatic MT evaluation metric, which is based on word position information, is effective.", "Various automatic evaluation metrics have been proposed for machine translation, but none is suf-ficient for NMT.", "Actually, BLEU ( Papineni et al., 2002) is the representative metric based on n-gram matching.", "Unfortunately, because it is a surface-level metric, it is difficult to address word meaning during evaluation for MT outputs.", "The word-embedding-based distance measure for document ( Kusner et al., 2016) and the word-alignment-based automatic evaluation metric using word embedding ( Matsuo et al., 2017) are effective to address word meanings.", "Nevertheless, they can only ineffectively accommodate word order differences between the translation and reference.", "Given those circumstances, a new metric with word embedding-based automatic MT evaluation metric using word position information is proposed in which the evaluation score is obtained by adjusting the Earth Mover's Distance (EMD) (Rubner et al., 1998, 2000) to the evaluation task.", "The EMD measure represents the distance between two probability distributions.", "Moreover, the EMD distance is obtained based on a signature consisting of the feature and the weight, and the distance between two features.", "The feature, weight, and distance must therefore be defined to adjust EMD to the evaluation task.", "In our proposed metric, the word embeddings and the sentence-level tf (cid:1) idf respectively denote the feature and the weight.", "Consequently, our proposed metric can produce an evaluation based on the word meaning.", "Moreover, our proposed metric uses word position information in the distance between two word embeddings.", "The distance is obtained using cosine similarity and the difference of word position between the 
translation and reference.", "Results demonstrate that our proposed metric can evaluate translations also considering word order differences.", "We designate this new metric as W ord E mbedding-based automatic MT evaluation using W ord P osition I nformation (WE WPI).", "The experimentally obtained results based on the WMT16 metrics shared task (Bojar et al., 2016) demonstrated that our WE WPI achieves the highest correlation with human judgment among several metrics: BLEU, METEOR (Banerjee and Lavie, 2005), IMPACT (Echizen-ya and Araki, 2007), and RIBES (Isozaki et al., 2010).", "Moreover, the correlation of WE WPI is better than that of WEWPI without word position information (WE).", "Results therefore confirmed the effectiveness of WE WPI using word position information.", "Kusner et al. (2016) proposed the Word Mover's Distance (WMD) as a distance measure using word embedding and word alignment.", "This measure obtains the distance between two documents adjusting EMD to a document.", "However, it cannot accommodate differences of word order between the translation and reference.", "Matsuo et al. (2017) also proposed a word-alignment-based automatic evaluation metric using word embeddings for segment-level evaluation.", "As described in that paper, Maximum Alignment Similarity (MAS) was found to have higher correlation with human evaluation than BLEU for European-to-English, which has similar grammar structures.", "For Japanese-to-English, which has different grammar structures, Average Alignment Similarity (AAS) showed better correlation with human evaluation than other metrics.", "However, neither MAS nor AAS uses word position information.", "Therefore, neither can sufficiently accommodate word order differences.", "Actually, WEWPI uses not only the word alignment but also word position information.", "learns distributed word representations from a neural network model and from distributed sentence representations computed with a recursive autoencoder.", "Moreover, it uses a penalty based on translation and reference lengths.", "By contrast, the WE WPI system specifically examines the difference between the word positions of the translation and reference, not the difference of lengths between the translation and reference.", "Therefore, it can sufficiently accommodate word order differences.", "Moreover, it can evaluate the translation efficiently using word embeddings of target languages without requiring large amounts of data or learning time.", "Our WE WPI requires no learning of bilingual knowledge or a relation between translation and reference.", "It needs only a model of word embeddings in advance to apply EMD to the automatic MT evaluation task.", "In a non-trained evaluation metric, MEANT 2.0 ( Lo, 2017; Bojar et al., 2017) uses a distributional word vector model to evaluate lexical semantic similarity and shallow semantic parses to evaluate structural semantic similarity between the translation and reference.", "It is a new version of MEANT ( Lo and Wu, 2011), which is a non-ensemble and untrained metric.", "Moreover, MEANT 2.0 nosrl is a subversion of MEANT 2.0 to evaluate the translation for any output language by removing the dependence on semantic parsers for semantic role labeling (SRL).", "In that case, phrasal similarity is calculated using n-gram lexical similarities.", "However, MEANT 2.0 series do not specifically examine the position of each word in the translation and reference.", "Results show that it is difficult to deal sufficiently with language pairs for 
"In WEWPI, the evaluation score is calculated using the relative difference between the positions of each word in the translation and reference.", "Therefore, WEWPI can evaluate translations while dealing with word order in language pairs for which the grammar differs.", "As described herein, we propose WEWPI as the automatic MT evaluation metric obtained by adjusting the Earth Mover's Distance (EMD) to the automatic MT evaluation task.", "First, we describe EMD.", "Figure 1 depicts an outline of EMD.", "In Figure 1, two probability distributions are presented respectively as P and Q.", "P and Q consist of signatures P_i and Q_j, respectively.", "Each signature consists of a feature (i.e., p_i in P_i and q_j in Q_j) and a weight (i.e., w_{p_i} in P_i and w_{q_j} in Q_j).", "Therefore, the two probability distributions P and Q are defined respectively as P = {(p_1, w_{p_1}), ..., (p_m, w_{p_m})} and Q = {(q_1, w_{q_1}), ..., (q_n, w_{q_n})}.", "Moreover, d_ij represents the distance between two features p_i and q_j.", "The goal of EMD is to obtain the total flow F = [f_ij] that minimizes the overall cost from the perspective of a transportation problem.", "In that case, the overall cost is defined as Eq. (1): WORK(P, Q, F) = Σ_{i=1}^{m} Σ_{j=1}^{n} d_ij f_ij.", "Moreover, four constraints are defined for f_ij, the transportation amount in the transportation problem, to find the minimum F, as Eqs. (2)-(5): f_ij ≥ 0 for 1 ≤ i ≤ m, 1 ≤ j ≤ n (2); Σ_{j=1}^{n} f_ij ≤ w_{p_i} for 1 ≤ i ≤ m (3); Σ_{i=1}^{m} f_ij ≤ w_{q_j} for 1 ≤ j ≤ n (4); Σ_{i=1}^{m} Σ_{j=1}^{n} f_ij = min(Σ_{i=1}^{m} w_{p_i}, Σ_{j=1}^{n} w_{q_j}) (5).", "Constraint (2) requires each amount of weight f_ij to be nonnegative, so that weight is transported only in the direction from signature P_i to signature Q_j.", "In Constraint (3), the amount of weight supplied from P_i (i.e., Σ_{j=1}^{n} f_ij) does not exceed w_{p_i}, the weight of P_i.", "Moreover, in Constraint (4), the amount of weight that Q_j receives (i.e., Σ_{i=1}^{m} f_ij) does not exceed w_{q_j}, the weight of Q_j.", "Finally, by Constraint (5), the total amount of weight is equal to the weight of the lighter distribution.", "In Eqs. (1)-(5), m is the number of signatures in P and n is the number of signatures in Q.", "The EMD is then defined as Eq. (6): EMD(P, Q) = min(WORK(P, Q, F)) / Σ_{i=1}^{m} Σ_{j=1}^{n} f_ij; that is, min(WORK(P, Q, F)) is normalized by the total amount of flow fixed in Eq. (5).", "We describe the computation of EMD using two probability distributions P and Q on a two-dimensional surface.", "Tables 1 and 2 respectively present example signatures for P and Q.", "In Tables 1 and 2, all features p_i and q_j correspond to coordinates (x, y) on the two-dimensional surface.", "Figure 2 depicts an example of an EMD calculation based on the signatures in Tables 1 and 2.", "In Figure 2, the green arrows indicate the amounts of weight f_ij.", "All f_ij are transported only in the direction from P_i to Q_j, according to Constraint (2).", "For each signature P_i, Σ_{j=1}^{n} f_ij does not exceed w_{p_i}, by Constraint (3).", "For example, in P_3, Σ_{j=1}^{3} f_3j is 0.6 (= 0.2 + 0.0 + 0.4).", "It does not exceed 0.6, which is the weight of P_3.", "Moreover, for each signature Q_j, Σ_{i=1}^{m} f_ij does not exceed w_{q_j}, according to Constraint (4).", "For example, in Q_1, Σ_{i=1}^{4} f_i1 is 0.8 (= 0.6 + 0.0 + 0.2 + 0.0).", "It does not exceed 0.8, which corresponds to the weight of Q_1.", "The total amount of weight, Σ_{i=1}^{m} Σ_{j=1}^{n} f_ij, is 2.4.", "It is equal to 2.4 by Σ_{i=1}^{m} w_{p_i} and to 2.4 by Σ_{j=1}^{n} w_{q_j}.", "Therefore, the example of Figure 2 conforms to Constraint (5).", "Moreover, the distance between two features is necessary to obtain EMD.", "When the Euclidean distance is used in this example, 2.236 (= √(1² + 2²)) is obtained as d_11, d_22, d_31, d_33, d_42, and d_43, and the other distances are 3.606 (= √(2² + 3²)) in Figure 2.", "As a result, 5.366 (= 2.236 × (0.6 + 0.6 + 0.2 + 0.4 + 0.2 + 0.4)) is obtained as the value of EMD for the two probability distributions P and Q in Tables 1 and 2.",
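A minimal sketch of the EMD computation just described, cast as the linear program of Eqs. (2)-(5). This is our illustration rather than the paper's implementation; the function name and the use of scipy's solver are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def emd(weights_p, weights_q, dist):
    """Normalized EMD of Eq. (6); dist[i, j] = d_ij, weights as numpy arrays."""
    m, n = dist.shape
    c = dist.flatten()                       # cost of flow f_ij (Eq. 1), f_ij at i*n + j
    A_ub, b_ub = [], []
    for i in range(m):                       # Eq. (3): sum_j f_ij <= w_{p_i}
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1.0
        A_ub.append(row); b_ub.append(weights_p[i])
    for j in range(n):                       # Eq. (4): sum_i f_ij <= w_{q_j}
        col = np.zeros(m * n); col[j::n] = 1.0
        A_ub.append(col); b_ub.append(weights_q[j])
    total = min(weights_p.sum(), weights_q.sum())
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=[np.ones(m * n)], b_eq=[total],  # Eq. (5): total flow fixed
                  bounds=(0, None))                     # Eq. (2): f_ij >= 0
    return res.fun / total                   # minimum work over total flow (Eq. 6)
```

Called with the signature weights of Tables 1 and 2 and the Euclidean distance matrix above, res.fun would recover the minimum work of 5.366 discussed in the example.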
"We obtain WEWPI by adjusting EMD to the automatic MT evaluation task.", "Details of the application of EMD to WEWPI are presented in Section 3.2.2.", "3.2 New Automatic MT Evaluation Metric: WEWPI 3.2.1 Word Alignment using Position Information For the application of EMD to automatic MT evaluation, we use word alignment results.", "Word alignment is done using cosine similarity based on word embeddings and the relative difference between the word positions in the translation and reference.", "In that case, WEWPI obtains align_score using Eqs. (7) and (8): align_score(T_i, R_j) = cos_sim(t_i, r_j) × (1.0 − pos_inf(T_i, R_j)) (7).", "In Eq. (7), t_i and r_j respectively represent the word embeddings of word T_i in the translation and word R_j in the reference.", "cos_sim(t_i, r_j) denotes the cosine similarity between t_i and r_j.", "Moreover, pos_inf(T_i, R_j) represents the relative difference between the position of word T_i in the translation and the position of word R_j in the reference.", "It is defined as Eq. (8): pos_inf(T_i, R_j) = | pos(T_i)/m − pos(R_j)/n | (8).", "In Eq. (8), pos(T_i) and pos(R_j) respectively denote the positions of word T_i in the translation and word R_j in the reference.", "Here, m and n respectively denote the numbers of words in the translation and reference.", "pos_inf(T_i, R_j) becomes larger as the relative difference between pos(T_i) and pos(R_j) becomes larger.", "Therefore, (1.0 − pos_inf(T_i, R_j)) is used as a negative weight for cos_sim(t_i, r_j).", "The ranges of cos_sim(t_i, r_j) and pos_inf(T_i, R_j) are both 0.0-1.0.", "Figure 3 depicts an example of word alignment using Eqs. (7) and (8).", "WEWPI calculates align_score between a word in the translation and all words in the reference.", "Based on those results, the word with the highest align_score in the reference is selected as the word corresponding to the word in the translation.", "In Figure 3, the align_score between that in the translation and you in the reference is the highest (i.e., 0.478) among the align_scores between that in the translation and all words in the reference.", "However, it is lower than the align_score of 0.833 between you in the translation and you in the reference.", "Therefore, no word corresponding to that in the translation can be obtained in the reference.", "Similarly, no word corresponding to should in the translation can be obtained in the reference.", "In contrast, discuss in the translation corresponds to talking in the reference when pos_inf(T_i, R_j) of Eq. (8) is used, although discuss would correspond to topics in the reference if (1.0 − pos_inf(T_i, R_j)) were not used in Eq. (7) (i.e., with align_score = cos_sim(t_i, r_j)).", "The value 0.477, the cos_sim between discuss in the translation and topics in the reference, is greater than 0.460, the cos_sim between discuss in the translation and talking in the reference.", "Here, pos_inf(T_i, R_j) between discuss in the translation and talking in the reference is 0.033 (= | 8/10 − 10/12 |).", "That between discuss in the translation and topics in the reference is 0.550 (= | 8/10 − 3/12 |).", "Consequently, the align_score of discuss in the translation and talking in the reference is 0.445 (= 0.460 × (1.0 − 0.033)).", "That of discuss in the translation and topics in the reference is 0.215 (= 0.477 × (1.0 − 0.550)) using Eq. (7).", "WEWPI can thus select talking in the reference as the word corresponding to discuss in the translation using pos_inf(T_i, R_j).", "The use of pos_inf(T_i, R_j) is effective for correct word alignment.",
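The word alignment of Eqs. (7) and (8) can be sketched as follows. This is our reading of the description above rather than the authors' released code; in particular, the rule that leaves a translation word unaligned is inferred from the that/you example.

```python
import numpy as np

def cos_sim(t, r):
    return float(np.dot(t, r) / (np.linalg.norm(t) * np.linalg.norm(r)))

def pos_inf(i, j, m, n):
    """Eq. (8), with 0-based indices i, j mapped to 1-based word positions."""
    return abs((i + 1) / m - (j + 1) / n)

def align(trans_vecs, ref_vecs):
    """For each translation word, the reference word maximising Eq. (7)."""
    m, n = len(trans_vecs), len(ref_vecs)
    score = np.array([[cos_sim(t, r) * (1.0 - pos_inf(i, j, m, n))
                       for j, r in enumerate(ref_vecs)]
                      for i, t in enumerate(trans_vecs)])
    alignment = []
    for i in range(m):
        j = int(np.argmax(score[i]))
        # leave T_i unaligned when another translation word matches its best
        # reference word with a higher score (as for "that" vs. "you" above)
        alignment.append(j if score[i, j] >= score[:, j].max() else None)
    return alignment
```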
"We obtain WEWPI as a new automatic MT evaluation metric by adjusting EMD to the automatic MT evaluation task.", "In WEWPI, the variables P and Q in Figure 1 respectively correspond to a translation T and a reference R.", "Moreover, the features (i.e., p_i and q_j in Figure 1), the weights (i.e., w_{p_i} and w_{q_j} in Figure 1), and the distance (i.e., d_ij in Figure 1) are required as parameters to adjust EMD to the automatic MT evaluation task.", "As described herein, we use the word embeddings as the features and the sentence-level tf·idf as the weights.", "The weight definition is presented in Eq. (9).", "In Eq. (9), tf denotes the appearance frequency of a word in a translation or reference.", "In addition, df represents the number of sentences in which the word appears among all translations or references.", "In addition, N is the total number of translations or references.", "In this way, WEWPI distinguishes function words from content words using Eq. (9).", "Furthermore, the weight w_{t_i} of each word in the translation and the weight w_{r_i} of each word in the reference from Eq. (9) are normalized using Eqs. (10) and (11), respectively.", "The dependence of w in Eq. (9) on the particular dataset is kept to a minimum by the normalization of Eqs. (10) and (11).", "Moreover, we define the distance d_ij, which is determined from the result of the word alignment described in Section 3.2.1.", "d_ij is obtained using Eq. (12): d_ij = 1.0 − cos_sim(t_i, r_j) × e^(−pos_inf(T_i, R_j)) if T_i corresponds to R_j, and d_ij = 1.0 if T_i does not correspond to R_j (12).", "In Eq. (12), 1.0 − cos_sim(t_i, r_j) × e^(−pos_inf(T_i, R_j)) is used as d_ij when word T_i in the translation corresponds to word R_j in the reference according to the word alignment result.", "pos_inf(T_i, R_j) is obtained by Eq. (8).", "Here, t_i and r_j respectively correspond to the word embeddings of the words in the translation and reference.", "The factor e^(−pos_inf(T_i, R_j)) represents a penalty on cos_sim(t_i, r_j) because it becomes smaller as pos_inf(T_i, R_j) becomes larger.", "As a result, d_ij becomes large when the relative difference between the position of word T_i in the translation and the position of word R_j in the reference (i.e., pos_inf(T_i, R_j)) is large.", "By Eq. (12), d_ij is 1.0 when word T_i does not correspond to word R_j.", "Finally, the range of d_ij is 0.0-1.0.", "Moreover, WEWPI generates the distance matrix using d_ij of Eq. (12).", "Table 3 presents the distance matrix between the translation Are there topics that you think should discuss world? and the reference Are there topics you want to get the world talking about? in Figure 3.", "In Table 3, the bold typeface represents the distance between two aligned words.", "The distance matrix using Eq. (12) is effective because it is not influenced by the words that are not aligned between the translation and reference.", "WEWPI obtains the evaluation score from the word embeddings, the sentence-level tf·idf, and the distance matrix based on Eq. (12).", "The evaluation score of WEWPI is obtained as Eq. (13): WEWPI(T, R) = 1.0 − min(WORK(T, R, F)) / Σ_{i=1}^{m} Σ_{j=1}^{n} f_ij (13).", "In that equation, the range of min(WORK(T, R, F)) / Σ_{i=1}^{m} Σ_{j=1}^{n} f_ij becomes 0.0-1.0 using the weights normalized by Eqs. (10) and (11).", "Near 0.0, the distance between T and R is small.", "However, for automatic MT evaluation metrics, the score should be close to 1.0 when the evaluation of the translation is high.", "Therefore, we obtain WEWPI by subtracting min(WORK(T, R, F)) / Σ_{i=1}^{m} Σ_{j=1}^{n} f_ij from 1.0.", "As a result, between the translation Are there topics that you think should discuss world? and the reference Are there topics you want to get the world talking about?, 0.608 is obtained as the score using Eq. (13).", "WEWPI can evaluate the translation based on the meanings of words using word embeddings.", "Moreover, it can deal with word order using the relative difference between the positions of words in the translation and the reference.",
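Putting the pieces together, the following sketch computes the WEWPI score of Eq. (13). It reuses cos_sim, pos_inf, align, and emd from the sketches above, and assumes the tf·idf weights of Eqs. (9)-(11) are supplied already normalized.

```python
import numpy as np

def wewpi(trans_vecs, ref_vecs, w_t, w_r):
    """w_t, w_r: normalized sentence-level tf-idf weights (Eqs. 9-11)."""
    m, n = len(trans_vecs), len(ref_vecs)
    d = np.ones((m, n))                        # unaligned pairs: d_ij = 1.0
    for i, j in enumerate(align(trans_vecs, ref_vecs)):
        if j is not None:                      # aligned pair: Eq. (12)
            d[i, j] = 1.0 - cos_sim(trans_vecs[i], ref_vecs[j]) \
                            * np.exp(-pos_inf(i, j, m, n))
    # Eq. (13): emd() already divides the minimum work by the total flow
    return 1.0 - emd(np.asarray(w_t), np.asarray(w_r), d)
```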
"We conducted evaluation experiments to confirm the effectiveness of WEWPI.", "The newstest2016 set, which is the main test set of the WMT16 metrics shared task (Bojar et al., 2016), was used.", "The evaluation script is available at http://www.statmt.org/wmt16/results.html .", "Therefore, we can readily obtain the correlation coefficients between the metrics and human judgments in the WMT16 metrics shared task.", "The WMT16 metrics task includes English paired with Czech, German, Finnish, Romanian, Russian, and Turkish.", "All translations, references, and human judgment scores for these language pairs are obtained from the URL described above.", "For these experiments, we used several automatic MT evaluation metrics for comparison with our WEWPI: BLEU, METEOR, IMPACT, RIBES, and WE.", "Here, IMPACT and RIBES, which are surface-based metrics, are effective for language pairs with greatly different word order, such as English and Japanese.", "In addition, WE is an automatic MT evaluation metric that does not perform word alignment.", "It uses only d_ij = 1.0 − cos_sim(t_i, r_j) as the d_ij of Eq. (12) in WEWPI.", "In both WE and WEWPI, the word vectors for seven languages (i.e., English, Czech, German, Finnish, Romanian, Russian, and Turkish) were obtained using fastText (Grave et al., 2018).", "Tables 4 and 5 respectively present the correlation coefficients of to-English and out-of-English evaluation at the system level.", "Table 4: Absolute Pearson correlation of to-English system-level metrics with human assessment variants: RR, standard WMT relative ranking; DA, direct assessment of translation adequacy (number of systems: cs-en 6, de-en 10, fi-en 9, ro-en 7, ru-en 10, tr-en 8).
Metric | cs-en RR/DA | de-en RR/DA | fi-en RR/DA | ro-en RR/DA | ru-en RR/DA | tr-en RR/DA
BLEU | .992/.989 | .905/.808 | .858/.864 | .899/.840 | .962/.837 | .899/.895
METEOR | .995/.991 | .935/.887 | .952/.963 | .934/.909 | .987/.930 | .965/.980
IMPACT | .997/.990 | .925/.841 | .908/.915 | .903/.819 | .962/.840 | .952/.959
RIBES | .995/.990 | .948/.891 | .894/.901 | .954/.794 | .972/.864 | .850/.868
MEANT 2.0 (Lo, 2017) | .989/.990 | .947/.950 | .953/.966 | .940/.946 | .990/.959 | .980/.990
MEANT 2.0-nosrl | .985/.988 | .928/.942 | .969/.979 | .917/.930 | .984/.958 | .978/.987
WE | .986/.976 | .918/.903 | .954/.963 | .885/.884 | .989/.938 | .976/.991
WEWPI | .991/.980 | .958/.927 | .955/.957 | .919/.877 | .991/.926 | .977/.993", "Tables 6 and 7 respectively present the correlation coefficients of to-English and out-of-English evaluation at the segment level.", "In Tables 4-7, RR represents the correlation based on the relative ranking by human judgment of 5 translations at a time.", "The bold typeface shows the highest correlation coefficient among all metrics.", "Moreover, the coefficients of MEANT 2.0 described in (Lo, 2017) are added to Tables 4-6.", "Here, WEWPI achieves the highest correlation with human judgment in Table 5, DA in Table 6, and Table 7.", "Especially, the correlation coefficients of WEWPI are high for language pairs for which the grammar differs (i.e., English-to-German (en-de), German-to-English (de-en), English-to-Turkish (en-tr), and Turkish-to-English (tr-en)).", "[Figure 4: To-English system-level metric significance test of results for human assessment variants, where DA denotes the direct assessment of translation adequacy.]", "Therefore, WEWPI is effective for such language pairs because it uses word position information.", "Moreover, we investigated the significance of the WEWPI results against those of the other metrics, except MEANT 2.0 and MEANT 2.0-nosrl.", "As described herein, the Williams significance test (Williams, 1959) was used to assess differences between dependent correlations.", "Figures 4-9 present significance test results for every competing pair of metrics, including our WEWPI.", "However, the language pairs for which significant differences could not be obtained for any competing pair of metrics are excluded from Figures 4-9 (i.e., cs-en and fi-en in Figure 4; cs-en, fi-en, and ro-en in Figure 5; en-cs in Figure 7).", "In Figures 4-9, green cells signify that the metric shows a significant difference from the other metric with 95% or greater confidence.", "The results demonstrated that our WEWPI yields significantly different results from other metrics.", "Particularly, WEWPI was found to have significantly better results than WE at the segment level, as shown in Figures 8 and 9.", "This particular result demonstrates that the word position information in WEWPI is effective for segment-level evaluation.", "Moreover, WEWPI does not need much time to calculate the scores described in Section 3.2.2.", "However, it takes time to calculate the tf·idf of words and to convert surface-level words into word vectors.", "It is efficient to calculate the tf·idf of all words in the translations and references, and to extract the word vectors corresponding to the words in the translations and references from the fastText models, in advance.", "As described herein, we proposed WEWPI as a new automatic MT evaluation metric.", "It produces an evaluation based on the meanings of words using word embeddings.", "Moreover, it can accommodate word-order differences.", "Evaluation experiments demonstrated that our WEWPI obtains the highest correlation with human judgments among several representative metrics for language pairs for which the grammar differs, and that it is significantly better than other metrics at segment-level evaluation.", "Our future work will improve WEWPI to obtain higher-quality evaluation scores in combination with other metrics.", "We will conduct evaluation experiments using various data.", "Moreover, we will use WEWPI to improve NMT quality.", "For instance, WEWPI can easily be used in Minimum Risk Training (MRT) (Shen et al., 2016), which minimizes the expected loss on the training data.", "This work was partially supported by grants from Hokkai-Gakuen University." ]
[ "objective", "objective", "abstain", "method", "objective", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "objective", "objective", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "result", "method", "result", "abstain", "other" ]
[ "Ellipsis is a natural language phenomenon where part of a sentence is missing and its information must be recovered from its surrounding context, as in Cats chase dogs and so do foxes..", "Formal semantics has different methods for resolving ellipsis and recovering the missing information, but the problem has not been considered for distributional semantics, where words have vector embeddings and combinations thereof provide embeddings for sentences.", "In elliptical sentences these combinations go beyond linear as copying of elided information is necessary.", "In this paper, we develop different models for embedding VP-elliptical sentences.", "We extend existing verb disambiguation and sentence similarity datasets to ones containing elliptical phrases and evaluate our models on these datasets for a variety of non-linear combinations and their linear counterparts.", "We compare results of these compositional models to state of the art holistic sentence encoders.", "Our results show that non-linear addition and a non-linear tensor-based composition outperform the naive non-compositional baselines and the linear models, and that sentence encoders perform well on sentence similarity, but not on verb disambiguation.", "Compositional distributional semantics has so far relied on a tight connection between syntactic and semantic resources.", "Based on the assembly principle of compositionality, these models assign a sentence vector by applying a linear map to the individual word embeddings therein.", "The meaning of cats chase dogs is as follows in (1) additive, (2) multiplicative, and (3) tensor-based models: (1) cats + chase + dogs (2) cats (cid:12) chase (cid:12) dogs (3) cats (cid:62) ( chase dogs ) Some linguistic phenomena, however, rely on copying resources while computing meaning; canonical examples thereof are anaphora and ellipsis, exemplified below:", "These lend themselves to a strict (dogs chase the cat's tail) and a sloppy reading (dogs chase their own tail).", "In these examples, the meaning of at least one part of the sentence is used twice, e.g. the subject in a, the verb phrase chase dogs in b.", "Such cases can often be extended to a situation in which a meaning is used more than twice, e.g. in Cats chase their tail, dogs too, and so do foxes.", "In order to develop distributional semantics for such sentences while respecting the principle of compositionality, one has a choice between a linear or a non-linear composition of resources.", "In the linear case, no information is copied, resulting in vector embeddings such as the following one (when only considering content words): cats + chase + dogs + children In the non-linear case, the necessary resources are copied to resolve the ellipsis, resulting in vectors embeddings such as: cats + chase + dogs + children + chase + dogs One has the same choice when dealing with multiplicative and tensor-based models.", "The question is which of these composition frameworks, i.e. linear versus non-linear, provides a better choice for embedding elliptical sentences.", "To our knowledge, this has remained an open question: although some theoretical work has been done to model verb phrase ellipsis in compositional distributional semantics (Wijnholds and Sadrzadeh, 2018), none of the existing datasets or evaluation methods for distributional semantics focus on elliptical phenomena.", "In this paper, we provide some answers.", "Our starting point is the lambda logical forms of sentences, e.g. 
those produced by the approach of Dalrymple et al. (1991), which uses a higher order unification algorithm to resolve ellipsis.", "We apply to these the lambdas-to-vectors mapping of Muskens and Sadrzadeh (2016, 2017) to homo-morphically map the lambda terms into concrete vector embeddings resulting from a multitude of composition operators, such as addition, multiplication, and tensor-based.", "We work with four vector spaces (count-based, Word2Vec, GloVe, FastText) and three different verb embeddings, and contrast our compositional models with state of the art holistic sentence encoders.", "We evaluate the sentence embeddings by using them in a verb disambiguation and in a sentence similarity task, created by extending previous SVO tasks from Grefenstette and Sadrzadeh (2011a) and Kartsaklis and Sadrzadeh (2013) to an elliptical setting, and obtaining new human judgements using the Amazon Mechanical Turk crowd-sourcing tool.", "Our experiments show that in both tasks, the models that use a non-linear form of composition perform better than the models whose composition framework is linear, suggesting that resolving ellipsis contributes to the quality of the sentence embedding.", "Single-Word Embeddings: Distributional semantics on the word level relies on the embedding of word meaning in a vectorial form: by taking context words as the basis of a vector space one computes the vector components of each word by considering its distribution among corpus data.", "Then a similarity measure is defined on the vector space via the cosine similarity.", "In a count-based model, the context is taken to be a linear window and the corpus is traversed to collect raw co-occurrence counts.", "Then, a weighting scheme is applied to smooth the raw frequencies in the meaning representation.", "More discussion on count-based vector space models can be found in (Tur-ney and Pantel, 2010), and a systematic study of the parameters of count-based word embeddings is given by (Kiela and Clark, 2014).", "With the rise of deep learning techniques, much attention has been given to neural word embeddings (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017), which try to predict rather than observe, the context of a word by optimising an objective function based on the probability of observing a context.", "Compositional Models: The key idea of compositional models is that the meaning of elementary constituents can be combined in a structured way to obtain a representation for larger phrases.", "In a distributional setting, having a compositional operator is imperative: a data-driven model would not be adequate given the sparsity of full sentences in a corpus.", "Moreover, it is not clear that sentences follow the distributional hypothesis.", "Concrete composition operators can roughly be classified as simple and tensor-based.", "Simple models add or multiply the word vectors to obtain a sentence vector.", "The work of Mitchell and Lap-ata (2010) experiments with these models.", "Tensor-based models differ in that they represent complex words as vectors of a higher order: Baroni and Zamparelli (2010) represents adjectives as matrices which, applied to a word vector produce a vector representation of the compound adjective-noun combination.", "The account of (Coecke et al., 2010, 2013; Clark, 2015) generalises this to higher-order tensors, e.g. 
cubes for transitive verbs and hy-percubes for ditransitive verbs.", "The benefit of a type-driven approach over the simple models is that they respect the grammatical structure of sentences: the meaning of man bites dog is distinct from that of dog bites man whereas in an ad-ditive/multiplicative model they would be identical.", "The trade-off is that the tensors themselves have to be learnt; where Baroni and Zamparelli (2010) apply regression learning to learn the content of adjective matrices, for transitive verbs there have been several approaches using multistep regression learning (Grefenstette et al., 2013), relational learning (Grefenstette and Sadrzadeh, 2011a), or a combination of co-occurrence information with machine learning techniques (Polaj-nar et al., 2014a,b; Fried et al., 2015).", "A comparative study between count-based and neural embeddings in a compositional setting was carried out by (Milajevs et al., 2014).", "Neural composition turns the problem of compositionality around by learning the composition operator instead of predicting the result.", "Examples are Skip-Thought Vectors (Kiros et al., 2015), the Distributed Bag of Words model (Le and Mikolov, 2014), InferSent (Conneau et al., 2017), and Universal Sentence Encoder (Cer et al., 2018).", "Ellipsis, Formally: There exists many formal approaches to ellipsis and anaphora in the literature.", "These have generally taken either a syntactic or a semantic form 1 .", "Examples of the syntactic approaches are in the work of Hendriks and Dekker (1995); Morrill and Valentn (2015); Jager (2006); Kubota and Levine (2017); these use directional extensions of categorial grammars that allow for the syntactic types at the site of ellipsis be unified with copies of the types at the antecedent of the elliptical phrase.", "Another approach deletes the syntactic structure at the ellipsis site and reconstruct it by copying across the antecedent structure (Fiengo and May, 1994; Merchant, 2004).", "Semantic approaches (Dalrymple et al., 1991; Szabolcsi, 1987; Pulman, 1997) assume that ellipsis involves underspecification of content and resolve this by producing a predicate via a suitable abstraction from the antecedent.", "For instance, the elliptical phrase", "(b) Cats chase dogs, children do too, will take an initial logical form ( b 1 ) ; a resolution step ( b 2 ) provides it with the lambda term in ( b 3 ) , which constitutes its final semantic form: ( b 1 ) chase ( cats, dogs ) P ( children ) ( b 2 ) P = x.chase ( x, dogs ) ( b 3 ) ( b 1 ) (cid:59) chase ( cats, dogs ) chase ( children, dogs ) The ambiguous example (d) Cats chase their tails, dogs too is treated similarly, but can now obtain its respective strict and sloppy readings by producing predicates ( d 1 ) and ( d 2 ) below: ( d 2 ) P = x.chase ( x, tail ( cats )) ( d 3 ) P = x.chase ( x, tail ( x )) Mixed syntactic/semantic approaches have also been proposed to cover wider ranges of phenomena; see Kempson et al. 
(2015) for an overview.", "The only existing work attempting to join ellipsis analysis with vector embeddings is the proposal of (Kartsaklis et al., 2016), which is preliminary work and gives unwanted results 2 .", "Below, we develop a new such approach.", "1 Although pragmatics approaches exist (Merchant, 2010), we focus here on syntactic and semantic approaches.", "2 The meaning of Bill brought apples and John pears coincides with that of Bill and John brought apples and pears.", "Vectors and their basic operations can be emulated using a lambda calculus with constants for the relevant operations, as shown in (Muskens and Sadrzadeh, 2016).", "They assume a type I (a finite index set) and R (modelling the real numbers) and model any vector as a term of type V := IR ; that is, as a function from indices to real numbers.", "Matrices can then be represented by types M := IIR and in general a tensor of rank n will have type T n := I 1 ...I n R .", "The standard operations like scalar multiplication, addition, element wise multiplication and tensor contraction can be modelled with lambda terms as follows: := rvi.r v i : RV V + := vwi.v i + w i : V V V (cid:12) := vwi.v i w i : V V V 1 := mvij.", "The first three definitions above extend the arithmetic operations of addition and multiplication on real numbers in R to lists of numbers in IR and define corresponding definitions on vectors, and so (cid:12) defines the pointwise multiplication of two vectors.", "The operation 1 defines matrix multiplication; 2 defines the tensor contraction between a cube c (in I 3 R ) and a list of numbers v .", "The vector semantics of a lambda term m is computed by taking a homomorphic image over the set of its constants c .", "This image is computed compositionally from the vector or tensor embeddings of the constants c of m via their homomorphic images H ( c ) , whose types are denoted by T ( c ) .", "Examples of these are given in Table 1 for a tensor-based composition model, where the boldface c denotes the vector/tensor embedding of c .", "Using this table, we obtain homomorphic images of any lambda term over the constants.", "For instance, the lambda term of our exemplary resolved ellipsis phrase ( b 3 ) chase ( cats , dogs ) chase ( children , dogs ) is given the following semantic, obtained by computing H ( b 3 ) : (( chase 2 dogs ) 1 cats ) (( chase 2 dogs ) 1 children ) The constituents of the H ( c ) entries of Table 1 are only exemplary.", "Many other interpretations are possible.", "For instance, taking vector embeddings for all words and replacing all tensor contractions and by + defines a purely additive model.", "The concrete models for transitive sentences that were evaluated by Milajevs et al. 
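The contraction operations above can be sketched directly in numpy, here applied to the homomorphic image H(b3) just shown. The toy dimension and random embeddings are placeholders, and interpreting the conjunction as addition is one of the options considered below.

```python
import numpy as np

def x1(m, v):            # x_1: matrix-vector contraction, M -> V -> V
    return np.einsum('ij,j->i', m, v)

def x2(c, v):            # x_2: cube-vector contraction, T_3 -> V -> M
    return np.einsum('ijk,k->ij', c, v)

d = 4                                            # toy dimension
rng = np.random.default_rng(0)
cats, dogs, children = rng.random(d), rng.random(d), rng.random(d)
chase = rng.random((d, d, d))                    # transitive verb as a cube

# H(b3) for "cats chase dogs and children chase dogs", with the image of
# the conjunction taken to be vector addition:
sentence = x1(x2(chase, dogs), cats) + x1(x2(chase, dogs), children)
```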
"The constituents of the H(c) entries in Table 1 are only exemplary.", "Many other interpretations are possible.", "For instance, taking vector embeddings for all words and replacing all tensor contractions and ∧ by + defines a purely additive model.", "The concrete models for transitive sentences that were evaluated by Milajevs et al. (2014) can all be derived by varying the H(c) entries.", "Below are the sentence meanings obtained by using the Copy Object (CO), Frobenius Additive (FA), Frobenius Multiplicative (FM), and Frobenius Outer (FO) instantiations of the verb, respectively: CO: λo.λs. o ⊙ (verb⊤ × s); FA: λo.λs. s ⊙ (verb × o) + o ⊙ (verb⊤ × s); FM: λo.λs. s ⊙ (verb × o) ⊙ o ⊙ (verb⊤ × s); FO: λo.λs. (s ⊙ (verb × o)) ⊗ (o ⊙ (verb⊤ × s)).", "The vector semantics of the extension of a transitive sentence with a VP-elliptical phrase is obtained by taking each of the above as the semantics of each conjunct of the lambda logical form and interpreting the conjunction operation ∧ as either sum or multiplication.",
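A sketch of the four verb instantiations above, with a d × d verb matrix, * as elementwise (Frobenius) multiplication, and numpy's outer product for FO. The function names are ours, and the conjunction of the two conjuncts is interpreted as addition, one of the two options just mentioned.

```python
import numpy as np

def co(verb, s, o):      # Copy Object
    return o * (verb.T @ s)

def fa(verb, s, o):      # Frobenius Additive
    return s * (verb @ o) + o * (verb.T @ s)

def fm(verb, s, o):      # Frobenius Multiplicative
    return s * (verb @ o) * o * (verb.T @ s)

def fo(verb, s, o):      # Frobenius Outer (note: yields a matrix)
    return np.outer(s * (verb @ o), o * (verb.T @ s))

def elliptical(model, verb, s, o, s2):
    """Meaning of 's verb o and s2 does too': one conjunct per subject."""
    return model(verb, s, o) + model(verb, s2, o)
```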
"For the evaluation of the models in the previous section, we built two new datasets and experimented with count-based and neural vector spaces, and with sentence encoders.", "All models, the new datasets, and evaluation code are available at github.com/gijswijnholds/compdisteval-ellipsis.", "In order to experiment with ellipsis, we extended the verb disambiguation dataset of Grefenstette and Sadrzadeh (2011a) and the transitive sentence similarity dataset of Kartsaklis and Sadrzadeh (2013), henceforth GS2011 and KS2013.", "4.1.1 GS2011 The GS2011 verb disambiguation dataset contains 10 verbs, each with two possible interpretations.", "For each verb v and its two interpretations v1 and v2, the dataset contains human similarity judgments for 10 subject-object combinations.", "For instance, for the verb meet, ambiguous between visit and satisfy, the dataset contains the pairs ⟨system meet requirements, system satisfy requirements⟩ and ⟨system meet requirements, system visit requirements⟩.", "The more likely interpretation is marked as HIGH whereas the unlikely interpretation is marked LOW.", "We extended this dataset as follows: for each combination of a verb triple (v, v1, v2) and a subject-object pair (s, o), where ⟨s v o, s v1 o⟩ is expected to have LOW similarity in the dataset and ⟨s v o, s v2 o⟩ is thus expected to have HIGH similarity, we selected a new subject s′ from the list of most frequent subjects for the verb v2 (as found in the combined ukWaC+WackyPedia corpus) such that it was significantly more frequent for v2 than for v1.", "By doing so we strengthened the disambiguating effect of the context for each verb.", "The subject was selected such that the resulting elliptical phrase pairs made sense.", "For each combination and new subject considered, we added the two sentence pairs in the elliptical form ⟨s v o and s′ does too, s v1 o and s′ does too⟩ and ⟨s v o and s′ does too, s v2 o and s′ does too⟩.", "For example, for the verb triple (draw, depict, attract) and the original sentence pairs ⟨man draw sword, man depict sword⟩ and ⟨man draw sword, man attract sword⟩, we selected the new subject artist and added two pairs, comparing man draw sword and artist does too with man depict sword and artist does too and with man attract sword and artist does too.", "We selected two new subjects for each combination, and in this way we obtained a dataset of roughly 400 entries.", "New human judgments were collected through Amazon Mechanical Turk, by prepending the to each noun and putting each phrase in the past tense.", "As with the original dataset, participants were asked to judge the similarity between sentence pairs using a discrete number between 1 and 7; 1 for highly dissimilar, 7 for highly similar.", "By inserting gold standard pairs of identical sentences, we checked whether participants were trustworthy.", "We collected 25 judgments per sentence pair but excluded participants that annotated fewer than 20 entries of the total dataset.", "We ended up with 55 different participants who ranked more than 20 entries of the total dataset, giving a final amount of ca. 9200 annotations.", "As an example, the verb show was a very hard case to disambiguate in the GS2011 dataset: child show sign had an average score of 2.5 with both child picture sign and child express sign.", "In the new dataset, with the extra subject patient, it became much clearer that the verb had to be interpreted as express, with an average score of 5.869, versus 4.875 for picture.", "The KS2013 sentence similarity dataset contains 108 transitive sentence pairs annotated with human similarity judgments.", "As opposed to the GS2011 dataset, the subjects and objects of each sentence pair are not the same, so several different contexts are compared to one another.", "In this sense, the KS2013 dataset aims to investigate the role of the content of individual words versus the role of composition, as the similarity of sentences might be predictable from the contribution of individual words rather than the specific way of composing them.", "We extend this dataset to cover VP ellipsis by following a similar procedure as for GS2011.", "For each transitive sentence of the form s v o in the dataset, we selected a new subject s′ from a list of most frequent subjects of the verb (again taken from the ukWaC+Wackypedia corpus) and built elliptical entries s v o and s′ does too in such a way that the meaning of the original transitive sentence was changed as little as possible and the resulting elliptical phrase made sense.", "We then considered every transitive sentence pair in the dataset and added the new respective subjects to both sentences.", "For example, for the pair ⟨school encourage child, employee leave company⟩ we selected parent and student to get the new pair ⟨school encourage child and parent does too, employee leave company and student does too⟩.", "We chose two subjects for every original sentence, generating four possibilities for each sentence pair, and a new dataset of 432 entries.", "This dataset was also annotated using Amazon Mechanical Turk, after putting each verb in the past tense and prepending the to each noun in the dataset.", "Gold standard pairs of identical sentences were inserted to validate the trustworthiness of participants.", "The final dataset contains ca. 9800 annotations by 42 different participants.", "To provide a comprehensive study with robust results, we used four vector spaces: a count-based vector space, and newly trained Word2Vec, GloVe, and FastText spaces, as detailed below.", "Count-Based: We used the combined ukWaC and Wackypedia corpora (wacky.sslmit.unibo.it) to extract raw co-occurrence counts, using as a basis the 2000 most frequently occurring tokens (after excluding the 50 most frequent ones).", "When extracting counts, we disregarded a list of stopwords that do not contribute to the content of the vectors.", "We used a context window of 5 around the focus word, and PPMI as the weighting scheme.", "These settings were used for the original KS2013 dataset (Kartsaklis and Sadrzadeh, 2013).", "Word2Vec: The Word2Vec embeddings we used were trained with the continuous bag-of-words (CBOW) model of Mikolov et al. (2013).", "We trained this model on the combined and lemmatised ukWaC and Wackypedia corpora, using the Python implementation available in the gensim package (radimrehurek.com/gensim), with a minimum word frequency of 50, a window of 5, dimensionality 300, and 5 training iterations.",
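A sketch of this training setup with the gensim 4 API (the exact call is an assumption, as the paper predates gensim 4); the two-sentence corpus and the lowered min_count are placeholders so that the snippet runs, with the paper's settings noted in the comments.

```python
from gensim.models import Word2Vec

# placeholder corpus; the paper uses the lemmatised ukWaC+Wackypedia corpora
corpus = [["cat", "chase", "dog"], ["dog", "chase", "cat"]] * 50

model = Word2Vec(corpus,
                 sg=0,             # CBOW, as in the paper
                 vector_size=300,  # dimensionality 300
                 window=5,         # context window of 5
                 min_count=1,      # paper: minimum word frequency of 50
                 epochs=5)         # 5 training iterations
vec = model.wv["cat"]
```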
"GloVe: The GloVe model (Pennington et al., 2014) considers ratios of co-occurrence probabilities by minimising a least-squares objective between the dot product of two word embeddings and the log-probability of the words' co-occurrence.", "We trained a GloVe space on the combined and lemmatised ukWaC and Wackypedia corpora, using the code provided by the original authors (nlp.stanford.edu/projects/glove).", "Similar to the Word2Vec settings above, we trained 300-dimensional vectors with a minimum word frequency of 50 and a window of 5, but with 15 training iterations.", "FastText: The FastText vectors are like Word2Vec vectors, except that each word vector takes into account subword information: words are represented as character n-grams, for which vectors are trained.", "The final word vector is then the sum of its constituent n-gram vectors (Bojanowski et al., 2017).", "We trained a FastText space with the same settings as the Word2Vec space (CBOW, minimum word frequency 50, dimension 300, window 5, 5 iterations), again using gensim.", "In order to work with tensor-based models, we had to represent verbs as matrices rather than as vectors.", "We generated verb tensors using two methods that have been used previously in the literature (Grefenstette and Sadrzadeh, 2011a; Kartsaklis and Sadrzadeh, 2014).", "Relational: For each verb, its corresponding matrix is obtained by summing over the tensor products of the respective subject and object vectors of the verb (subjects and objects collected from the corpus): verb = Σ_i subj_i ⊗ obj_i.", "Kronecker: For each verb, its corresponding matrix is obtained by taking the tensor product of the verb vector with itself: verb~ = verb ⊗ verb.", "In the case of the count-based space, we trained verb matrices of dimensions 2000 × 2000; for the neural word embeddings, the matrices had dimensions 300 × 300.", "We also experimented with the skip-gram extension of Maillard and Clark (2015) and the plausibility model of Polajnar et al. (2014a), but excluded the results because the obtained verb matrices were far below par.",
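The two verb-matrix constructions can be sketched as follows; the subject and object vectors are assumed to be pre-trained embeddings of the verb's corpus arguments, and the random vectors here are placeholders.

```python
import numpy as np

def relational(subj_vecs, obj_vecs):
    """Sum of outer products over the verb's corpus subject/object pairs."""
    return sum(np.outer(s, o) for s, o in zip(subj_vecs, obj_vecs))

def kronecker(verb_vec):
    """Outer product of the verb's own vector with itself."""
    return np.outer(verb_vec, verb_vec)

rng = np.random.default_rng(0)
subjects, objects = rng.random((10, 300)), rng.random((10, 300))
rel_verb = relational(subjects, objects)   # 300 x 300, as for the neural spaces
kron_verb = kronecker(rng.random(300))
```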
"For the experiments, we had two main goals in mind: primarily, we wanted to verify that resolving ellipsis contributes to the performance of a compositional model.", "For this purpose we experimented with non-linear models, i.e. models that resolve the ellipsis (and thus use the verb and object resources twice), versus linear models, which do not resolve the ellipsis (and thus use the verb and object only once).", "Our second goal was to investigate whether, amongst the models that resolve the ellipsis, the ones that do so in a tensor-based way, i.e. using tensors instead of vectors to represent the verbs, perform better than additive and multiplicative models, and how these compare to holistic sentence encoders.", "Hence, we considered three classes of models: linear vector models, non-linear vector models, and tensor-based models.", "Linear Vector Models: These models use every resource exactly once, following the pattern w1 ⋆ w2 ⋆ ... ⋆ wn for any sequence of words w1 w2 ... wn.", "For an elliptical phrase subj verb obj and subj′ does too it will compute the vector subj ⋆ verb ⋆ obj ⋆ and ⋆ subj′ ⋆ does ⋆ too, where ⋆ denotes either addition or multiplication.", "Non-Linear Vector Models: Here, the assumption is that ellipsis is resolved, but the models do not respect word order.", "The meaning of subj verb obj and subj′ does too now is subj ⋆ verb ⋆ obj ⋆ subj′ ⋆ verb ⋆ obj.", "Tensor-Based Models: These models are all assumed to resolve ellipsis and are based on various previous models (Grefenstette and Sadrzadeh, 2011b,a; Kartsaklis et al., 2012; Kartsaklis and Sadrzadeh, 2014).", "Essentially, the tensor-based meaning of subj verb obj and subj′ does too is T(subj, verb, obj) ⋆ T(subj′, verb, obj), where T is a transitive model from (Milajevs et al., 2014) and ⋆ interprets the conjunction of the two subclauses.", "For the verb matrix we used either the relational verb or the Kronecker verb, and for ⋆ we tried both addition and multiplication.", "We did consider a model which simply adds or multiplies the second subject without duplicating the verb phrase, but it performed worse than non-linear addition and multiplication, so we did not include it in this paper.",
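A sketch contrasting the linear and non-linear vector models defined above for subj verb obj and subj2 does too. The names are ours; star ranges over addition and elementwise multiplication, as in the paper.

```python
import numpy as np
from functools import reduce

def compose(star, *vecs):
    return reduce(star, vecs)

def linear(star, subj, verb, obj, and_, subj2, does, too):
    # every token used exactly once; the ellipsis is left unresolved
    return compose(star, subj, verb, obj, and_, subj2, does, too)

def non_linear(star, subj, verb, obj, subj2):
    # ellipsis resolved: verb and object are reused for the second conjunct
    return compose(star, subj, verb, obj, subj2, verb, obj)

# star = np.add for the additive models, np.multiply for the multiplicative ones
```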
best.", "The only exception to this observation is the GloVe space, for which the baseline Vector Only model in fact has a higher correlation than any other model on that space.", "Our second observation is that the non-linear variants of the additive and multiplicative models (which resolve ellipsis but in a naive way) show 10 github.com/jhlau/doc2vec 11 github.com/facebookresearch/InferSent 12 tfhub.dev/google/ universal-sentence-encoder CB W2V GloVe FT Verb Only Vector .4363 .2406 .4451 .2290 Verb Only Tensor .3295 .4376 .3942 .3876 Add.", "an increased performance over the linear models (which do not resolve ellipsis).", "All of this holds for all the four vector spaces, except for the FastText space where the linear multiplicative model achieves significantly higher correlation (0.2928) than its non-linear counterpart (0.0440).", "Overall, these results suggests that a logical resolving of ellipsis and further grammatical sensitivity benefits the performance of composition.", "One interesting fact about our results is that the best compositional methods across the board were those that interpret the coordinator and' as addition; in set-theoretic semantics one interprets this coordinator as set intersection, which corresponds to multiplication rather than addition in a vectorial setting.", "We suggest that the feature intersection approach using multiplication leads to sparsity in the resulting vectorial representation, which then has a negative effect on the overall result.", "This would explain the case of FastText, since those vectors take into account subword information one would expect them to be more fine-grained and therefore conflate more of their features under multiplication.", "The choice of verb matrix was mixed: for the count-based models the Kronecker matrix worked best, for the neural embeddings it was best to use the relational matrix.", "In comparison, the sentence encoder results of Table 4 show the same trend that suggests that resolving ellipsis improves the quality of the embeddings: with the exception of the two InferSent encoders, the resolved models gave a higher correlation than their linear baseline.", "However, none of the encoder models come near the results achieved using the compositional models.", "Since the verb disambiguation dataset contains pairs of sentence that only differ in the verb, the task becomes very much grammar-oriented, and so we argue that the tensor-based models work better since they explicitly emphasise syntactic structure.", "Sentence Similarity: For the extension of the KS2013 sentence similarity dataset, the results are shown in Table 5.", "We again wanted to see if resolving ellipsis benefits the compositional process.", "This was in general true, although we observed a different pattern to the previous experiment.", "In all cases, except for the FastText space, we saw that non-linear models in fact perform better than their linear counterparts.", "But this time the best tensor-based models only outperformed addition for the count-based space: the best models scored 0.7410 and 0.7370 (respectively for the FO and FA models above, Kronecker matrix, = (cid:12) ).", "Both Word2Vec and GloVe worked best with a non-linear additive model, with Word2Vec achieving the overall highest correlation score of 0.7617, and GloVe achieving 0.7103.", "For FastText, the highest score of 0.7408 was achieved by linear addition.", "What is more, the multiplicative model did not benefit from a non-linear approach in the case of GloVe (from 0.3666 to 
"We can see that for the neural word embeddings the additive models work best, with all of them seeing a drop in performance for the tensor-based models.", "Again, the best count-based models use the Kronecker matrix, whereas the neural models benefit the most from using the relational matrix.", "However, this time the best count-based models used multiplication for coordination, with the neural models preferring addition.", "The sentence encoders worked a lot better in the similarity task, with all non-linear resolved models outperforming the baseline model, and the InferSent model even outperforming non-linear addition on a Word2Vec space.", "[Table 5 (excerpt), Verb Only baselines per space — Vector: CB .4562, W2V .5833, GloVe .4348, FT .6513; Tensor: CB .3946, W2V .5664, GloVe .4426, FT .5337.]", "We argue this is the case for two reasons: first, the similarity dataset is more diffuse than the verb disambiguation dataset, since sentence pairs now differ in every word of the sentence, giving more opportunity to exploit semantic similarity rather than syntactic similarity.", "Second, the embeddings from the sentence encoder are larger (4096 dimensions), allowing them to effectively store more information to benefit the similarity score.", "Overall, we conclude again that resolving ellipsis improves the performance of composition, but this time the InferSent sentence encoder seems to work best, followed by the non-linear additive compositional model on Word2Vec, with tensor-based models only performing well in a count-based space.", "In this paper we experimented with vector space semantics for VP ellipsis, working with a large variety of compositional models.", "We created two new datasets and compared the performance of several compositional methods, both linear and non-linear, across four vector spaces, and against state-of-the-art holistic sentence encoders.", "Our main conclusion is that resolving ellipsis improves performance: non-linear models almost always performed better than linear ones in both a verb disambiguation and a sentence similarity task.", "The highest performance on the verb disambiguation task was given by a grammar-driven, tensor-based model in a count-based vector space, whereas for the similarity task the highest performance was achieved by the InferSent sentence encoder, followed by a non-linear additive model on a Word2Vec space.", "Although the neural word embeddings and sentence encoders were largely outperformed on the disambiguation dataset, which places more emphasis on syntactic structure than on semantic similarity, they generally performed better in the sentence similarity case, where the distinction between syntactic and semantic similarity is more diffuse.", "The authors gratefully acknowledge the three anonymous reviewers for their valuable comments.", "Mehrnoosh Sadrzadeh is grateful to the Royal Society for an International Exchange Award IE161631, Dynamic Vector Semantics for Lambda Calculus Models of Natural Language, and for discussion with Reinhard Muskens in this context.", "Gijs Wijnholds would like to express gratitude for support by a Queen Mary Principal Studentship, and the Theory group of the School of Electronic Engineering and Computer Science at Queen Mary University of London.", "Both authors would like to thank Ruth Kempson and Matthew Purver for many helpful discussions." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "result", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "method", "result", "objective", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "objective", "result", "abstain", "abstain", "other", "other", "other", "other" ]
[ "The allure of superhuman-level capabilities has led to considerable interest in language models like GPT-3 and T5, wherein the research has, by and large, revolved around new model architectures, training tasks, and loss objectives, along with substantial engineering efforts to scale up model capacity and dataset size.", "Comparatively little work has been done to improve the generalization of these models through better optimization.", "In this work, we show that Sharpness-Aware Minimization (SAM), a recently proposed optimization procedure that encourages convergence to flatter minima, can substantially improve the generalization of language models without much computational overhead.", "We show that SAM is able to boost performance on SuperGLUE, GLUE, Web Questions, Natural Questions, Trivia QA, and TyDiQA, with particularly large gains when training data for these tasks is limited.", "Over the last several years, remarkable progress has been made within the domain of natural language understanding, with machine-learned models able to solve some tasks at near or above human-level performance.", "This progress has, by and large, been fueled by research centered around 1) better inductive biases , such as the attention-enabled Transformer architecture (Vaswani et al., 2017), 2) the clever leverage of massive corpora of textual data that was historically disregarded as un-labeled, usually in the form of pre-training objectives that strive to teach the model the structure of language (Radford et al., 2019; Devlin et al., 2018), 3) scaling up model capacity and the methods to support it (Shazeer and Stern, 2018), 4) multi-task learning (Raffel et al., 2019), and lastly, 5) larger and more diverse datasets along with ever-improving benchmarks that attempt to test the true capabilities of these models.", "Although these efforts all share the single goal of improving the model's generalization, doing so by explicit changes to the optimization of the loss function has received less attention in comparison.", "Recently, motivated by the both empirical and theoretical findings that flatter minima lead to better generalization (Kleinberg et al., 2018; Shirish Keskar et al., 2016; Chaudhari et al., 2019; Smith and Le, 2017), Foret et al. (2020) proposed a novel modification to vanilla stochastic gradient descent they term Sharpness-Aware Minimization, or SAM.", "They show theoretically and empirically that optimizing with SAM encourages convergence to flatter points in the loss landscape and with it comes the anticipated improvement in out-of-sample error.", "While their empirical findings are limited to computer vision tasks and datasets using convolutional neural networks (ResNets), follow-up work (Chen et al., 2021) showed how SAM is particularly effective on Vision transformers (ViTs) (Dosovitskiy et al., 2020) and MLP-Mixers (Tolstikhin et al., 2021), architectures that are more prone than convolutional ones to land in sharp minima.", "Crucially, they show that when equipped with SAM, ViTs outperform ResNets of similar size and throughput without the need for large-scale pre-training .", "Encouraged by wins in the vision domain, we ask whether SAM can deliver similar gains in the language domain.", "Our contributions are as follows:", "1. 
We show that blithely applying SAM when fine-tuning public pre-trained checkpoints of the text-to-text Transformer (T5) (Raffel et al., 2019) and its multilingual counterpart, mT5 (Xue et al., 2020), on SuperGLUE (Wang et al., 2019), GLUE (Wang et al., 2018), TyDiQA-GoldP (Clark et al., 2020), and the Closed-Book Question Answering (CBQA) tasks from Roberts et al. (2020), namely Web Questions (Berant et al., 2013), Natural Questions (Kwiatkowski et al., 2019), and Trivia QA (Joshi et al., 2017), improves test performance quite markedly.", "Furthermore, by employing an approximation suggested by Brock et al. (2021), these gains come only at the cost of about 25% extra compute.", "2. The improvement brought by SAM often increases with less labeled training data, making SAM indispensable for data-limited tasks.", "We test this by subsampling the training splits of the CBQA and SuperGLUE datasets at rates ranging from 2% to 80%.", "Better Generalization.", "In light of flatter minima generalizing better, Smith and Le (2017) showed that the inherent noise in SGD serves as a form of implicit regularization, preventing the optimization from ever entering sharp valleys.", "Like SAM, Entropy-SGD (Chaudhari et al., 2019) explicitly encourages flatter minima.", "Smith et al. (2021) and Barrett and Dherin (2020) analyzed SGD's generalization formally by way of continuous-time gradient flow.", "Optimization routines based on adversarial risk (Zhu et al., 2019; He et al., 2020) and trust regions (Jiang et al., 2019; Aghajanyan et al., 2020) have been proposed and shown to improve generalization across settings.", "While the number of methods which provide implicit or explicit regularization is overwhelmingly large, methods like early stopping, weight decay (or ℓ2-regularization), dropout (Srivastava et al., 2014), teacher-student or self-distillation (Hinton et al., 2015; Mobahi et al., 2020), label smoothing (Müller et al., 2019), batch normalization (Ioffe and Szegedy, 2015), mixup (Zhang et al., 2017), and data augmentation more broadly are among the most widely used in practice.", "Marginalization of Bayesian neural networks, though challenging, has been shown to result in superior generalization in some settings (Wilson and Izmailov, 2020; MacKay, 1995).", "While first-order optimization via SGD has been the prevailing way of training neural networks due to its efficiency and effectiveness, alternative second-order methods like K-FAC (Martens and Grosse, 2015) and Shampoo (Gupta et al., 2018) have slowly gained traction, often enabled by clever engineering to make them feasible at scale.", "Notably, Anil et al. (2020) present a scalable implementation of Shampoo that provides significant convergence and wall-clock time improvements compared to first-order methods.", "They demonstrate superior performance on machine translation and language modeling.", "SAM.", "While this is, to the best of our knowledge, the first work detailing the benefits of SAM for language tasks, there have been successful applications of SAM in the vision domain.", "Notably, Chen et al.
(2021) showed that convolution-free vision models like vision transformers (ViTs) (Dosovitskiy et al., 2020) and MLP-Mixers (Tolstikhin et al., 2021) suffer from sharp minima and that SAM indeed smooths their loss landscapes.", "They crucially show that ViTs and MLP-Mixers outperform ResNets of similar and greater size on ImageNet without the use of pre-training or data augmentations that would otherwise be necessary to achieve reasonable performance.", "They show that SAM induces sparsity in both architectures and leads to more perceptive attention maps in ViTs.", "They observe empirically that data augmentation and SAM are alike in that they both smooth the landscape on average, but the latter does so by explicitly controlling the worst-case curvature, whereas the former smooths over the directions induced by the augmentations.", "Furthermore, they observe that SAM encourages linearity with respect to the input, exhibiting an effect similar to that of mixup (Zhang et al., 2017).", "Lastly, they show that SAM helps contrastive learning and that it enables better robustness on corrupted examples from ImageNet-C (Hendrycks and Dietterich, 2019) and ImageNet-R (Hendrycks et al., 2021).", "In a similar spirit, Brock et al. (2021) proposed speeding up SAM significantly by using fewer examples when computing the ascent step, a strategy which we employ in this work, and they were able to apply it to ResNet model variants to advance the state of the art on ImageNet without extra data.", "Meanwhile, in an attempt to make SAM's radius invariant to the scale of the model parameters, Kwon et al. (2021) proposed an adaptive version named Adaptive Sharpness-Aware Minimization (ASAM), which they then show empirically to outperform normal SAM on a set of benchmark vision tasks.", "We begin by briefly reviewing the SAM algorithm; interested readers can see the original paper for a thorough treatment.", "In our presentation, we use the ℓ2 norm (p = 2 using notation from the original paper), assume a general optimizer (instead of vanilla SGD), and use the approximation proposed by Brock et al. (2021) to compute the ascent gradient (adversarial point) efficiently.", "Given a loss function $L : \mathcal{W} \times \mathcal{X} \times \mathcal{Y} \to \mathbb{R}_+$, SAM seeks to find the parameters $w$ whose neighborhood has low training loss by optimizing the minimax objective $\min_w \max_{\|\epsilon\|_2 \le \rho} L_{\mathrm{train}}(w + \epsilon)$.", "Finding the exact optimum $\epsilon^*$ of the inner maximization is challenging, so Foret et al. (2020) employ a first-order approximation, resulting in $\hat{\epsilon}(w) = \mathrm{argmax}_{\|\epsilon\|_2 \le \rho} \, L_{\mathrm{train}}(w) + \epsilon^{T} \nabla_w L_{\mathrm{train}}(w) = \rho \, \nabla_w L_{\mathrm{train}}(w) / \|\nabla_w L_{\mathrm{train}}(w)\|_2$.", "That is, $\hat{\epsilon}$ is just a scaling of the loss gradient at the current parameters.", "After computing $\hat{\epsilon}(w)$, SAM performs gradient descent using the gradient $\nabla_w L_{\mathrm{train}}(w)|_{w_{\mathrm{adv}}}$ at the nearby adversarial point $w_{\mathrm{adv}}(w) \triangleq w + \hat{\epsilon}(w)$.", "Put another way, SAM plugs-and-plays with any first-order optimizer by simply replacing the gradient of the mini-batch $B$ at the current model weights $w_t \in \mathcal{W}$ with the gradient computed at $w_{\mathrm{adv}}$.", "$w_{\mathrm{adv}}$ itself is computed by taking a gradient ascent step of size $\rho$ along the unit gradient vector $\nabla_w L_M(w) / \|\nabla_w L_M(w)\|_2 \,|_{w_t}$, where $M$ can be the mini-batch $B$, or a subset of it for enhanced efficiency.", "We found that setting $M$ to be 1/4 of $B$ sped up the method significantly with little loss in quality, in line with the recommendation of Brock et al. (2021).",
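In code, the update above is only a few lines. Below is a minimal sketch of a single SAM step in JAX, where loss_fn(params, batch) is an assumed scalar loss and plain SGD stands in for the generic optimizer opt.

```python
import jax
import jax.numpy as jnp

def sam_step(params, batch, ascent_batch, loss_fn, rho=0.15, lr=1e-3):
    # Ascent: gradient on the (smaller) ascent micro-batch at the current weights.
    g = jax.grad(loss_fn)(params, ascent_batch)
    g_norm = jnp.sqrt(sum(jnp.vdot(x, x) for x in jax.tree_util.tree_leaves(g)))
    # Adversarial point: w_adv = w + rho * g / ||g||_2.
    params_adv = jax.tree_util.tree_map(
        lambda w, gw: w + rho * gw / (g_norm + 1e-12), params, g)
    # Descent: gradient of the full mini-batch loss, evaluated at w_adv,
    # then applied to the *original* weights.
    g_adv = jax.grad(loss_fn)(params_adv, batch)
    return jax.tree_util.tree_map(lambda w, gw: w - lr * gw, params, g_adv)
```

Passing the first quarter of each mini-batch as ascent_batch reproduces the 1/4 choice described above.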
(2021).", "The end-to-end algorithm is outlined in Algorithm", "1. 4 Experiments With SAM reviewed, we now discuss our experiments.", "We evaluate SAM on a range of natural language understanding tasks using the T5 (text-to-text Transformer) framework (Raffel et al., 2019).", "T5 casts NLU tasks as sequence-to-sequence ones that are learned using an encoder-decoder Transformer (Vaswani et al., 2017) architecture setup.", "These Transformer models are typically pre-trained on large corpora, like the Colossal Clean Crawled Corpus (C4) (Raffel et al., 2019), with, for example, the objective of predicting a short contiguous span of text that was intentionally corrupted in a snippet of input text.", "The pre-trained model is typically fine-tuned on a single task or a mixture of Algorithm 1 Efficient SAM Algorithm.", "2: initialize parameters w 0 , t = 0 .", "3: while not converged do 4: sample batch B = { ( x 1 , y 1 ) , ..., ( x b , y b ) } .", "5: sample ascent micro-batch M = { ( x 1 , y 1 ) , ..., ( x a , y a ) } .", "6: compute adversarial (ascent) point: w adv = w t + w LM ( w ) || w LM ( w ) || 2 | w t .", "7: compute gradient approximation for the SAM objective: g adv = w LB ( w ) | w adv .", "8: update parameters: w t +1 = opt( w t , g adv ) .", "9: t = t + 1 .", "10: end while 11: return w t multiple tasks, the latter enabled by the fact that the framework treats all tasks as simple input-to-target sequence predictions.", "To this end, we evaluate SAM in two ways:", "1. When publicly available pre-trained checkpoints of the T5.1.1 model variant are fine-tuned with and without SAM, on SuperGLUE, GLUE, TyDiQA, and the Closed-Book Question Answering benchmarks: Web Questions, Natural Questions, TriviaQA.", "We show SAM improves generalization across benchmarks and four model sizes: Small (77M parame-ters), Base (250M), Large (800M), and XL (3B).", "2. To show how it helps when task data is limited, we report results when the training splits of these benchmarks at various rates, ranging from 2% to 80% .", "Framework.", "For all experiments, we train using Jax (Bradbury et al., 2018) and Google Cloud TPUs.", "To ensure fair comparisons, eliminate the impact of exogenous factors, and reduce the possibility of software bugs, we train both standard and SAM-enabled models using the same codebase and settings, so that the code paths are identical except for the gradient calculation at each step, wherein 7362 Model SGlue BoolQ CB CoPA MultiRC ReCoRD RTE WiC WSC Small 67.7 72.6 89.4 / 89.3 67.0 68.5 / 21.4 61.7 / 60.8 69.3 65.4 72.1 Small + SAM (0.05) 68.4 73.5 92.1 / 89.3 61.0 68.5 / 22.8 62.1 / 61.0 69.7 65.7 79.8 Base 75.3 80.0 91.7 / 94.6 71.0 75.4 / 35.4 76.2 / 75.4 80.9 69.3 76.9 Base + SAM (0.15) 78.5 82.2 93.7 / 94.6 78.0 77.5 / 39.1 78.2 / 77.2 85.9 70.4 81.7 Large 84.3 86.6 99.4 / 98.2 89.0 83.7 / 51.0 86.5 / 85.6 89.2 72.9 84.6 Large + SAM (0.15) 84.6 88.0 95.0 / 96.4 86.0 84.0 / 53.7 87.3 / 86.4 89.2 75.2 86.5 XL 87.2 88.6 93.7 / 96.4 95.0 86.9 / 61.1 89.5 / 88.4 91.3 74.9 89.4 XL + SAM (0.15) 89.1 89.4 100.0 / 100.0 95.0 87.9 / 63.7 90.9 / 90.0 92.1 75.5 94.2 Table 1: Experimental results (dev scores) on the (full) SuperGLUE benchmark.", "Efficient SAM.", "In Foret et al. 
"In Foret et al. (2020), the idea of partitioning the ascent mini-batch into m disjoint micro-batches, computing a distinct adversarial point for each micro-batch, and then averaging the SAM gradients at each of these points was proposed under the name m-sharpness.", "(An implementation of SAM is available at https://github.com/google-research/sam.)", "It was noted there and in follow-up work (Chen et al., 2021) that m > 1 can result in better performance.", "This modification incurs m-times more compute under a naive sequential implementation (though it can be parallelized well if multiple devices are available).", "Meanwhile, Brock et al. (2021) suggest (in their Appendix) using roughly 20% of the examples from the mini-batch for computing the adversarial point, observing little loss in model quality.", "With m = 1, this approximation roughly reduces SAM's relative runtime from 2x to 1.2x.", "Since we understand how a 2m-x slow-down of model training may be prohibitive or may significantly deter SAM's widespread adoption, we, at the possible loss of larger improvements, set m = 1 and use 1/4 (25%) of the mini-batch, or the number of available training devices (TPU cores in our case), whichever is larger, to compute SAM's adversarial point.", "This is necessary because the mini-batch gradient computation is parallelized over devices and each device must receive at least one example.", "We've observed from wall-clock times that with these settings, SAM is all in all about 25% slower than standard training.", "Table 4: Experimental results (F1/EM, test scores) on the (full) CBQA tasks.
Model               Natural Q.  Web Q.     TriviaQA
Small               16.7/12.4   22.8/16.5  10.2/7.3
Small + SAM (0.05)  17.5/13.1   23.5/16.9  11.0/7.8
Base                23.2/18.1   29.7/22.5  19.3/15.3
Base + SAM (0.15)   25.7/20.6   31.0/24.5  21.5/17.4
Large               27.4/22.3   34.3/27.6  25.2/20.9
Large + SAM (0.15)  30.6/25.0   36.4/29.6  28.5/24.2
XL                  33.5/27.5   39.3/31.6  36.5/31.1
XL + SAM (0.15)     34.7/28.8   40.7/33.3  38.0/32.6", "Hyper-parameters.", "SAM has a single hyper-parameter ρ, which is the size of the step taken along the unit adversarial gradient vector.", "We search the range [0.02, 0.05, 0.1, 0.15, 0.2, 0.3] a single time only, when fine-tuning on SuperGLUE.", "We found that 0.05 is a reasonable choice for T5.1.1 Small models, and 0.15 for the Base, Large, and XL variants, and so for all subsequent experiments except for TyDiQA, we use these choices without additional tuning.", "For the mT5 model on TyDiQA, we found that a smaller ρ was necessary for good performance.", "For this, we searched the range [0.01, 0.02, 0.05].",
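The micro-batch sizing rule just described amounts to a one-liner; in the sketch below, jax.device_count() stands in for the number of TPU cores (the helper function and its naming are ours).

```python
import jax

def ascent_batch_size(batch_size: int, frac: float = 0.25) -> int:
    # A quarter of the mini-batch, but never fewer examples than devices:
    # the gradient computation is sharded, so every device needs >= 1 example.
    return max(int(batch_size * frac), jax.device_count())
```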
"For all fine-tuning, we use the AdaFactor optimizer with learning rate 1e-3, batch size 128, and the T5.1.1 settings.", "For SuperGLUE, we use a 10% dropout rate, input sequence length 512, target sequence length 62, and fine-tune for 250k steps.", "For Natural Questions, Web Questions, and TriviaQA, we use 5% dropout, input sequence length 38, target sequence length 18, and fine-tune for 20k steps.", "For TyDiQA, we use the official, public mT5 checkpoints, 10% dropout, input sequence length 1024, target sequence length 512, and fine-tune for 20k steps.", "We run each experiment once, due to resource constraints, and we take the best checkpoint (stored every 1k steps for SuperGLUE and GLUE and every 200 steps for all other datasets) across training steps.", "[Figure 1: CBQA results at various training data sampling rates, for the Small (top half) and Base (bottom half) models.]", "Following standard practice, we report the best checkpoint for each task-metric pair (e.g., SuperGLUE CB F1) individually.", "Results for SuperGLUE and GLUE are shown in Table 1 and Table 2, respectively.", "We observe that SAM improves the overall scores for both benchmarks across all T5 model sizes.", "For Base and XL sizes on SuperGLUE, SAM brings 4.2% and 2.1% relative gains in overall score respectively, while the gain for Large on GLUE is 2.4%.", "As shown in Table 4, on the Natural Questions, Web Questions, and Trivia QA tasks, we observe improvements for each task, metric (F1 and EM), and model size.", "For Base, we see a 13.8%, 8.8%, and 13.7% gain on the exact match metric for Natural Questions, Web Questions, and Trivia QA, respectively.", "For Large, these figures are 12.1%, 7.2%, and 15.7%.", "Table 3 shows the results for TyDiQA-GoldP.", "Here, we observe more modest improvements, in the 1-2% range.", "SAM improves performance on all model sizes.", "In light of the conventional wisdom that larger models generalize better, we suspected, a priori, that SAM would be more helpful for the smaller models we consider, like Small and Base, and that we should expect substantial diminishing returns as we scale up the model size.", "Table 6: SuperGLUE results when only 5% of the training data is available.
Model               SGlue  BoolQ  CB          CoPA  MultiRC    ReCoRD     RTE   WiC   WSC
Small               50.2   60.8   37.0/55.4   52.0  60.1/11.0  33.9/32.5  54.5  54.4  65.4
Small + SAM (0.05)  51.9   60.5   45.6/66.1   53.0  61.1/12.5  36.7/34.5  52.3  55.2  66.3
Base                52.9   59.6   32.3/55.4   53.0  60.2/11.8  47.8/46.5  58.5  57.2  68.3
Base + SAM (0.15)   56.7   61.4   41.8/64.3   55.0  62.4/15.7  59.7/57.9  62.5  55.3  68.3
Large               62.8   65.3   40.1/62.5   62.0  71.6/24.0  80.4/78.9  69.7  57.7  69.2
Large + SAM (0.15)  64.3   77.3   47.9/69.6   59.0  69.0/20.4  81.5/80.0  65.0  59.6  69.2
XL                  75.9   84.5   57.0/82.1   86.0  82.4/48.7  83.3/81.7  78.7  66.0  74.0
XL + SAM (0.15)     77.0   82.5   58.9/83.9   85.0  79.9/45.3  86.8/85.6  80.5  64.4  83.7", "Surprisingly, we did not observe any clear pattern with regard to size: indeed, sometimes the gains on XL were larger than those on Small.", "Thus, we lean toward recommending SAM to all practitioners, regardless of the regime of model capacity they are working in.", "SAM improves single-task and multi-task learning alike.", "Thus far, SAM has been trained on a mixture of tasks, where the influence of a particular task is proportional to the number of examples in its training split (i.e.
no artificial up- or down-weighting).", "To rule out the possibility that the gains observed are solely due to some ability of SAM's to leverage multi-task learning and improve cross-task transfer, we conduct the following ablation.", "For each of the three CBQA tasks, we train only on a single task and report the performance on that task's test set.", "Results are shown in Table 5.", "Indeed, we see similar gains when training and testing on each single task individually.", "We conclude that the mechanism driving SAM's improvements affects single-task and multi-task learning alike.", "We now switch gears and evaluate whether or not SAM helps when training data is scarce.", "Prior work (Chen et al., 2021) showed that for vision models and tasks, SAM helps more when there is less training data to learn from.", "To test whether this holds for language, we do as follows: we subsample the training splits for both the SuperGLUE and CBQA datasets at rates ranging from 2% to 80%, and observe test performance when the public checkpoint is fine-tuned with and without SAM.", "SuperGLUE and CBQA results at a 5% sampling rate are shown in Tables 6 and 7, respectively.", "In both cases we see again that SAM boosts performance across the board, adding, for example, a whopping 7.2% relative improvement for the Base model on 5% SuperGLUE, and a relative 8.86%/16.6% to F1/EM on Natural Questions.", "Figure 1 plots the performance on the three CBQA tasks as a function of the sampling rate.", "We observe consistent gains from SAM across the size of the subsampled training set, with the relative improvement appearing largest when the subsampling rate is around 20%.", "Figure 2 shows the impact of SAM's hyper-parameter ρ, the ascent micro-batch size a, and the sharpness factor m on the (full) SuperGLUE benchmark for the Base model.", "For ρ, we see that all tested values perform better than fine-tuning without SAM.", "However, 0.15 is a sweet spot, performing better than values below or above it.", "Thus, practitioners with little computational budget for hyper-parameter tuning may still see large gains by using a non-optimal ρ, while those with a generous budget should consider tuning.", "For the ascent micro-batch size a, we see that when the normal (descent) batch size is 128, there is improvement as a is increased to 32, but little past this point.", "Thus, setting a to be 1/4 of the descent batch size, as we do throughout our experiments, provides a good trade-off between performance and computational overhead.", "Increasing the sharpness m, where each of the m ascent micro-batches has size 32/m, does not improve performance here.", "We thus recommend a default of 1, which is the setting used across our experiments.", "Full results are shown in the Appendix.", "To the best of our knowledge, this paper is the first to demonstrate how the recently-proposed Sharpness-Aware Minimization can be applied for fine-tuning the ubiquitous text-to-text Transformer (T5) and its multilingual counterpart mT5 on language tasks of broad interest.", "We thereby corroborate the already-documented success the method has had in the vision domain.", "Furthermore, we reveal SAM's benefits when data is limited by fine-tuning on subsamples of the original task training split.", "By approximating the ascent step of the algorithm via fewer samples, we show how large gains can be had across benchmarks and model sizes while adding only around 25% additional compute and wall-clock training time.", "Our hope is
that this work will spur SAM's adoption in the natural language processing community the way it is starting to in the vision one." ]
[ "abstain", "abstain", "objective", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "objective", "abstain", "result", "result", "abstain" ]
[ "Machine reading is an ambitious goal in NLP that subsumes a wide range of text understanding capabilities.", "Within this broad framework, we address the task of machine reading the time of historical events, compile datasets for the task, and develop a model for tackling it.", "Given a brief textual description of an event, we show that good performance can be achieved by extracting relevant sentences from Wikipedia, and applying a combination of task-specific and general-purpose feature embeddings for the classification.", "Furthermore, we establish a link between the historical event ordering task and the event focus time task from the information retrieval literature, showing they also provide a challenging test case for machine reading algorithms.", "1 1 Introduction Machine reading concerns the extraction of entities and relations from text and the ability to use them meaningfully, for instance by answering questions based on them, inferring other relations from them, or using them to compile knowledge bases.", "Such an inclusive task definition necessarily builds on a wide range of NLP capabilities, from syntactic and semantic analysis, to the use of world knowledge and common sense.", "The inclusive nature of the task supports the development of general-purpose methods, but also results in low performance in absolute terms, difficulty in defining widely agreed-upon evaluation protocols, and difficulties identifying the sources of prediction errors (Stanovsky and Dagan, 2016; Rajpurkar et al., 2016; Clark et al., 2018).", "event took place.", "This distinguishes it from traditional question answering (Rajpurkar et al., 2016) as the answer may not be given in the text but the models should still be able to place events in the correct period of time.", "In turn, this means that models trained for historical event ordering may have real-world applications such as to serve as a fallback for temporal question answering when the answers are not present in the text and to improve search engines that leverage the implicit time of queries (Gupta and Berberich, 2016).", "Concretely, given a short text description of a historical event, and an external data source (hence-forth contextual information or CI), the task is to predict the year in which the event happened.", "The external source in our case is Wikipedia.", "For example, given the event description The Government of Turkey expels Patriarch Constantine VI from Istanbul, the task is to infer the year it took place (i.e., 1925).", "We select Wikipedia as a source for contextual information, due to its broad coverage, and the wide interest it receives in the NLP community.", "Indeed, Wikipedia has often featured as a semi-structured knowledge base, e.g., as a source of concept grounding (Bunescu and Pasca, 2006) and indirect supervision (Mintz et al., 2009).", "We hypothesize that aside from time expressions, the CI words themselves give an approximate time in which an event happened.", "For example, the presence of the word spacecraft in the CI probably indicates an event that occurred after 1900, while the presence of the word sword most likely indicates an event that occurred before 1900.", "The task is therefore different from tasks addressing the extraction and normalization of time expressions, or from related tasks pursued in the context of information retrieval (see 8).", "Our results support this hypothesis, and demonstrate that even when time expressions are not present in the data, it is still possible to predict the 
approximate year in which an event happened.", "We compile two datasets for the task, based on the Wikipedia On This Day webpages (WOTD) and the On This Day website (OTD).", "We consider WOTD an in-domain setting, given that it is taken from Wikipedia as well (albeit from an entirely disjoint part of Wikipedia).", "The OTD setting was selected to be maximally challenging for leveraging external data sources, since (1) event descriptions are taken from a different website, and may be formulated very differently from Wikipedia; and (2) it is an order of magnitude larger, and so the classifier has plenty of data to train on, even without relying on external data sources.", "Our results show that on WOTD, good performance can be obtained by detecting relevant sentences from Wikipedia and extracting year mentions in them, but that substantially better performance can be reached when additionally encoding the entire sentences, using neural machinery.", "In OTD, CI yields more modest improvements.", "Results in absolute terms are high: the best models obtain a mean Kendall's τ correlation with the correct event ordering of 0.77 (WOTD) and 0.71 (OTD).", "The historical event ordering (HEO) task is defined as follows.", "Given a set of brief event descriptions and some textual resource, the task is either to predict the year in which each event occurred, or to find a ranking of events such that they are ordered by date of occurrence.", "The first variant is stronger than the second, as it implies a ranking.", "Our evaluation uses both rank correlation (Kendall's τ) and measures of the distance between the year the event took place and the predicted year.", "See Section 5.2 for details.", "Differences from Question Answering.", "While traditional question answering tasks require the answer to be in the text (e.g., Hermann et al., 2015; Rajpurkar et al., 2016), the HEO task is based on estimating the time of occurrence of an event.", "This estimation is based solely on lexical cues, and does not require an explicit answer in any text.", "This is a major advantage of HEO models, as explicit answers are not always present in the text, for two reasons:", "(i) we would need a massive amount of text for good coverage of historical events, which may be unfeasible to use in real-world applications; and", "(ii) new events are constantly occurring, and existing machine reading comprehension models will invariably fail on those (e.g., When was Donald Trump elected president?
will not be covered in old data, but could be inferred to have happened recently based on recognizing the named entity Donald Trump).", "As answers are not guaranteed to be in the text, the HEO task is somewhat more challenging than traditional question answering tasks.", "The task's challenge is also evidenced in that it requires temporal commonsense reasoning and in being challenging for humans (see Section 6).", "Real-World Applications.", "As previously mentioned, HEO models do not assume the presence of the answer in the source text, and can thus be used for temporal question answering when the answers are not present in it.", "By leveraging the lexical information that exists only in the question itself, these models can serve as a fallback for such cases.", "Other possible applications are the dating of historical documents based solely on the documents' text, improving search engines that leverage the time of queries (Gupta and Berberich, 2016), as well as making inferences that involve rough temporal placement of the statement (e.g., inferences involving refrigerators are unlikely to be relevant before the 20th century).", "This work introduces two datasets: WOTD and OTD.", "Despite the similarity in their names, we are not aware of any influence or other relation between them.", "Using both datasets thus makes our experimental analysis less prone to being biased by dataset-specific artifacts.", "Wikipedia On This Day (WOTD) was scraped from Wikipedia's On this day webpages (e.g., https://en.wikipedia.org/wiki/Wikipedia:On_this_day/Today, accessed 03/2018).", "The dataset contains 6,809 entries.", "Some example entries are presented in Table 1.", "Events in Wikipedia's On This Day pages are crowdsourced, but must adhere to specific guidelines (https://en.wikipedia.org/wiki/Wikipedia:Selected_anniversaries, accessed 04/2020), which include the validity and overall relevance of the historical event.", "The earliest label in this dataset is 1302, and the latest is 2018.", "The median year is 1855.0, whereas the mean is 1818.7.", "The standard deviation is 156.5 years.", "On This Day (OTD) is a scrape of the On This Day (Today in History, Film, Music and Sport) website (Li, 2018; https://www.onthisday.com, accessed 01/2019).", "On This Day has a dedicated team that adds and verifies content, and responds to corrections from the public (https://www.onthisday.com/about.php, accessed 04/2020).", "The dataset contains 75,135 entries, each consisting of a sentence describing the event and the event's date.", "We removed 96 events from the original dataset which happened BCE (Before Common Era), and also removed events that had not happened yet.", "The earliest event in the dataset occurred in year 1 CE (Common Era), and the latest occurred in 2018 CE.", "The median label is 1960.0, while the mean label is 1913.8, so the distribution of labels is not uniform: there are more events occurring in recent times.", "The standard deviation for the labels is 172.3 years.", "Examples of entries from OTD are presented in Table 1.",
"We note that the overwhelming majority of events in the datasets are real historical events, and though we did not conduct an exhaustive analysis, the only two we identified as fictional were removed by our filters.", "There are 8 events that are dated in the future, and all but one of those (Earth's 1st contact with the extra-terrestrial Vulcan species in the Star Trek universe, on 2063 CE) correspond to either calendar occurrences (e.g., Beginning of 2nd Julian Period (1/1 OS), on 3268 CE) or astronomical events (e.g., Comet Swift-Tuttle approaches close to Earth, on 2126 CE).", "Our pruning strategy (discard events before 1 CE) was deliberately aggressive, removing 88 events including widely accepted ones (e.g., Battle of Actium, on 31 BCE); however, it is also effective in removing potentially fictitious events (e.g., Creation of the world begins according to the calculations of Archbishop James Ussher, on 4004 BCE) or events whose exact date may not be known (e.g., Battle of Megiddo, dated to 1457 BCE but subject to debate).", "Figure 1 shows the distribution of event years in OTD and WOTD.", "Both datasets have significantly more recent events, from the last few centuries.", "We use a random 80/10/10 split of each dataset to form the training, validation and test sets.", "We propose two models: a bag-of-embeddings model (BOE) and a recurrent neural network model (LSTM).", "Both take a training example and output a timestamp, in our case the year of the event.", "We explore two supervised settings: a classification setting, where each possible year corresponds to a different class, and a regression setting, where the labels are the numerical value of the timestamp.", "As baselines, we define two models: one predicts the mean year of the training set (MEAN), and one predicts the median year present in the extracted CI, falling back to the other baseline if no years are found (CIYEAR).", "Key Entities And Actions.", "We first identify the key entities and actions in each event description.", "Concretely, for a given event description e, we define its key entities to be phrases from e that are likely to be the topic of a Wikipedia article that contains information relevant to e.", "We define key actions to be a tuple of all verbs in e, excluding some aspectual (e.g., begin) and auxiliary verbs.", "We lemmatize all key actions.", "For example, given the event description The Sixth Coalition attacks Napoleon Bonaparte in the Battle of Leipzig, we mark (Sixth Coalition, Napoleon Bonaparte, Battle of Leipzig) as the key entities, and attack as the key action (in some cases, overlapping entities are extracted; duplicate articles are removed during article retrieval).", "Entities and actions are extracted using a set of pre-defined rules, based on linguistic features such as part-of-speech (POS) tags, syntactic dependency labels, and, for words recognized as named entities, entity type.", "Linguistic features, including named entities, are extracted using spaCy (www.spacy.io; we used spaCy's v2 en_core_web_lg model).", "Some example rules for detecting key entities (see the sketch after this list) are:", "1. Take all named entities, excluding some entity types such as MONEY, PERCENT and ORDINAL.", "2. Take all nominal subjects, except pronouns and nominalized adjectives.", "For example, for The Sixth Coalition attacks Napoleon Bonaparte in the Battle of Leipzig, Sixth Coalition is marked as a key entity.", "The complete set of rules can be found in the supplementary material.",
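A minimal sketch of these two rules, assuming spaCy's en_core_web_lg model; the aspectual stop list is illustrative and does not reproduce the paper's complete rule set.

```python
import spacy

nlp = spacy.load("en_core_web_lg")
EXCLUDED_ENT_TYPES = {"MONEY", "PERCENT", "ORDINAL"}
ASPECTUAL_VERBS = {"begin", "start", "continue"}  # illustrative stop list

def key_entities_and_actions(event: str):
    doc = nlp(event)
    # Rule 1: all named entities, excluding some entity types.
    entities = [ent.text for ent in doc.ents if ent.label_ not in EXCLUDED_ENT_TYPES]
    # Rule 2: nominal subjects, except pronouns.
    entities += [tok.text for tok in doc if tok.dep_ == "nsubj" and tok.pos_ != "PRON"]
    # Key actions: lemmatized verbs, minus auxiliaries and aspectual verbs.
    actions = tuple(tok.lemma_ for tok in doc
                    if tok.pos_ == "VERB"
                    and tok.dep_ not in ("aux", "auxpass")
                    and tok.lemma_ not in ASPECTUAL_VERBS)
    return entities, actions
```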
"The majority of key entities are named entities and are therefore identified by the first rule above.", "Article Retrieval.", "We use the extracted key entities to retrieve relevant Wikipedia articles.", "For each key entity, we retrieve the first search result returned for the entity name, as proposed by the Wikipedia API.", "We use the Python Wikipedia library (www.pypi.org/project/wikipedia) for performing the queries.", "Sentence Filtering.", "Filtering seeks to identify sentences related to the historical event in question.", "For example, for the event The Skye Bridge is opened, the sentence Construction began in 1992 and the bridge was opened by Secretary of State for Scotland Michael Forsyth on 16 October 1995 from the article Skye Bridge is relevant.", "We experimented with the following filters:", "1. Sentences from an article with title t_i that contain one or more t_j for j ≠ i, and a key action.", "2. Sentences from an article with title t_i that contain one or more t_j for j ≠ i.", "3. Sentences from an article with title t_i that contain all t_j for j ≠ i.", "4. Sentences that contain a date.", "5. Sentences from an article with title t_i that contain one or more t_j for j ≠ i, and a date.", "Following a manual inspection of the extracted sentences with each of the methods, we find the following method works best: (1) find all sentences according to the first filter; (2) if no relevant sentences are found, apply the second filter instead.", "In addition, we add the original textual description of the event (taken from OTD/WOTD) to the list of relevant sentences.", "Extracting Year Mentions.", "Given the relevant sentences for each event, we extract from them all year mentions.", "Years are extracted using the following method, sketched in code below: first, we use named entity recognition to extract all dates.", "Second, of the words recognized as dates, we keep only those whose POS tag is NUMBER.", "We then parse the dates and extract years, using a simple rule-based parser.", "We present here some statistics regarding years extracted for the WOTD validation set.", "For 1.8% of the events, the real year appeared in the event title itself.", "For 59.5% of the events, at least one year appeared in the contextual information extracted from Wikipedia.", "Out of the events for which at least one year was extracted, 59.5% had the correct year in the extracted information.", "In total, for 35.4% of the events, the correct year appeared in the contextual information extracted.", "To obtain an estimate of the difficulty of the task, we design two baseline models.", "The MEAN model predicts the mean year seen in the training set, adding Gaussian noise ε ∼ N(0, 1 yr) to break ties and induce an ordering.", "The CIYEAR model extracts year mentions, as detailed above, and predicts the median of all extracted years.", "If no years are found, the model defaults to MEAN.",
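A sketch of the year-extraction step follows; note that spaCy's coarse POS tag for numerals is NUM, and the final digit check is our stand-in for the rule-based date parser, whose exact rules are not spelled out here.

```python
import spacy

nlp = spacy.load("en_core_web_lg")

def extract_years(sentences):
    years = []
    for sentence in sentences:
        doc = nlp(sentence)
        for ent in doc.ents:
            if ent.label_ != "DATE":
                continue  # first step: keep only date entities
            for tok in ent:
                # second step: keep numeric tokens, then treat plausible
                # 1-4 digit numbers as candidate years
                if tok.pos_ == "NUM" and tok.text.isdigit() and len(tok.text) <= 4:
                    years.append(int(tok.text))
    return years
```

The CIYEAR baseline is then just the median of extract_years(ci_sentences), falling back to the training-set mean when the list is empty.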
"We use two types of features: (1) the average of the word embeddings for all lemmatized words in the extracted sentences, excluding stop words and punctuation (as defined by spaCy); and (2) the median value of all year mentions.", "To represent the median year, we use one-hot encodings for the tens, hundreds and thousands digits of the median year, and concatenate this encoding to the average embedding.", "We experimented with encoding the least significant digit as well, but found this lowers results.", "We explore two variants of the model:", "Classification.", "In the classification setting, the final module consists of a multilayer perceptron (MLP), where the class labels are the target years.", "We note that in the classification setting, the predicted years can only be those that appear in the training set.", "Since most of our evaluation metrics do not require an exact prediction, but rather an approximate prediction, the classification still yields good results.", "The final layer is a softmax layer, and the loss function used is log-loss.", "Regression.", "In the regression setting, the network architecture is an MLP with a single output.", "The regression target is the year of occurrence.", "The loss function used is the L1 loss.", "We experimented with the mean squared error (L2) loss as well, but this gave lower performance.", "The LSTM model takes as input the tokens of the event text and the extracted sentences.", "A bidirectional LSTM (Hochreiter and Schmidhuber, 1997; Graves et al., 2005) is used to compute an encoding of the event sentence (e) and each CI sentence (c_1, ..., c_n).", "We then use an attention mechanism (Bahdanau et al., 2015) to compute a similarity score between the event sentence and each CI sentence, and compute an attention-weighted average of the CI encodings, c′.", "When training models with CI, we concatenate e and c′ and use that as input to an MLP that performs the final year prediction.", "When not using CI, the only input to the MLP is e.", "The structure of the MLP depends on whether the model is operating in a classification or a regression setting.", "The two variants we explore are:", "Classification.", "In the classification setting, the final module is composed of an MLP that computes the logits of the event happening in a specific year.", "All years between the minimum and maximum year present in the training set are valid targets.", "We minimize the cross-entropy loss of the predicted year.", "Regression.", "In the regression setting, the final module consists of an MLP with a single output.", "The regression target is the normalized year of the event.", "We normalize by subtracting the mean year of the training set and dividing the result by the standard deviation.", "We experimented with regression to unnormalized targets, but found this degraded performance.", "We minimize the L2 loss of the predicted year.", "In this section we describe our experimental setup and the evaluation metrics we use.", "For the BOE model, in the classification setting we set it to have two hidden layers, each with 1000 neurons.", "We ran experiments with GloVe (Pennington et al., 2014) and FastText (Bojanowski et al., 2016) word embeddings and found that GloVe vectors with dimension 300, pretrained on Wikipedia 2014, performed best.", "The initial learning rate of the MLP is set to 0.001.", "We use L2 regularization with λ = 1e-4.", "In the regression setting the model has one hidden layer with 32 units.", "We use GloVe with dimension 300.", "The initial learning rate is set to 0.01.", "In both settings, we use ReLU as the activation function and Adam as the optimizer (Kingma and Ba, 2014).",
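To make the CI aggregation concrete, here is a NumPy sketch of an additive (Bahdanau-style) attention over the CI sentence encodings; the parameter shapes and exact scoring function are our illustrative assumptions, since the text does not spell them out.

```python
import numpy as np

def aggregate_ci(e, ci, W1, W2, v):
    # e: event encoding (d,); ci: CI sentence encodings (n, d);
    # W1, W2: (d, d) learned projections; v: (d,) scoring vector.
    scores = np.tanh(e @ W1 + ci @ W2) @ v   # one score per CI sentence
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over the n sentences
    c_prime = weights @ ci                   # attention-weighted average c'
    return np.concatenate([e, c_prime])      # MLP input when CI is used
```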
"We experimented with L1 and L2 regularization but found that this does not improve performance.", "We found the LSTM model to be sensitive to hyperparameter values, and therefore tuned it individually for each setting.", "The final hyperparameters are shown in Table 2.", "We use the Adam optimizer (Kingma and Ba, 2014) with α = 0.001, β1 = 0.9 and β2 = 0.999, and use PReLU activations (He et al., 2015) in the MLP.", "We train for a maximum of 100 epochs, doing early stopping if the validation loss has not improved in 25 epochs.", "Furthermore, we decay the learning rate by a factor of 0.1 if there is no reduction in validation set loss for 10 epochs.", "Preliminary experiments with GloVe (Pennington et al., 2014), ELMo (Peters et al., 2018) and FastText (Bojanowski et al., 2016) word embeddings showed that concatenating 200-dimensional GloVe and 300-dimensional FastText embeddings performed best.", "We experimented with L2 regularization and dropout on both the MLP and LSTM but found that the performance improvement was negligible, and so we did not use them for our final experiments.", "Our LSTM implementation was done using AllenNLP (Gardner et al., 2018).", "All hyperparameter tuning was done against the development data.", "Kendall's Tau (τ), formally Kendall's rank correlation coefficient (Kendall, 1938, 1945), is a standard metric used to measure agreement between two different rankings of the same set.", "Formally, for two rankings X and Y, the form of a general correlation coefficient (Daniels, 1944) is $\tau = \frac{\sum_{i,j=1}^{n} a_{ij} b_{ij}}{\sqrt{(\sum_{i,j=1}^{n} a_{ij}^2)(\sum_{i,j=1}^{n} b_{ij}^2)}}$ (1), where $a_{ij}$ is the score given to the pair $(X_i, X_j)$ and $b_{ij}$ to the pair $(Y_i, Y_j)$.", "For Kendall's τ, $a_{ij} = 1$ if $X_i < X_j$ and $a_{ij} = -1$ if $X_i > X_j$, and similarly for $b_{ij}$ and Y.", "In plain words, τ is the number of pairs which X and Y order in the same way, minus the number of pairs that are not ordered in the same way, divided by the total number of pairs.", "For the case where there are no ties, Kendall's τ is a shifted and scaled version of pairwise accuracy, where τ = -1.0 corresponds to zero accuracy and 1.0 to perfect accuracy.", "To accommodate ties, we set $a_{ij} = 0$ when $X_i = X_j$, and $b_{ij} = 0$ when $Y_i = Y_j$, as described by Kendall (1945).", "This has the same effect as replacing tied members in each set with all permutations of a contiguous set of integer ranks and averaging by the total number of permutations.", "Exact Match.", "The percentage of events in which the predicted year exactly matches the gold standard.", "Distance under 20Y and 50Y.", "The percentage of events whose prediction error was under 20/50 years.", "Table 3 presents the results of our experiments.", "We report the average of each statistic over 6 runs, alongside the standard error of the mean at 95% confidence.", "We include a detailed comparison of the different architectures on the WOTD dataset.", "We additionally select the best performing BOE and LSTM models on the WOTD development set and train them on the OTD dataset.", "Our results show that the Wikipedia enrichment is an essential component of the protocol.", "For the WOTD dataset, all models exhibit a statistically significant improvement in ordering when adding CI, with the smallest improver being the LSTM classification model, with a +0.053 change in τ, and the largest improver being the BOE regression model, with a change of +0.098 in τ.", "For the OTD dataset, the LSTM model showed a modest but statistically significant improvement when adding CI.",
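For reference, Eq. (1) with the described tie handling can be implemented directly; the vectorized sketch below is ours, and scipy.stats.kendalltau provides an equivalent, more scalable routine.

```python
import numpy as np

def kendall_tau(x, y):
    # a_ij, b_ij in {-1, 0, +1} encode each ranking's pairwise order,
    # with 0 for ties, exactly as in Eq. (1).
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    a = np.sign(x[None, :] - x[:, None])
    b = np.sign(y[None, :] - y[:, None])
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())
```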
"The BOE model presents a minor decrease in performance; however, we obtain a statistically significant improvement of +0.027 in τ by restricting the CI to only include year mentions (this experiment gave the following results: KT 0.615 ± 0.002, EM 10.8 ± 0.2, 20Y 67.6 ± 0.3, 50Y 84.4 ± 0.2, MAE 36.7 ± 0.4).", "As the OTD data and the extracted CI are from different domains, the words of the contextual information most probably add too much noise for the BOE model to handle, which is why a performance improvement is observed when only including years, which are not domain-specific.", "This indicates that leveraging CI is important even in this more challenging scenario, where the training data is large and the CI is from another domain, but also suggests that additional improvements, such as using domain adaptation techniques (Ziser and Reichart, 2017) for bridging the domain difference, are required to obtain better performance.", "One difference between the regression and classification settings is that the latter has higher exact match metrics than the former.", "This reflects the nature of the two architectures: when using L1/L2 regression, the loss is proportional to the difference in the prediction, whereas in classification what matters is the probability assigned to the exact year.", "On the whole, the LSTM model produces better predictions than the BOE model, according to most measures.", "This is perhaps unsurprising, as it is able to capture word context when analyzing the inputs, leading to more effective reasoning.", "Ablation study.", "Table 4 presents the results of two ablation studies on the best performing models on the WOTD development set, which are the LSTM regressor and the BOE classifier.", "Both studies are conducted on the WOTD development set.", "To save space, we omit confidence intervals, but a table including those can be found in the appendix.", "Study A was conducted only on datapoints from the WOTD dataset with contextual information.", "We observe that for both models, removing the event text and using only the extracted contextual information leads to a change of -0.043 in τ for BOE and -0.071 for LSTM.", "This shows that the heuristics we propose for extracting CI are effective at retrieving relevant information.", "Study B was conducted on all datapoints from the WOTD dataset.", "We report the impact of removing tokens denoting years, dates and numbers from both the CI and the event text.", "We remove years using the method described in Section 4.1.", "We remove dates by removing any tokens within a DATE entity.", "We remove numbers using the like_num property of the spaCy tokenizer, which includes different forms that may be considered numerical (e.g., 1 and one).",
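A sketch of the two spaCy-based removal operations used in Study B; this is our illustration, written against spaCy's documented token attributes.

```python
import spacy

nlp = spacy.load("en_core_web_lg")

def remove_dates(text: str) -> str:
    doc = nlp(text)
    date_tokens = {tok.i for ent in doc.ents if ent.label_ == "DATE" for tok in ent}
    return " ".join(tok.text for tok in doc if tok.i not in date_tokens)

def remove_numbers(text: str) -> str:
    # like_num also covers spelled-out numbers such as "one".
    doc = nlp(text)
    return " ".join(tok.text for tok in doc if not tok.like_num)
```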
"Clearly, the removal of dates subsumes the removal of years, and we expect the removal of numbers to remove at least part of all dates, including years, alongside other date-unrelated numbers.", "The change in τ for the LSTM was -0.041, -0.042 and -0.051 when removing years, dates and numbers, respectively.", "BOE presents similar differences in performance when removing those features, with a change in τ of -0.045/-0.031/-0.054 when removing years/dates/numbers.", "These results support our hypothesis that substantial information about the time of an event is encoded in the vocabulary used, and not only in the time expressions.", "Human Performance.", "We compare our results to human performance on this task.", "Three participants were given 100 randomly selected events from the WOTD dataset and were asked to predict their years of occurrence, without using any contextual information.", "All participants consider themselves as having good knowledge of history, but are not history experts.", "On average, their error was 52.3 years.", "The participant who had the best results had a mean error of 34.6 years, which is only 3.8 years less than our best result on the WOTD dataset.", "In order to demonstrate the challenges put forth by the addressed task, we examine some events from the OTD development dataset on which our best performing models, the LSTM regressor and the BOE classifier, got significant prediction errors.", "We observe that some events contain words that are usually associated with a different period in time than the year the event occurred in.", "Table 4: Ablation study for the BOE and LSTM models (WOTD development set).
Study  Model         KT     EM   20Y   50Y   MAE
A      BOE           0.674  7.9  49.6  63.1  48.2
A      - event text  0.631  7.0  48.2  60.6  54.6
A      LSTM          0.765  1.8  50.2  78.8  39.0
A      - event text  0.694  1.3  39.9  69.7  50.6
B      BOE           0.668  9.1  55.3  71.0  50.7
B      - years       0.623  7.4  46.1  65.0  61.1
B      - dates       0.637  7.4  46.7  64.2  60.5
B      - numbers     0.614  8.0  46.3  64.2  63.1
B      LSTM          0.774  1.8  50.4  77.3  39.9
B      - years       0.733  1.3  41.0  68.5  48.3
B      - dates       0.732  1.4  42.5  70.1  47.7
B      - numbers     0.723  1.3  40.1  68.8  49.1", "For example, Portuguese expel Jesuits occurred in 1911, but most Jesuits-related events in our training data occurred in the 16th century.", "One of these events which is particularly similar to the above is English parliament expels Jesuits, which is dated to 1584.", "Probably for these reasons, the LSTM and BOE had similar outputs for this event: 1559 and 1581, respectively.", "Another example of such an event is Order of Merit instituted by King Edward VII, which occurred in 1902, but the word King normally appears in events dated to earlier centuries.", "The LSTM model's output for this event is 1527, and the BOE model's output is 1639.", "Both events had no CI extracted for them, and therefore the models had to rely on the words in the event description only.", "An example for which relevant CI was extracted but the models still erred substantially is the event All female jury hears case of Judith Catchpole accused of killing her child (acquit her) in Patuxent County, Maryland.", "This event is dated to 1656, but the BOE model's prediction for the event is 1957, and the LSTM model's prediction, 1873, is only slightly better.", "The contextual information extracted for this event was Upon her arrival she was accused of several crimes, resulting in a trial on September 22, 1656 in the General Provincial Court in Patuxent County, Maryland.", "The exact date of occurrence does appear in the extracted data, and still both models have a substantial prediction error.", "This is
"This is probably due to the fact that our training data contains many court- and jury-related events, where most events containing court are relatively recent (19th century and later), and almost all jury-related events are dated after 1900.", "In some cases, the extracted CI can mislead our models.", "For the event Scotland and France form an alliance, the beginnings of the Auld Alliance, against England, which occurred in 1295, the LSTM predicted the year 1659.", "Five sentences were extracted for this event, which contained the years 1603 and 1707.", "Another example is Over 250 years after their deaths, William Penn and his wife Hannah Callowhill Penn are made Honorary Citizens of the United States, which occurred in 1984.", "The extracted CI includes the exact true date of the event, but also includes information regarding the Penns' lives, and contains years ranging from 1680 to 1726.", "This is probably the cause of error for the BOE model, which predicts the year 1721, whereas the LSTM model may have been better able to filter the CI, predicting the year 1921.", "Errors can also arise from terms that are ambiguous between time periods.", "Queen Elizabeth is such a term: it can indicate an event from the 16th century, but also an event from the 20th/21st centuries.", "Indeed, we notice that the BOE model is confused by events related to Queen Elizabeth.", "For example, Francis Drake knighted by Queen Elizabeth I aboard Golden Hind at Deptford occurred in 1581, but the BOE model predicts the year 2013, even though the true target year appears in the extracted CI for the event: Queen Elizabeth I visited the royal dockyard on 4 April 1581 to knight the adventurer Francis Drake.", "Similarly, the event Ted Hughes is appointed British Poet Laureate by Queen Elizabeth II occurred in 1984, but the BOE model predicts the year 1579, which corresponds to Queen Elizabeth I.", "We note that for those two events the LSTM model gave better predictions (1566 for the first event and 1981 for the second), which may be related to the inherent difficulty BOE has in addressing multi-word expressions such as Queen Elizabeth I.", "Work on event ordering can largely be categorized into event ordering in context, which aims to order event instances within a given text or discourse and is tackled as part of the TempEval shared tasks (UzZaman et al., 2013), and lexical event ordering (Abend et al., 2015), which attempts to order event types by their prototypical temporal order.", "Somewhat in between these lines of work is cross-document event ordering (Minard et al., 2015), which orders events that are mentioned across different documents.", "However, this task does not rely on machine reading of external textual resources as we do here, and does not focus on historical events, which by their nature are described in a variety of (often incompatible) ways.", "A related line of work to ours was pursued in the context of information retrieval (IR).", "Jatowt et al. (2013) tackled the task of estimating the focus time of a given document.", "Focus time is defined as the time to which the main event addressed by the document refers.", "They do so by computing the association of words and time expressions, based on their co-occurrence, using a bag-of-words method.",
"Das et al. (2017) address the task of focus time prediction for short event descriptions, which resembles the task at hand.", "They do so by using cosine similarity to rank a set of candidate years for each event, with both events and years represented using word embeddings.", "In a similar vein, Morbidoni et al. (2018) find the focus time of short event descriptions by relying on year mention statistics in related Wikipedia articles and DBpedia entries.", "While these two works are related to our task in spirit, our work is not an instance of the event focus time (EFT) task.", "In fact, we believe the EFT task can be seen as a special case of the HEO task.", "This is evidenced by the approach of EFT systems, which exhibit traditional IR design and techniques, such as producing a ranking of candidate predictions for each document, and which are evaluated using ranking-specific metrics that preclude system designs such as predicting years by regression.", "As HEO subsumes EFT, we attempted to evaluate the performance of EFT systems on the HEO task, but have been unable to obtain code for either of the systems.", "We have also been unable to reimplement the systems: Das et al. (2017) leave implementation details unspecified, and Morbidoni et al. (2018) utilize a proprietary system.", "Another related line of work seeks to create timelines of temporal events by predicting their starting and ending points.", "McClosky and Manning (2012) address the problem of ensuring semantically consistent timelines by finding patterns in the ordering of endpoints of different event types, which adds a common-sense reasoning component to the system.", "Leeuwenberg and Moens (2018) construct a relative timeline of events directly, which allows them to circumvent typical pitfalls of pairwise classifiers, such as computationally intractable inference and the construction of globally inconsistent orderings (with cycles).", "Our work takes a similar approach but instead is able to construct an absolute timeline for the restricted domain of historical events.", "Within the domain of temporal text understanding, the extraction and normalization of temporal expressions may inform the task at hand.",
"For example, Kuzey et al. (2016) defined the task of tagging temporal expressions, which are named events or facts with temporal scope, such as second term of Angela Merkel.", "They used a rule-based system to detect such expressions in free text and map them to a knowledge base (KB) containing time scopes of temporal events and facts.", "This approach requires the existence of KB records containing time scopes for the events.", "In this paper, we argued that the task of predicting the time of historical events strikes a balance between being a focused task, with transparent evaluation and interpretable results, and presenting challenges that are not simple to overcome using standard NLP models.", "We outlined a procedure to extract the CI related to an event and compared two approaches for the task, using a bag of embeddings and an LSTM, showing that the latter achieves the best performance.", "Future work will explore the use of domain adaptation techniques to enhance performance where the domains of the CI and event text differ substantially.", "We thank the anonymous reviewers for helpful feedback.", "We would also like to thank Maximin Coavoux, Simone Teufel, and Ryan Cotterell for their help and comments.", "We gratefully acknowledge the support of Bloomberg (Cohen).", "This work was partially supported by the Israel Science Foundation (grant No. 929/17)." ]
[ "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "abstain", "method", "other", "other", "other", "method", "other", "other", "other", "other", "result", "result", "abstain", "other", "other", "other", "other" ]
[ "Adversarial Examples (AEs) generated by perturbing original training examples are useful in improving the robustness of Deep Learning (DL) based models.", "Most prior works generate AEs that are either unconscionable due to lexical errors or semantically and functionally deviant from original examples.", "In this paper, we present ReinforceBug , a reinforcement learning framework, that learns a policy that is transferable on unseen datasets and generates utility-preserving and transferable (on other models) AEs.", "Our experiments show that ReinforceBug is on average 10% more successful as compared to the state-of-the-art attack TextFooler.", "Moreover, the target models have on average 73.64% confidence in wrong prediction, the generated AEs preserve the functional equivalence and semantic similarity (83.38%) to their original counterparts, and are transferable on other models with an average success rate of 46%.", "Machine Learning (ML) models have attained remarkable success in several tasks such as classification and decision analytics.", "However, ML Models specifically Deep Learning (DL) based models, are often sensitive to Adversarial Examples (AEs).", "AEs consist of modified training data samples that preserve the intrinsic utilities of the ML solutions, but influence target classifier's predictions between original and modified inputs (Fass et al., 2019).", "Recent works (Fass et al., 2019; Li et al., 2020; Biggio and Roli, 2018) have demonstrated that", "(i) including AEs as a part of training data can enhance the robustness and generalization of the ML models, and", "(ii) these examples can be utilized to test the robustness of ML models and help understand their security vulnerabilities and limitations.", "Previous works on generating AEs have attained success in image (Biggio and Roli, 2018) and on a few conventional text classification tasks such as sentiment, text entailment and movie reviews (Li et al., 2018; Jin et al., 2020; Li et al., 2020).", "Nevertheless, generating AEs for discrete textual data is still a challenge (Jin et al., 2020).", "Utility Preservation Textual AEs require to satisfy task-specific constraints such as lexical rules (spellings or grammar), semantic similarity and functional equivalence to original examples.", "Yet, most of the current state-of-the-art methods do not satisfy these constraints, thereby generating imperceivable AEs for end-users (Zhang et al., 2020).", "A few recent works (Li et al., 2020; Wang et al., 2019; Li et al., 2018; Jin et al., 2020) have considered semantic similarity constraint, but other constraints have been barely explored (Jin et al., 2020).", "Knowledge Transferability Most prior works Alzantot et al. (2018); Jin et al. (2020); Wang et al. 
"Knowledge transferability.", "Most prior works (Alzantot et al., 2018; Jin et al., 2020; Wang et al., 2019) generate one-to-one, example-specific AEs, i.e., for a given example x, they create an example x′.", "Each instance x is considered independent in a corpus C, and no relationship between different instances in the corpus is assumed.", "Therefore, the knowledge gained by transforming an example into an AE is limited to that single example and is not reused on other examples.", "This process is both time-consuming and may not generalize the identified vulnerabilities of the target model.", "Word replacement strategy.", "Most prior works use a single word replacement strategy such as synonym substitution (Alzantot et al., 2018; Jin et al., 2020; Wang et al., 2019) or character perturbation (Gao et al., 2018) to generate AEs.", "This strategy has two main disadvantages:", "(i) AEs generated using a character-based replacement strategy contain many spelling errors that result in unnatural text; and", "(ii) in a synonym-based replacement strategy, transforming multiple words into their synonyms affects the fluency of the language, making it sound unnatural.", "For instance, Jin et al. (2020) generated the AE \"Jimmy Wales is a big, fucking idiot liar friggin nincompoop deception\".", "This AE sounds unnatural and is grammatically incorrect.", "Handling noisy datasets.", "Most prior works generated AEs for datasets such as Yelp (Yelp) and Fake News (Kaggle, 2018a), which do not contain many spelling mistakes or out-of-vocabulary words.", "However, human-generated natural text is prone to lexical errors.", "For example, tweets usually contain informal language, misspellings and unknown words.", "Prior works that considered a synonym substitution strategy (Jin et al., 2020; Wang et al., 2019; Alzantot et al., 2018; Li et al., 2020) cannot deal with such noisy datasets.", "We present ReinforceBug, which addresses the aforementioned limitations of prior works.", "Our code is available at https://bit.ly/2QOZDMT .", "Our main contributions are summarized as follows:", "1. We propose a reinforcement learning framework, ReinforceBug, that learns a policy to generate utility-preserving AEs under the black-box setting, with no knowledge about the target model architecture or parameters.", "2. We evaluate ReinforceBug on four state-of-the-art deep learning models for Email spam, Twitter spam, Toxic content and Review polarity detection, respectively.", "3. We investigate the transferability of our learned policy on a new dataset.",
"4. We also examine the transferability of our generated AEs on three other state-of-the-art deep learning models.", "Adversarial attacks are extensively studied in the computer vision domain (Biggio and Roli, 2018; Goodfellow et al., 2014).", "Early works in adversarial text attacks were inspired by Generative Adversarial Networks (GANs) (Wong, 2017; Zhao et al., 2018).", "Wong (2017) showed that GAN-based reinforcement learning algorithms become unstable after a few perturbations.", "Later, heuristic-based methods such as word removal (Ebrahimi et al., 2018), Out-Of-Vocabulary (OOV) words (Gao et al., 2018) and synonym replacement (Li et al., 2018; Jin et al., 2020; Alzantot et al., 2018) were proposed.", "Among these studies, DeepWordBug (Gao et al., 2018) generates AEs by randomly replacing a word in an example with an OOV word.", "This approach is practical in producing AEs efficiently; however, it generates AEs that can be detected by the end-user due to a large proportion of lexical errors.", "An attack framework named TextBugger (Li et al., 2018) was proposed to generate adversarial samples using a multi-level approach.", "It identified important words for each example and replaced them with optimal bugs.", "The approach considered both character-level and word-level transformations.", "Another recent attack, TextFooler (Jin et al., 2020), generates utility-preserving AEs by replacing an important word in an example with a grammatically equivalent synonym.", "The study also evaluates the generated AEs against a semantic similarity constraint.", "Definition 1. A Deep Neural Network (DNN) is a machine learning model that learns a function F: X → Y over training data, mapping from an input space X to a set of classes Y. F is then evaluated on testing data X′, predicting the output label y′ for each x′ ∈ X′.", "Definition 2. The Prediction Confidence Score PCS(x, y) of a model F depicts the likelihood of an input x ∈ X having a label y ∈ Y.", "A smaller PCS(x, y) suggests that F has low confidence that x has label y.", "Definition 3. Given a real example x having a label y, a utility-preserving AE against x is an input x′ = x + Δx with a minor perturbation Δx such that x′ satisfies a set of perturbation constraints P_const and F predicts an incorrect label for it with high PCS, i.e., y′ = F(x′) such that y′ ≠ y and PCS(x′, y′) exceeds a confidence threshold.", "Definition 4. A black-box attack is an attack where the attacker does not know the target model F's architecture, training data X or hyper-parameters.", "The attacker can only query F with input examples X′ and obtain the corresponding PCS(x, y).", "Figure 1: Overview of ReinforceBug.", "Definition 5. A non-targeted attack is an attack in which the adversary's goal is to maximize the misclassification rate of the model F on any generated AE, irrespective of its ground-truth label y,", "i.e., fooling the model F into misclassifying a spam email as a benign email or vice versa.", "Given a pre-trained target model F, we need to simulate a non-targeted (Definition 5) black-box (Definition 4) attack (Morris et al., 2020) to generate a set of utility-preserving AEs (Definition 3), Adv_exp, from a corpus C with N examples and corresponding target labels tg ∈ Y.", "Furthermore, our approach should learn a policy π(s, a) to perform perturbations on C such that the model generates utility-preserving AEs (Definition 3) that have semantic similarity above a threshold with the original example but a low perturbation rate (number of words perturbed) and few lexical (grammatical and spelling) errors.",
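As a concrete reading of Definition 2, the sketch below computes PCS from a black-box query that returns class logits; the dummy query_model stands in for the real target F and is purely our assumption:

    import numpy as np

    def query_model(x):
        # Stand-in for the black-box target model F: returns logits over two
        # classes (e.g., spam / benign); the attacker sees nothing else.
        rng = np.random.default_rng(abs(hash(x)) % (2 ** 32))
        return rng.normal(size=2)

    def pcs(x, y):
        # Prediction Confidence Score (Definition 2): probability F assigns label y to x.
        logits = query_model(x)
        probs = np.exp(logits - logits.max())  # numerically stable softmax
        probs /= probs.sum()
        return probs[y]

    # An AE x' succeeds when F(x') != y and pcs(x', F(x')) exceeds the confidence threshold.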
"Moreover, the policy should be transferable to unseen datasets.", "Figure 1 provides an overview of ReinforceBug.", "We model an attack as a reinforcement learning (Sutton and Barto, 2018) process consisting of three main components: an environment, a Proximal Policy Optimization (PPO) agent (Schulman et al., 2017) and an action space.", "Firstly, the environment state s_t at time t is processed as input by the agent, which determines an action a_t ∈ A to update s_t to the next state s_{t+1}, where A represents the action space (the set of valid actions given the state).", "Subsequently, the environment's actuator acts on the corpus state to construct candidate examples.", "These examples are then sent to the reward generator module, which is responsible for computing the reward r_{t+1} for action a_t.", "The reward generator applies the post-constraints Post ⊆ P_const, queries the target model F and obtains the scores of the candidate examples to calculate the reward of action a_t.", "The reward is sent back to the actuator, which determines a valid update to the corpus state as well as the next environment state s_{t+1}.", "The experience tuple ⟨s_t, a_t, r_{t+1}, s_{t+1}, va_{t+1}⟩ is sent to the agent and the agent model is updated.", "Here, va_{t+1} is the valid-action mask for the next state.", "va_{t+1} is then used by the agent to update the action space A and restrict the agent to selecting only valid actions in the new state s_{t+1}.", "Each of the modules is discussed below.", "Agent.", "We use a customized version of the Proximal Policy Optimization (PPO) (Tang et al., 2020) Reinforcement Learning (RL) agent with action-mask capability.", "PPO is an enhanced version of the Actor-Critic (AC) model (Grondman et al., 2012).", "In the AC architecture, the agent consists of a separate policy network and Q-value network.", "Both take the environment state s_t at each time step as input; the actor determines an action a_t among the set of valid actions A, and the critic yields a value estimate V_t(s_t).", "While the actor uses a gradient descent algorithm to learn a policy that maximizes the rewards R_t, the critic learns to estimate R_t by minimizing a temporal-difference loss.", "Further, the PPO algorithm avoids large and inefficient policy updates by constraining new policy updates to be as close to the current policy as possible.", "We selected this agent because our action and state spaces are substantially large, and to avoid enormous policy updates, which make the agent unstable (please refer to Schulman et al. (2017) for more details).", "Environment.", "The environment takes action a_t from the agent as input and outputs the experience tuple e_t, the AEs Adv_exp and a flag done indicating the success of the agent in achieving the goal.", "1. Corpus state (C_t): given by C_t = {E_1, E_2, ..., E_N}, where N is the number of examples in the corpus and E_i is the set of words W_i = {w_1, w_2, ..., w_n} of example i at time t.", "2. Score state (score_t): a vector representing the PCS (Definition 2) of the target model F on the examples E with ground-truth label tg in corpus C at time t.", "For instance, given an example E_i, score_t[i] = PCS(E_i, tg), where tg is the ground-truth label of E_i.", "3. Environment state (s_t): this state s_t = {w_1, w_2, ..., w_k} is observable by the agent.", "It consists of the k mutable important words in corpus C.", "4. Success rate (success_t): the proportion of utility-preserving AEs among all AEs generated by the agent at time t.",
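A minimal sketch of how the valid-action mask va_{t+1} described above can be enforced when sampling the agent's next action; this is our illustration, not the authors' implementation:

    import numpy as np

    def sample_valid_action(logits, valid_mask, rng):
        # logits: policy scores of shape (n_actions,); valid_mask: True where allowed.
        masked = np.where(valid_mask, logits, -np.inf)  # invalid actions get zero probability
        probs = np.exp(masked - masked.max())           # numerically stable softmax
        probs /= probs.sum()
        return rng.choice(len(logits), p=probs)

    rng = np.random.default_rng(0)
    mask = np.array([True, False, True])  # e.g., actions on an already-perturbed word invalidated
    print(sample_valid_action(np.array([0.2, 3.0, 0.1]), mask, rng))  # never returns action 1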
"Word selector.", "This component takes the corpus state C at t = 0 and the pre-constraints Pre ⊆ P_const as inputs.", "Pre are task-specific perturbation restrictions on specific entities in E_i.", "For example, spam messages mostly contain URLs, IP addresses, organization names and email addresses pointing to phishing websites.", "Perturbing these entities in the spam message can change the functional semantics of the message (Ranganayakulu and Chellappan, 2013).", "Hence, imposing pre-constraints ensures that a generated AE preserves functional equivalence to E_i after applying perturbations.", "To achieve this, we designed a CountVectorizer (scikit-learn) using a customized tokenizer.", "The tokenizer finds the list of immutable entities such as URLs and IP addresses in the text using regular expressions or named entity models (Florian et al., 2003).", "After that, these entities are segmented into immutable words Im_words using a word tokenizer and saved for each example, to be utilized later by the actuator module.", "For training ReinforceBug, our method first computes the important words from the training dataset as the state s_t and then learns a policy to identify the best actions a_t to transform s_t into a next state s_{t+1} such that the success of our attack is maximal.", "Our work extends the important-words component of Jin et al. (2020); however, instead of generating example-specific words, our module identifies corpus-specific important words.", "An important word is selected using a word importance score I_{w_idx}.", "I_{w_idx} is calculated as the sum of the prediction change over all k examples containing w_idx, before and after deleting w_idx; formally, I_{w_idx} = Σ_{j=1..k} (PCS(E_j, tg) − PCS(E_j without w_idx, tg)).", "The candidate words w_idx are drawn from the most frequent words in the training dataset vocabulary.", "If I_{w_idx} > 0, we consider w_idx an important word.", "The final list of all important words is considered as the state s_t.", "s_t and the mapping C_map, which maps each word to the corpus examples containing it, are sent back to the environment.", "For testing, the designed CountVectorizer is used to transform the testing data onto these selected words.", "Actuator.", "This module is responsible for executing an action a_t selected by the agent.", "Firstly, the actuator transforms action a_t into an action tuple ⟨w_idx, act_idx, rep_idx⟩, where w_idx, act_idx and rep_idx denote the index of the word to be replaced, the operation to be performed on that word and the index of the replacement word, respectively.", "Subsequently, the indexes E_idx of the examples containing the word w_idx are obtained by querying C_map.", "After that, the actuator examines score_t[k] (the score state) of each example k in E_idx.", "Only if score_t[k] is above the PCS threshold is example k selected to be perturbed.", "In this way, the examples for which AEs have already been found are not perturbed further, and other examples are given a chance.", "Once these examples are selected, the operation act_idx is applied to w_idx, which results in multiple replacement options.", "Table 1 provides the list of operations considered.", "For example, if the operation is Homoglyph and the word indexed by w_idx is \"solid\", then the possible replacements are \"so1id\", \"sol1d\", \"s0lid\" and \"5olid\".", "The new word w_new is selected by rep_idx.",
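A minimal sketch of the Homoglyph operation from the example above; the substitution table is an assumed subset, not the paper's full mapping:

    HOMOGLYPHS = {"l": "1", "o": "0", "i": "1", "s": "5"}  # assumed subset

    def homoglyph_variants(word):
        # One candidate per substitutable character, mirroring the "solid" example.
        variants = []
        for pos, ch in enumerate(word):
            sub = HOMOGLYPHS.get(ch.lower())
            if sub is not None:
                variants.append(word[:pos] + sub + word[pos + 1:])
        return variants

    print(homoglyph_variants("solid"))  # ['5olid', 's0lid', 'so1id', 'sol1d']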
"After w_new is selected to replace w_old, if w_old is not in the immutable word list of example k, then candidate AEs are generated by substituting w_old with w_new in the previously selected examples.", "In this way, functional equivalence is ensured before an AE is generated.", "The selected candidate examples are then sent to the reward generator.", "The reward generator module returns the reward of changing w_idx in the selected examples by applying act_idx and selecting rep_idx.", "The reward generator also outputs the AEs that satisfy all the post-constraints.", "Finally, the state is updated by replacing w_idx in s_t with the index of the new word w_new.", "All further actions on w_idx are then invalidated.", "This is done by setting va_{t+1} to False for all actions on w_idx in the action space.", "In this way, multiple actions on the same word cannot be performed in one training episode.", "success_{t+1} is updated, and the episode completes if success_{t+1} for the corpus state C_{t+1} has reached a specified threshold or all the words in the state have been updated.", "The module constructs an experience tuple e_t for the agent and returns e_t, C_{t+1}, score_{t+1} and success_{t+1} to the environment.", "Reward generator (RG).", "Algorithm 1 shows the pseudo-code of the RG module.", "RG takes the candidate examples, C_t, time t and the post-constraints as input, and outputs Adv_exp, the reward r_{t+1} of a_t and the updated corpus state C_{t+1}.", "ALGORITHM 1: Reward_Generator
 1  Input: original examples org, candidate examples cand, time t, Post ⊆ P_const
 2  Output: Adv_exp, reward r_{t+1}, C_{t+1}
 3  Initialize reward ← 0, Adv_exp ← {}
 4  Initialize updated_cand, rewards
 5  if t = 0 then
 6      updated_cand ← cand
 7      // query target model
 8      score_t ← target_model.query(cand)
 9  else
10      Initialize thresholds for spellerror, gramerror, Semantic ∈ Post
11      N′ ← total number of candidate examples
12      forall c_k ∈ cand do
13          sem ← Semantic(org_k, c_k)
14          spell ← spellerror(c_k) / wordlen
15          gram ← gramerror(c_k) / wordlen
16          if sem > the semantic threshold and spell ≤ the spelling threshold and gram ≤ the grammar threshold then
17              // query target model
18              new_sc_k ← query(c_k); prev_sc_k ← score_t[k]
19              change[k] ← (prev_sc_k − new_sc_k) / prev_sc_k
20              if change[k] > 0 then
21                  // update the corpus state
22                  C_{t+1}[E_idx[k]] ← c_k
23                  // update the score state
24                  score_{t+1}[k] ← new_sc_k
25              end
26              // compute reward
27              if score_t[k] < the PCS threshold then
28                  Adv_exp ← Adv_exp ∪ {c_k}
29                  // converged case (lines 29-31 truncated in the source; reconstructed from the description below)
30                  reward ← reward + (change[k] + sem − (spell + gram) − p_rate) / N′
31              else reward ← reward + change[k] / N′", "In this study, we have considered three main post utilities that the generated Adv_exp should preserve, apart from changing the output of the classifier.", "Firstly, the percentage of spelling and grammatical errors should not exceed the respective spelling and grammar thresholds, and the semantic similarity between the original example and the AE should be above the semantic threshold.", "If the candidate example meets all these utilities, then the target_model is queried to obtain the score of the candidate example.", "RG updates the corpus and score states for an example when the difference between its previous (before perturbation) and new (after perturbation) scores is greater than 0.", "Subsequently, the reward generator checks whether the candidate example's score has converged below the PCS threshold.", "In the converged case, the example is added to the list of valid AEs against the original example, and a reward r_{t+1} is computed as shown in line 30 of Algorithm 1.",
"Otherwise, the sum of the change in the scores of an example is added as a reward, as shown in line 31.", "In line 30, r_{t+1} represents the reward attained for successfully transforming an original example into a well-constrained AE.", "It is calculated as the change between the original and current score of the example k, rewarded by the semantic similarity with the original example and penalized by the lexical errors (spelling and grammatical) and the perturbation rate p_rate incurred in generating each AE.", "In this way, the agent learns to generate AEs with minimal perturbations and lexical errors and with higher PCS and semantic similarity.", "Lastly, r_{t+1} is normalized by the factor N′, so that the agent can learn the impact of action a_t on a single example irrespective of the size of the corpus.", "Target model.", "We have considered a black-box attack (Definition 4) against the target model F.", "F can be any DNN (Definition 1) that provides a PCS (Definition 2) score as an output.", "This section presents our experiment details.", "We studied the effectiveness of ReinforceBug on three noisy text classification tasks and one conventional one.", "Table 2 lists the statistics of the considered datasets.", "During target model training, we held out 30% of the training data as a validation set, and all parameters were tuned based on it.", "After that, the testing dataset was used to evaluate the performance of the model.", "For training and testing ReinforceBug, we split the dataset into a training (70%) and a testing (30%) set.", "A stratified split was applied to ensure that the class distribution of these sets remains consistent with the actual testing dataset.",
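The stratified 70/30 split can be reproduced with scikit-learn as sketched below; the toy data (and the 50/50 ratio, needed for so few samples) are ours:

    from sklearn.model_selection import train_test_split

    texts = ["cheap pills, click now", "meeting moved to 3pm",
             "win a free prize today", "lunch tomorrow?"]
    labels = ["spam", "ham", "spam", "ham"]

    train_x, test_x, train_y, test_y = train_test_split(
        texts, labels,
        test_size=0.5,       # the paper holds out 30%; 0.5 keeps this toy split valid
        stratify=labels,     # preserve the class distribution in both splits
        random_state=42,
    )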
"For each dataset, we trained four state-of-the-art models, namely Word Convolutional Neural Network (CNN) (Jain et al., 2018), Character CNN (Zhang et al., 2015), Word Bidirectional Long Short-Term Memory (BiLSTM) (Zhou et al., 2016) and Recurrent CNN (Lai et al., 2015), on the training set.", "However, we did not consider the recent state-of-the-art BERT (Devlin et al., 2019) model because we found that its performance was significantly low on our noisy datasets.", "Srivastava et al. (2020) also reported that the BERT model's performance notably degrades on noisy text datasets, and that further research is required to fine-tune BERT for noisy datasets.", "For training the considered models, we used an open-source GitHub repository (Lee).", "Table 3 shows the performance of each model on the testing set.", "From these models, we selected the models with the best accuracy as target models (highlighted in bold in Table 3) to train ReinforceBug.", "For the rest of the models trained on the same dataset, we studied the transferability of our generated AEs.", "Moreover, we tested ReinforceBug against an unseen dataset to study the transferability of our attack to other datasets.", "Lastly, for the get_semantic and get_synonym operations, the embeddings of Pennington et al. (2014) and Mrkšić et al. (2016) were used.", "We used the stable-baselines (Hill et al., 2018) reinforcement learning library to implement our PPO agent.", "We used Multilayer Perceptron (MLP) models as the actor and critic models.", "For training the agent, we used 30 episodes for each model.", "To ensure functional equivalence, we defined immutable tokens as pre-constraints using the named entity model provided by spaCy and regular expressions.", "For the Enron dataset, names (person or organization), IP addresses, email addresses and URLs were considered immutable entities; for the Twitter and Toxic datasets, URLs, #hashtags and @references; and for the Yelp dataset, names and URLs.", "Table 2: Training and testing dataset statistics for the target models.
Dataset                  Training data  Testing data  Avg length  Avg spelling errors  Avg grammar errors
Enron (Wiki)             28.6k          9.9k          244         3.07%                31%
Twitter (Kaggle, 2019)   8.1k           3.9k          15          10.67%               28.46%
Toxic (Kaggle, 2018b)    159.5k         12.4k         70          2.13%                25.60%
Yelp (Yelp)              560k           38k           139         0.81%                21.67%", "Table 3: Balanced accuracy of the target models on the test datasets.
Dataset   WordCNN  CharCNN  BiLSTM   RCNN
Enron     97.50%   96.50%   97.60%   98.30%
Twitter   93.44%   92.02%   93.72%   94.15%
Toxic     69.05%   86.62%   89.61%   88.56%
Yelp      94.74%   94.44%   95.60%   95.34%", "For semantic and lexical equivalence, we defined three post-constraints, i.e., the semantic similarity, which was calculated using the Universal Sentence Encoder (USE) (Cer et al., 2018) with a threshold of 0.60.",
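A minimal sketch of the USE-based semantic post-constraint with the 0.60 threshold; it assumes the encoder is loaded from its standard TensorFlow Hub location:

    import numpy as np
    import tensorflow_hub as hub

    use = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

    def semantically_close(original, candidate, threshold=0.60):
        a, b = use([original, candidate]).numpy()
        cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
        return cosine >= threshold  # post-constraint: similarity must stay above 0.60

    print(semantically_close("click here to claim your prize",
                             "cl1ck here to claim your prize"))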
from the semantic, functional and lexical constraints.", "In comparison with TextFooler (Jin et al., 2020) our method has significantly high success rate (i.e., on average more than 10%) for all the models.", "It is expected because TextFooler relies on synonym substitutes technique to generate AE, however, for noisy datasets with relative high lexical errors such as Twitter (Table 2) this method tends to fail.", "Lastly, TextFooler produces 5% more grammatical errors than ReinforceBug and similar to TextBugger, TextFooler also perturbs URLs, thus effecting the functional semantics of the generated AEs.", "These findings suggest that ReinforceBug produces effective and utility-preserving AEs (Def-inition 3).", "Table 6 shows the transferability of policy learned by ReinforceBug on unseen datasets for each task.", "For benchmarking the results are also compared with the state-of-the-art attacks TextBugger and Textfooler .", "It is evident from the results that ReinforceBug takes less time to generate adversarial samples as compared to the other models.", "It is because ReinforceBug utilizes the same important word vocabulary selected while learning and the agent has already explored and learned utility-preserving operations on them during training.", "Additionally, although TextFooler and TextBugger both perform example-specific perturbation and ReinforceBug might suffer from out-of-vocabulary words but still the success rate of ReinforceBug on test datasets is more than TextFooler attack and less than TextBugger which are aligned with our finding in section 5.1.", "Also, from Table 6 it is evident that our ReinforceBug produces utility-preserving AEs with high PCS and semantic similarity (i.e., on average more than 70% and 82% respectively) and comparatively low perturbation rate (on average 8.5%) for all test datasets.", "It suggests that the important words and transformation learned from the training datasets are transferable to unseen datasets and can be used to generalize the vulnerabilities of the target models.", "To determine whether AEs curated based on one model can also fool other models for the same task, we examined the transferability of AEs on other models (see Table 3 for the performance of these models).", "Table 7 shows the transferability result.", "AEs generated by our method are more transferable to other models in comparison to the state-of-the-art attacks.", "However, there is a moderate degree of transferability between models, and the transferability is higher for Twitter and Toxic detection task as compared to Email and Yelp movie-review classification task.", "Nevertheless, BiLSTM trained on Enron dataset (having 97.60% accuracy) offers more resilience to AEs generated by RCNN by limiting the success rate of the attack to (<17%) for all the attacks, while other models are highly vulnerable to the AEs.", "It signifies that vulnerabilities exploited by our AEs are task-specific and moderately model-independent.", "Figure 2a illustrates the proportion of each operation chosen by the ReinforceBug to generate utility-preserving AEs.", "We can see that get _ semantic , and get _ synonyms operations are most dominant for all the tasks.", "One reason could be that get _ semantic is deliberately designed for creating similar contextual adversarial texts without deleting the original important word, while get _ synonyms replaces the word with similar meaning word.", "That is why the semantic similarity remains intact without impacting the linguistic structure of the 
text.", "Other operations, i.e., addition , insertion , bitsquatting and omission that cause common typos are moderately chosen, however, generatehom is rarely selected by ReinforceBug .", "This happens because it can only replace a character with a visually similar character and produces fewer replacement options for a word than other operations.", "This reason is also valid of word _ month and word _ to _ num as only few words are either words representing month or numbers in the corpus vocabulary.", "To demonstrate the knowledge transferability, we visualize the identified important words according to the number of examples affected by their", "replacements in Figure 2b.", "Here, the words impacting more examples are represented with a larger font.", "Figure", "2b(i) shows that in emails, words such as click', attached', thanks', and deal' are more likely to affect the prediction of the target models by decreasing the spam intent to benign.", "Whereas for the Twitter dataset, Figure", "2b(ii) shows that the targeted RCNN model is more vulnerable to minor perturbation on words such as news', trump', obama', state' and police'.", "For toxic content detection, the model decision is manipulated for most examples with words like see', like', think'.", "Perturbing words like stupid' and idiot' decreases the toxicity of the text.", "Lastly, for the yelp dataset, changing words such as like', great' and best' increases the text's negative extent.", "Therefore, it is evident that models are vulnerable to these words irrespective of a specific example; instead, these vulnerabilities affect multiple examples in the corpus and are transferable to new datasets as seen in (section 5.2) as well as are transferable to other models (section 5.3).", "Overall, this study proposes ReinforceBug , a reinforcement learning-based framework to generate utility-preserving AEs against the state-of-the-art text classifiers under black-box settings.", "Extensive experiments demonstrate that it effectively generates utility-preserving AEs that are transferable to other models, and the learned policy is transferable to the unseen datasets.", "It identifies semantic concatenation and synonym substitution attacks as a significant threat to text-classifiers and suggests defences against these attacks should be explored in the future to improve their robustness.", "This work was supported with super-computing resources provided by the Phoenix HPC service at the University of Adelaide." ]
[ "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other" ]
[ "We study the problem of coarse-grained response selection in retrieval-based dialogue sys-tems.", "The problem is equally important with fine-grained response selection, but is less explored in existing literature.", "In this paper, we propose a C ontextual F ine-toC oarse (CFC) distilled model for coarse-grained response selection in open-domain conversations.", "In our CFC model, dense representations of query, candidate contexts and responses is learned based on the multi-tower architecture using contextual matching, and richer knowledge learned from the one-tower architecture (fine-grained) is distilled into the multi-tower architecture (coarse-grained) to enhance the performance of the retriever.", "To evaluate the performance of the proposed model, we construct two new datasets based on the Reddit comments dump and Twitter corpus.", "Extensive experimental results on the two datasets show that the proposed method achieves huge improvement over all evaluation metrics compared with traditional baseline methods.", "Given utterances of a query, the retrieval-based dialogue (RBD) system aims to search for the most relevant response from a set of historical records of conversations (Higashinaka et al., 2014; Yan et al., 2016; Boussaha et al., 2019).", "A complete RBD system usually contain two stages: coarse-grained response selection (RS) and fine-grained response selection (Fu et al., 2020).", "As shown in Figure 1, in coarse-grained RS stage, the retriever identifies a much smaller list of candidates (usually dozens) from large-scale candidate database (up to millions or more), then the ranker in fine-grained RS stage selects the best response from the retrieved candidate list.", "Worked during the internship at Microsoft Research Asia.", "Zhongyu Wei and Yeyun Gong are corresponding authors.", "Recent studies (Whang et al., 2020; Xu et al., 2020, 2021; Whang et al., 2021) pay more attention on fine-grained RS and various complex models are proposed to compute the similarities between the query and candidates for response selection.", "Although promising improvements have been reported, the performance of fine-grained stage is inevitably limited by the quality of the candidate list constructed.", "Therefore, a high-quality coarse-grained RS module is crucial, which is less explored in existing literature (Lan et al., 2020).", "In this paper, we focus on the task of coarse-grained response selection, i.e., dialogue response retrieval.", "There are two major challenges.", "First, different from general text matching tasks such as ad-hoc retrieval (Hui et al., 2018) or question answering (QA) retrieval (Karpukhin et al., 2020), keywords overlapping between context and response in dialogue are potentially rare, such as when a topic transition (Sevegnani et al., 2021) occurs in response.", "This makes it difficult to directly match the query with candidate responses.", "Second, compared with fine-grained RS, coarse-grained RS deals with much larger number of candidates.", "Therefore, it is impractical to apply complex matching model that jointly process query and response for the similarity computation like in fine-grained RS, due to the retrieval latency (traverse millions of candidates on-4865 line).", "Instead, the efficient BM25 system (Robert-son and Zaragoza, 2009) based on sparse representations is the mainstream algorithm in coarse-grained text matching.", "To mitigate the above mentioned two problems, we propose a C ontextual F ine-toC oarse (CFC) distilled model for 
coarse-grained RS.", "Instead of matching query with response directly, we propose a novel task of query-to-context matching in coarse-grained retrieval, i.e. contextual matching .", "Given a query, it is matched with candidate contexts to find most similar ones, and the corresponding responses are returned as the retrieved result.", "In this case, the potential richer keywords in the contexts can be utilized.", "To take the advantage of complex model and keep the computation cost acceptable, we distillate the knowledge learned from fine-grained RS into coarse-grained RS while maintaining the original architecture.", "For the evaluation, there is no existing dataset that can be used to evaluate our model in the setting of contextual matching, because it needs to match context with context during training, while positive pairs of context-context is not naturally available like context-response pairs.", "Therefore, we construct two datasets based on Reddit comment dump and Twitter corpus.", "Extensive experimental results show that our proposed model greatly improve the retrieval recall rate and the perplexity and relevance of the retrieved responses on both datasets.", "The main contributions of this paper are threefold: 1) We explore the problem of coarse-grained RS in open domain conversations and propose a Contextual Fine-to-Coarse (CFC) distilled model; 2) We construct two new datasets based on Reddit comment dump and Twitter corpus, as a new benchmark to evaluate coarse-grained RS task; 3) We construct extensive experiments to demonstrate the effectiveness and potential of our proposed model in coarse-grained RS. 2 Related Work Fine-grained Response Selection In recent years, many works have been proposed to improve the performance of fine-grained selection module in retrieval-based chatbots (Zhang et al., 2018; Zhou et al., 2018; Tao et al., 2019; Whang et al., 2019; Yuan et al., 2019).", "Owing to the rapid development of pre-trained language models (PLMs) (Radford et al., 2019), recent works (Gu et al., 2020; Whang et al., 2021; Sevegnani et al., 2021) achieve the state-of-the-art (SOTA) results by utilizing PLMs such as BERT (Devlin et al., 2018) to model cross-attention and complex intersection between the context and response.", "On the other hand, coarse-grained dialogue retrieval is an important but rarely explored field.", "Limited by efficiency, there are usually two methods for coarse-grained response selection, i.e., the sparse representations based method represented by BM25 (Robertson and Zaragoza, 2009), and the dense representations based method represented by dual-Encoder (Chidambaram et al., 2018; Humeau et al., 2019; Karpukhin et al., 2020; Lan et al., 2020; Lin et al., 2020).", "In coarse-grained response selection, there is a fixed candidate database containing a large number of context-response pairs.", "Formally, given a query , i.e., a new context, the goal is to retrieve Top-K most suitable responses for the query from the candidate database.", "We propose a contextual fine-to-coarse distillation framework for the task of coarse-grained RS.", "First, we formulate the problem as a task of contextual matching , i.e., match query with context instead response; Second, we utilize a multi-tower architecture to deal with the similarity computation of query and candidates in contextual matching; Third, we utilize knowledge distillation to leverage the deep interaction between query and response learned in one-tower architecture.", "An intuitive idea of coarse-grained 
"An intuitive idea for coarse-grained RS is to treat all responses as candidate documents and directly use the query to retrieve them; however, this non-contextual approach results in a quite low retrieval recall rate (Lan et al., 2020).", "Inspired by recent studies of context-to-context matching in fine-grained RS (Fu et al., 2020), we propose contextual matching in coarse-grained RS, which is to match the query with candidate contexts and return the responses corresponding to the most similar contexts.", "We consider three ways of contextual matching.", "Query-Context (QC).", "In QC matching, we treat contexts instead of responses as candidate documents.", "At run-time, we calculate the similarities between the query and candidate contexts, and the responses corresponding to the Top-K most similar contexts are returned as the retrieved results.", "The motivation for using QC matching is that similar contexts may also share similar responses.", "Query-Session (QS).", "A session represents the concatenated text of a context and its corresponding response (Fu et al., 2020), which we think is more informative than the context alone.", "In QS matching, we treat sessions as candidate documents and return the responses in the Top-K most similar sessions as the retrieved results.", "Decoupled Query-Session (DQS).", "Apart from QS matching, we also consider a decoupled way to match the query with candidate sessions.", "In DQS matching, we treat contexts and responses as independent candidate documents.", "Similarities between the query and contexts and between the query and responses are first calculated independently; the query-session similarity is then obtained as their weighted sum.", "QS and DQS matching are thus two different ways to calculate the query-session similarity.", "For the retriever to search large-scale candidate sets with low latency, neural retrievers are usually designed as (or limited to) multi-tower architectures (Figure 2).", "In multi-tower models, the query and the candidates are independently mapped to a common vector space by different encoders, where similarity can be calculated.", "After training, the embeddings of the large-scale candidates can be pre-calculated offline, and only the embedding of the query needs to be calculated online.", "In this way, fast sublinear-time approximation methods such as approximate nearest neighbor search (Shrivastava and Li, 2014) can be utilized to search for the Top-K vectors most similar to the query, which achieves an acceptable retrieval latency during inference.", "For QC and QS matching, a two-tower architecture is adopted.", "Taking QS matching as an example (Figure 2(a)), the dense session encoder E_S(·) maps any candidate session to a real-valued embedding vector in a d-dimensional space, and an index is built over all N session vectors for retrieval.", "At run-time, a different dense query encoder E_Q(·) maps the query to a d-dimensional vector and retrieves the k candidate sessions whose vectors are closest to the query vector.", "Following Karpukhin et al. (2020), we use the dot product of vectors as the similarity between the query and a candidate session.", "For DQS matching, dense representations of the query, context and response are calculated independently; the architecture is thus designed as a three-tower model with three encoders: the query encoder E_Q(·), the context encoder E_C(·) and the response encoder E_R(·) (Figure 2(b)).", "Similarly, context and response vectors are calculated and cached offline, and two indexes are built for retrieving them.",
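A minimal sketch of the offline index / online query pattern with inner-product (dot-product) search, assuming FAISS and pre-computed encoder outputs; dimensions and names are ours:

    import faiss
    import numpy as np

    d = 128  # embedding dimension
    session_vecs = np.random.rand(100000, d).astype("float32")  # E_S(session), cached offline

    index = faiss.IndexFlatIP(d)   # inner product = dot-product similarity
    index.add(session_vecs)        # built once, offline

    query_vec = np.random.rand(1, d).astype("float32")  # E_Q(query), computed online
    scores, ids = index.search(query_vec, 30)  # Top-K most similar sessions
    # The responses attached to sessions ids[0] form the coarse-grained candidate list.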
"The final similarity between the query and a session is then the weighted sum of the query-context and query-response dot products.", "The weighting coefficient can be adjusted to determine whether the matching is biased towards the context or towards the response.",
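The decoupled DQS score can then be sketched as below; the symbol alpha for the weighting coefficient and its direction of bias are our notation:

    import numpy as np

    def dqs_similarity(q_vec, c_vec, r_vec, alpha=0.5):
        # Weighted sum of query-context and query-response dot products;
        # larger alpha biases matching towards the context (our convention).
        return alpha * np.dot(q_vec, c_vec) + (1.0 - alpha) * np.dot(q_vec, r_vec)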
In the multi-tower architecture, the query and the candidates are represented by their embeddings independently, which may cause a loss of information, and their monotonous mode of interaction (the inner product) further limits the model's capability (Lin et al., 2020). Compared with a multi-tower model, a one-tower model takes the query and the candidate as one concatenated input and allows cross attention between query and candidate in the self-attention layers. Despite having fewer parameters, one-tower models have been shown to learn more informative representations than multi-tower models, and they are therefore preferred in fine-grained RS (Yang and Seo, 2020). To leverage the richer expressiveness learned by the one-tower model, we distill its knowledge into the multi-tower model to enhance the retriever.

Before distillation, we need to train teacher models based on the one-tower architecture. Take the training of the teacher model for QS matching as an example. A single encoder is trained to distinguish whether the query and the session are relevant (positive); the form is exactly the same as the next sentence prediction (NSP) task in BERT (Devlin et al., 2018) pre-training. Formally, we are given a training set D = {(q_i, s_i, l_i)}_{i=1}^{N}, where q_i is the query, s_i is the candidate session and l_i ∈ {0, 1} denotes whether q_i and s_i form a positive pair. Specifically, given a query q and a candidate session s, the encoder obtains a joint representation of the concatenated text of q and s and then computes a similarity score through a linear layer; the training objective is the binary cross-entropy loss.

We summarize the main difference between one-tower and multi-tower models as follows: the one-tower model is more expressive, but it is less efficient and cannot handle large-scale candidate sets. The main reason is that computing similarity scores from joint features, rather than via the inner product, rules out offline caching of candidate representations. For a new query, the similarities with all candidates can only be calculated by traversal. This huge latency makes it impossible to use a one-tower model for coarse-grained response retrieval.

To leverage the expressiveness of the one-tower model nonetheless, we propose fine-to-coarse distillation, which transfers the knowledge of the one-tower model while keeping the multi-tower structure unchanged, thereby improving the performance of the retriever. Take the two-tower student model (denoted S) for QS matching as an example, and suppose we have trained the corresponding one-tower teacher model (denoted T). For a given query q, suppose there is a list of sessions {s^+, s_1^-, ..., s_n^-} with the corresponding label y = (1, 0, ..., 0) ∈ R^{n+1}, i.e., one positive session and n negative sessions. We denote the similarity score vector of the query-session pairs computed by the student model S (Equation 2) as z_S ∈ R^{n+1}; the objective in Equation 1 is then equivalent to minimizing the Kullback-Leibler (KL) divergence (Van Erven and Harremoës, 2014) between the two distributions softmax(z_S) and y, where the softmax function turns the score vector into a probability distribution.

The one-hot label y treats every negative sample equally, while the similarity between the query and each negative sample actually differs. To learn more accurate labels, we further use the teacher model T to calculate the similarity score vector between q and the candidate sessions, denoted z_T ∈ R^{n+1}. We then replace the original training objective with minimizing the KL divergence between the two distributions softmax(z_S) and softmax(z_T) (Figure 1), where a temperature parameter is applied inside the softmax function to avoid saturation. Fine-to-coarse distillation thus pushes the student model (multi-tower) to fit the label predicted by the teacher model (one-tower) as a soft target instead of the original one-hot label. By fitting the teacher's predicted label, the multi-tower model can learn a more accurate similarity score distribution from the one-tower model while keeping its structure unchanged.
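A compact sketch of this distillation objective is shown below, assuming the student and teacher score vectors z_S and z_T over the n+1 sessions have already been computed; the temperature value is illustrative and this is a generic knowledge-distillation loss rather than the authors' exact code.

```python
import torch
import torch.nn.functional as F

def distill_loss(z_student, z_teacher, tau=2.0):
    """KL divergence between tempered teacher and student distributions.

    z_student, z_teacher: (batch, n + 1) similarity score vectors.
    tau: temperature applied inside the softmax to avoid saturation.
    """
    log_p_student = F.log_softmax(z_student / tau, dim=-1)
    p_teacher = F.softmax(z_teacher / tau, dim=-1)
    # kl_div expects log-probabilities for the student distribution;
    # classic distillation recipes often rescale the result by tau**2
    # to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")
```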
To evaluate the performance of the proposed model, we construct two new datasets based on the Reddit comments dump (Zhang et al., 2019) and a Twitter corpus. We create a training set, a multi-context (MC) test set and a candidate database for Reddit and Twitter respectively. For Reddit, we create an additional single-context (SC) test set. The motivation for these settings is explained in Section 5.3. The size of the candidate database is one million for Twitter and ten million for Reddit, which is very challenging for response retrieval. Table 1 shows the detailed statistics. We use exactly the same steps to build the datasets for Reddit and Twitter, and similar datasets can be built from other large dialogue corpora in the same way.

MC test set  For each response r with multiple contexts, we select one context c from all of its corresponding contexts C_r to construct a context-response (CR) pair, and put the other contexts (denoted C̄_r) back into the database. Our MC test set consists of these CR pairs. Each response in the MC test set has multiple contexts, which ensures that there exist other contexts in the database that also correspond to this response, so the retrieval recall rate can be computed on the MC test set.

SC test set  We create another test set (SC) for the Reddit dataset. Contrary to the MC test set, each response in the SC test set has only one context, i.e., there is no context in the database that exactly corresponds to the response. Obviously, the retrieval recall rate is invalid (always zero) on the SC test set; we introduce other methods to evaluate it in Section 5.2. The SC test set is a supplement to the MC test set, allowing us to evaluate the quality of retrieved responses for such "unique" contexts.

Candidate database  To accommodate the different retrieval methods, the candidate database is designed with three fields, namely context, response and session. Our candidate database consists of random context-response pairs, except those in the MC and SC test sets. Besides, as mentioned above, the unselected context-response pairs (C̄_r) are deliberately merged into the database.

Train set  The construction of the training set is intuitive and similar to that of the test sets. It consists of responses and their corresponding multiple contexts. Formally, the training set can be denoted as D = {(r_i, c_{i,1}, ..., c_{i,q})}_{i=1}^{N}, where r_i is a response and {c_{i,1}, ..., c_{i,q}} are all contexts with response r_i; q depends on r_i, and q ≥ 2. It is worth noting that there is no overlap between the contexts in the database and the contexts in the training set, which prevents potential data leakage during training from inflating the evaluation metrics. The details of dataset construction are given in Appendix A.
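The grouping logic behind the MC test set can be summarized in a short sketch. This is a simplified reconstruction under the assumption that the corpus is available as (context, response) pairs; the function and variable names are hypothetical, and details such as filtering (Appendix A) are omitted.

```python
import random
from collections import defaultdict

def build_mc_split(pairs, num_test=1000, seed=0):
    """pairs: list of (context, response) tuples from the dialogue corpus.

    Returns the MC test set (one CR pair per sampled response) and the
    remaining pairs, which go into the candidate database / training set.
    """
    by_response = defaultdict(list)
    for context, response in pairs:
        by_response[response].append(context)

    rng = random.Random(seed)
    # only responses with multiple contexts qualify for the MC test set
    multi = [r for r, cs in by_response.items() if len(cs) >= 2]

    mc_test, remainder = [], []
    for r in rng.sample(multi, min(num_test, len(multi))):
        contexts = by_response.pop(r)
        c = contexts.pop(rng.randrange(len(contexts)))   # selected context
        mc_test.append((c, r))
        remainder.extend((c_bar, r) for c_bar in contexts)  # put C̄_r back

    for r, cs in by_response.items():                    # untouched pairs
        remainder.extend((c, r) for c in cs)
    return mc_test, remainder
```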
5 Experiments

We conduct extensive experiments on the constructed datasets. In this section, we present the experimental settings, evaluation metrics, model performance and human evaluation to demonstrate the effectiveness of the proposed models.

5.1 Compared Models

As baselines, we select BM25 (Robertson and Zaragoza, 2009) as the sparse-representation method, which is widely used in real-world text matching scenarios. Based on the BM25 system and the two matching methods (QC and QS matching), two retrievers can be obtained, denoted BM25-QC and BM25-QS respectively. As dense-representation methods we choose multi-tower models: the bi-encoder-based two-tower models for QC and QS matching (denoted BE-QC and BE-QS) and the tri-encoder-based three-tower model for DQS matching (denoted TE-DQS). In addition, to demonstrate the advantages of contextual matching, we also report the results of query-response (QR) matching; two such retrievers are built on the BM25 system and the two-tower model (denoted BM25-QR and BE-QR). There are three variants of our proposed CFC models: the distilled versions of BE-QC, BE-QS and TE-DQS, called CFC-QC, CFC-QS and CFC-DQS respectively. The distillation of each student model requires training the corresponding teacher model; in particular, the distillation from TE-DQS to CFC-DQS requires two teacher models, because similarities for both query-context and query-response pairs need to be calculated. We summarize the details of the compared models and provide training details in Appendix B.

5.2 Evaluation Metrics

Following previous work (Xiong et al., 2020; Karpukhin et al., 2020), Coverage@K is used to evaluate whether the Top-K retrieved candidates include the ground-truth response. It is equivalent to the recall metric R_N@K often used in fine-grained RS, where N is the size of the candidate database. However, Coverage@K is only suitable for evaluating the MC test set, and it cannot assess the overall retrieval quality due to the one-to-many relationship between context and response. As a supplement, we propose two automated evaluation metrics based on pre-trained models: Perplexity@K and Relevance@K. For the retrieved Top-K responses, DialoGPT (Zhang et al., 2019) is used to calculate the conditional perplexity of each retrieved response given the query. DialoGPT is a language model pre-trained on 147M multi-turn dialogues from Reddit discussion threads and is thus well suited for evaluating our Reddit dataset. Perplexity@K is the average perplexity of the Top-K retrieved responses. In addition to perplexity, we also evaluate the correlation between the query and the retrieved responses. We use DialogRPT (Gao et al., 2020), which is pre-trained on large-scale human feedback data with the human-vs-rand task, predicting how likely it is that a response corresponds to the given context rather than being a random response. Relevance@K is the average predicted correlation between the query and the Top-K retrieved responses. Perplexity@K and Relevance@K are averages over all Top-K retrieved responses, so they reflect the overall retrieval quality.
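A minimal sketch of the recall-style metric follows; it assumes retrieval results are available as ranked lists of candidate ids, which is an assumption about data layout rather than the authors' evaluation code. Perplexity@K and Relevance@K are then plain means of the per-response DialoGPT/DialogRPT scores over each Top-K list, so they are omitted here.

```python
def coverage_at_k(retrieved, gold, ks=(1, 20, 100, 500)):
    """retrieved: one ranked list of candidate ids per query;
    gold: the ground-truth response id per query.
    Returns {K: fraction of queries whose gold response is in the Top-K}.
    """
    n = len(gold)
    return {
        k: sum(g in ids[:k] for ids, g in zip(retrieved, gold)) / n
        for k in ks
    }
```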
5.3 Overall Performance

We present the main results in Table 2 and Table 3 and discuss model performance from multiple perspectives.

Table 2: Automated evaluation metrics on the Reddit test sets. For both the MC and SC test sets we report Perplexity@1/20 and Relevance@1/20; for the MC test set we additionally report Coverage@1/20/100/500. For Coverage@K and Relevance@K we report the numerator of the percentage (larger is better); for Perplexity@K, smaller is better.

                          MC Test Set                                               SC Test Set
               Coverage@K                  Perplexity@K     Relevance@K    Perplexity@K     Relevance@K
Retriever      @1    @20   @100   @500     @1      @20      @1     @20     @1      @20      @1     @20
Gold           -     -     -      -        205.7   -        73.1   -       181.8   -        82.0   -
Contextual matching
BM25-QC        1.1   3.9   5.7    7.8      210.5   217.9    61.5   53.5    208.3   217.5    60.6   52.1
BM25-QS        0.9   3.6   5.8    8.3      207.7   214.2    80.0   73.9    200.0   208.3    81.6   74.1
BE-QC          1.3   5.3   8.1    12.3     205.4   211.5    81.3   75.8    194.4   203.2    82.9   78.3
BE-QS          1.6   5.9   11.8   20.4     200.1   206.1    85.0   80.2    190.9   199.8    85.3   80.6
TE-DQS         1.5   5.5   9.7    18.1     201.3   207.5    84.8   79.8    190.5   198.2    85.5   80.4
CFC-QC         2.9   6.5   9.1    13.0     199.5   208.9    84.9   78.6    187.5   196.3    86.2   80.8
CFC-QS         4.2   7.8   13.1   21.3     194.8   203.1    87.8   82.8    184.3   193.1    88.3   83.4
CFC-DQS        3.7   7.3   12.7   19.4     196.5   205.3    86.9   81.9    184.8   192.6    88.1   83.3
Non-contextual matching
BM25-QR        0.2   0.7   1.3    2.4      214.2   219.2    60.3   52.9    202.8   214.5    70.4   62.7
BE-QR          0.2   0.8   1.5    2.6      207.2   213.4    72.8   67.2    198.1   206.5    78.2   71.4

Table 3: Automated evaluation metrics on the Twitter MC test set (Coverage@1/20/100/500).

Retriever    Top-1   Top-20   Top-100   Top-500
BM25-QC      16.2    28.5     35.7      42.9
BM25-QS      16.3    28.3     35.1      42.8
BE-QC        19.6    36.2     46.4      56.5
BE-QS        22.1    38.9     49.7      60.2
TE-DQS       21.5    38.4     49.5      60.4
CFC-QC       24.2    39.1     48.6      58.2
CFC-QS       28.8    43.7     52.8      62.6
CFC-DQS      28.2    43.3     52.5      61.9

Dense vs. sparse  The performance of the dense retrievers far exceeds that of the BM25 system, which shows that the rich semantic information in PLMs, together with additional training, can boost retriever performance. For example, compared with the BM25 system, the best undistilled dense retriever (BE-QS) achieves obvious improvements on all three metrics. For Coverage@K, the Top-500 recall rates of BE-QS on the MC test sets of Reddit and Twitter increase by 12.1% and 17.4% absolute over BM25-QS. For Perplexity@K, the Top-20 average perplexity of BE-QS on the MC and SC test sets of Reddit is reduced by 8.1 and 8.5 absolute compared with BM25-QS. For Relevance@K, the Top-20 average relevance of BE-QS on the MC and SC test sets of Reddit increases by 6.3% and 6.5% absolute over BM25-QS. Coverage@K measures the retriever's ability to retrieve the gold response, while Perplexity@K and Relevance@K measure the overall retrieval quality. Our results show the consistency of the three metrics: the recall rate and the overall retrieval quality are positively correlated.

Matching method  Compared with contextual matching, query-response (QR) matching has a much lower retrieval recall rate, which is also verified in Lan et al. (2020). We think this is because a response is usually a short, single-sentence text that contains insufficient information, and few of its keywords may overlap with the query. It is therefore important to consider contextual matching in the RBD system. Compared to QC matching, QS and DQS matching should be encouraged in practice due to the additional information provided by the response. However, the BM25 system cannot make good use of the response information, as BM25-QS shows no obvious advantage over BM25-QC on either the Reddit or the Twitter dataset. In contrast, dense retrieval models can use the response effectively; for example, BE-QS outperforms BE-QC by a large margin of 7.9% absolute in Top-500 response retrieval recall on the Reddit MC test set. Between QS and DQS matching there is little difference in performance; especially on the SC test set of Reddit and the MC test set of Twitter, the difference is minimal. One potential advantage of DQS is that it can utilize positive query-response pairs, whose number is much larger than that of positive query-context pairs.

Distillation benefit  We further focus on the performance gain from fine-to-coarse distillation. The distilled models achieve obvious improvements on all three metrics. A clear pattern is that the distilled models obtain larger improvements for smaller K. Taking the Twitter dataset as an example, the Top-500 retrieval recall rates of the CFC models increase by 1.5 to 2.4 points after distillation, while the Top-1 retrieval recall rates increase by 4.6 to 6.7 points. On Perplexity@K and Relevance@K, our CFC models show similarly consistent improvements.
The significant improvement in retrieval recall at small K is especially beneficial to fine-grained response selection, because it gives the ranker more opportunity to choose a good response while examining fewer candidates. These results indicate that our student models benefit from inheriting fine-grained knowledge from the teacher models. To demonstrate the performance gains after distillation more clearly, we provide their specific values in Table 8 in Appendix C.

Difference between Reddit and Twitter  Since DialoGPT and DialogRPT are not pre-trained on Twitter data, Perplexity@K and Relevance@K are not suitable for evaluating the Twitter dataset; for the same reason, we do not build an SC test set for Twitter. Compared to Twitter, the Reddit dataset we use is much larger, contains more typical multi-turn conversations and poses a significantly higher retrieval difficulty. The Top-500 retrieval recall rate on Twitter reaches 60%, while on Reddit it only reaches about 20%, which indicates that coarse-grained response retrieval in open-domain conversation remains very challenging.

6 Further Analysis

6.1 Parameter Sharing

Sharing parameters in a dual-encoder structure is common practice. As shown in Figure 2, sharing parameters between the encoders in the dotted boxes may be beneficial. We try parameter-sharing settings for the BE-QC and TE-DQS models, adding two sets of experiments on the Reddit MC test set, shown in Table 4.

Table 4: Impact of parameter sharing on model performance (Coverage@K on the Reddit MC test set).

Retriever        Top-1   Top-20   Top-100   Top-500
BE-QC            1.31    5.28     8.12      12.26
  w/ sharing     1.29    5.26     8.12      12.26
TE-DQS           1.47    5.52     9.74      18.12
  w/ sharing     1.49    5.51     9.73      18.11

The results show that sharing parameters has little impact on Coverage@K. We can therefore share encoder parameters to reduce model complexity with little loss of performance. Our explanation is as follows: the sampling strategy (with replacement) creates a certain probability that the query and the context are exactly the same, so the multi-tower model can learn that two identical samples are positive samples of each other even when the encoder parameters are not shared.

6.2 Effect of Database Size

We examine the impact of the candidate database size on model performance. For database sizes from one million to ten million, we compare the Coverage@500 of BM25-QS, BE-QS and CFC-QS on the Reddit MC test set.

[Figure 3: The impact of database size (1M to 10M candidates) on the Coverage@500 metric of BM25-QS, BE-QS and CFC-QS.]

Coverage@500 shows a slow downward trend as the database size increases (e.g., from 36.3 to 21.3 for CFC-QS, from 34.1 to 20.4 for BE-QS and from 13.3 to 8.3 for BM25-QS between one and ten million candidates). Increasing the size of the database does not make model performance drop rapidly, which shows the effectiveness and robustness of our models.

6.3 Human Evaluation

To further evaluate and compare our models, we conduct a human evaluation experiment.
We randomly select 1000 queries from the MC and SC test sets (500 each) of the Reddit dataset and retrieve the Top-1 response with the BM25-QS, BE-QS and CFC-QS models respectively. Three crowd-sourcing workers are asked to score the responses. For each query, the annotator strictly ranks the retrieved responses of the three models. We report the average rank scores (between 1 and 3, smaller is better) and the winning rate in pairwise comparisons. Every two annotators share a certain number (about 200) of overlapping annotated samples. To evaluate inter-rater reliability, we adopt Cohen's kappa coefficient (Kraemer, 2014). Table 5 and Table 6 report the average ranking score of each model and the pairwise comparisons between models respectively.

Table 5: Human average rank scores of BM25-QS, BE-QS and CFC-QS.

Retriever   Avg. Rank   Cohen's Kappa
CFC-QS      1.448       0.728
BE-QS       2.056       0.647
BM25-QS     2.494       0.626

Table 6: Human pairwise comparison of BM25-QS, BE-QS and CFC-QS.

                       Win     Loss    Cohen's Kappa
CFC-QS vs. BE-QS       0.747   0.253   0.634
CFC-QS vs. BM25-QS     0.816   0.184   0.672

CFC-QS obtains the best average ranking score and beats BE-QS and BM25-QS in most cases (74.7% to 81.6%), which indicates that CFC-QS has a clear advantage in Top-1 retrieval. All Cohen's kappa coefficients are between 0.6 and 0.7, indicating that the annotators reach moderate agreement. The human evaluation further verifies the performance improvement that distillation brings to the model. We select several examples with human evaluation as a case study and present them in Appendix D.

6.4 Retrieval Efficiency

We compare the retrieval latency of BM25-QS and BE-QS on the Reddit MC test set, representing the efficiency of the sparse and the dense retriever respectively. We fix the batch size to 32 and retrieve the Top-100 most similar candidates. With the help of a FAISS index, the average retrieval time per batch for BE-QS is 581.8 ms. In contrast, the average retrieval time of the BM25 system using a file index is 1882.6 ms, about three times that of BE-QS. This indicates that the dense retriever also has an advantage in retrieval efficiency. The relative disadvantage of the dense retriever is that it must compute the embeddings of the entire candidate database and build the FAISS index, which is quite time-consuming: it takes about 9 hours for BE-QS to process 10 million candidates with 8 GPUs, while building a BM25 index takes only about 10 minutes. Since distillation does not change the structure of the retriever, it does not affect retrieval efficiency; the cost of distillation lies mainly in training the teacher models and in the extensive forward computation during the distillation process.
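For reproducibility, a latency comparison of this kind could be measured roughly as sketched below, reusing the FAISS index from the earlier sketch; this is a hypothetical helper rather than the authors' benchmarking script, and absolute numbers will of course depend on hardware.

```python
import time
import numpy as np

def mean_batch_latency_ms(index, query_embs, batch_size=32, k=100):
    """Average wall-clock FAISS search time per batch of queries.

    index:      a FAISS index over the candidate embeddings
    query_embs: (num_queries, d) float32 array of query embeddings
    """
    times = []
    for start in range(0, len(query_embs), batch_size):
        batch = np.ascontiguousarray(query_embs[start:start + batch_size])
        t0 = time.perf_counter()
        index.search(batch, k)
        times.append(time.perf_counter() - t0)
    return 1000.0 * sum(times) / len(times)  # milliseconds per batch
```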
7 Conclusion

In this paper, we propose a Contextual Fine-to-Coarse (CFC) distilled model. In the CFC model, we adopt matching on both query-response and query-context pairs. Considering retrieval latency, we use a multi-tower architecture to learn dense representations of queries, responses and their corresponding contexts. To further enhance the performance of the retriever, we distill the knowledge learned by the one-tower architecture (fine-grained) into the multi-tower architecture (coarse-grained). We construct two new datasets based on the Reddit comments dump and a Twitter corpus, and extensive experimental results demonstrate the effectiveness and potential of our proposed model. In future work, we will further explore how the enhancement of coarse-grained RS can help fine-grained RS.

Acknowledgments

This work is partially supported by the Natural Science Foundation of China (No. 6217020551, No. 61906176), the Science and Technology Commission of Shanghai Municipality (Grants No. 20dz1200600, 21QA1400600, GWV-1.1, 21511101000) and Zhejiang Lab (No. 2019KD0AD01).

Ethical Statement

In this paper, several ethical considerations deserve discussion. The datasets we created are derived from large dialogue corpora that are publicly available on the Internet, and we strictly followed the platforms' policies and rules when obtaining data from web platforms. We did not use any author-specific information in our research. Large online dialogue corpora may include biases, such as political and social bias, and our model might have inherited some forms of these biases. To limit them as much as possible, we filtered controversial articles and removed data with offensive information where possible.

References

Andrzej Białecki, Robert Muir, Grant Ingersoll, and Lucid Imagination. 2012. Apache Lucene 4. In SIGIR 2012 Workshop on Open Source Information Retrieval, page 17.

Basma El Amel Boussaha, Nicolas Hernandez, Christine Jacquin, and Emmanuel Morin. 2019. Deep retrieval-based dialogue systems: A short review. arXiv preprint arXiv:1907.12878.

Muthuraman Chidambaram, Yinfei Yang, Daniel Cer, Steve Yuan, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Learning cross-lingual sentence representations via a multi-task dual-encoder model. arXiv preprint arXiv:1810.12836.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Zhenxin Fu, Shaobo Cui, Mingyue Shang, Feng Ji, Dongyan Zhao, Haiqing Chen, and Rui Yan. 2020. Context-to-session matching: Utilizing whole session for response selection in information-seeking dialogue systems. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1605-1613.

Xiang Gao, Yizhe Zhang, Michel Galley, Chris Brockett, and Bill Dolan. 2020. Dialogue response ranking training with large-scale human feedback data. arXiv preprint arXiv:2009.06978.

Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, and Xiaodan Zhu. 2020. Speaker-aware BERT for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 2041-2044.

Ryuichiro Higashinaka, Kenji Imamura, Toyomi Meguro, Chiaki Miyazaki, Nozomi Kobayashi, Hiroaki Sugiyama, Toru Hirano, Toshiro Makino, and Yoshihiro Matsuo. 2014. Towards an open-domain conversational system fully based on natural language processing. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 928-939.

Kai Hui, Andrew Yates, Klaus Berberich, and Gerard de Melo. 2018. Co-PACRR: A context-aware neural IR model for ad-hoc retrieval. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 279-287.

Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring. arXiv preprint arXiv:1905.01969.

Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data.

Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906.

Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Helena C. Kraemer. 2014. Kappa coefficient. Wiley StatsRef: Statistics Reference Online, pages 1-4.

Brian Kulis et al. 2012. Metric learning: A survey. Foundations and Trends in Machine Learning, 5(4):287-364.

Tian Lan, Xian-Ling Mao, Xiao-yan Gao, and He-Yan Huang. 2020. Ultra-fast, low-storage, highly effective coarse-grained selection in retrieval-based chatbot by using deep semantic hashing. arXiv preprint arXiv:2012.09647.

Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. 2020. Distilling dense representations for ranking using tightly-coupled teachers. arXiv preprint arXiv:2010.11386.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.

Stephen Robertson and Hugo Zaragoza. 2009. The Probabilistic Relevance Framework: BM25 and Beyond. Now Publishers Inc.

Karin Sevegnani, David M. Howcroft, Ioannis Konstas, and Verena Rieser. 2021. OTTers: One-turn topic transitions for open-domain dialogue. arXiv preprint arXiv:2105.13710.

Anshumali Shrivastava and Ping Li. 2014. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). arXiv preprint arXiv:1405.5869.

Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, and Rui Yan. 2019. Multi-representation fusion network for multi-turn response selection in retrieval-based chatbots. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 267-275.

Tim van Erven and Peter Harremoës. 2014. Rényi divergence and Kullback-Leibler divergence. IEEE Transactions on Information Theory, 60(7):3797-3820.

Taesun Whang, Dongyub Lee, Chanhee Lee, Kisu Yang, Dongsuk Oh, and Heuiseok Lim. 2019. Domain adaptive training BERT for response selection. arXiv preprint arXiv:1908.04812.

Taesun Whang, Dongyub Lee, Dongsuk Oh, Chanhee Lee, Kijong Han, Dong-hun Lee, and Saebyeok Lee. 2020. Do response selection models really know what's next? Utterance manipulation strategies for multi-turn response selection. arXiv preprint arXiv:2009.04703.

Taesun Whang, Dongyub Lee, Dongsuk Oh, Chanhee Lee, Kijong Han, Dong-hun Lee, and Saebyeok Lee. 2021. Do response selection models really know what's next? Utterance manipulation strategies for multi-turn response selection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14041-14049.

Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. arXiv preprint arXiv:2007.00808.

Ruijian Xu, Chongyang Tao, Daxin Jiang, Xueliang Zhao, Dongyan Zhao, and Rui Yan. 2020. Learning an effective context-response matching model with self-supervised tasks for retrieval-based dialogues. arXiv preprint arXiv:2009.06265.

Yi Xu, Hai Zhao, and Zhuosheng Zhang. 2021. Topic-aware multi-turn dialogue modeling. In The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21).

Rui Yan, Yiping Song, and Hua Wu. 2016. Learning to respond with deep neural networks for retrieval-based human-computer conversation system. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 55-64.

Sohee Yang and Minjoon Seo. 2020. Is retriever merely an approximator of reader? arXiv preprint arXiv:2010.10999.

Chunyuan Yuan, Wei Zhou, Mingming Li, Shangwen Lv, Fuqing Zhu, Jizhong Han, and Songlin Hu. 2019. Multi-hop selector network for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 111-120.

Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2019. DialoGPT: Large-scale generative pre-training for conversational response generation. arXiv preprint arXiv:1911.00536.

Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, and Gongshen Liu. 2018. Modeling multi-turn conversation with deep utterance aggregation. arXiv preprint arXiv:1806.09102.

Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu. 2018. Multi-turn response selection for chatbots with deep attention matching network. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1118-1127.

A Dataset Construction Details

To filter boring and dull content and to speed up retrieval, we limit the length of contexts and responses. Each context must contain at least 5 and fewer than 128 words, and each response at least 5 and fewer than 64 words. Limiting the length of responses is especially beneficial, since according to our statistics many short responses such as "Fair enough" and "Thanks :D" can have a very large number (tens of thousands) of different contexts.
[ "method", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "method", "method", "method", "objective", "other", "other", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other" ]
[ "Corpus query systems exist to address the multifarious information needs of any person interested in the content of annotated corpora.", "In this role they play an important part in making those resources usable for a wider audience.", "Over the past decades, several such query systems and languages have emerged, varying greatly in their expressiveness and technical details.", "This paper offers a broad overview of the history of corpora and corpus query tools.", "It focusses strongly on the query side and hints at exciting directions for future development.", "Annotated corpora have always been the backbone for many fields in NLP and other disciplines related to linguistics.", "Whether serving as an invaluable source of empirical evidence for foundational research or doubling as gold-standard training input for fueling the furnaces of our machine learning factories, their importance cannot be overemphasized.", "But especially for the empirically motivated user base, corpora are only ever as good as the means available to explore them.", "And the primary means of exploring linguistically annotated corpora have always been (dedicated) corpus query tools and corpus query languages in their manifold shapes.", "In this paper we intend to give a thorough chronology of the major interplay between corpus progression and query tool evolution, with a strong focus on the latter.", "We start with an overview on relevant aspects of corpora and how they changed over the past ~30 years in Section 2.", "Section 3 elaborates on the observable phases in query tool development.", "In Section 4 we discuss alternative corpus query approaches based on general purpose data(base) management solutions and provide pointers to related work in Section 5.", "Section 6 summarizes some of our observations and with Section 7 we finally hint at our vision for future directions in corpus query system development.", "Though corpus linguistics dates back further, major online catalogs such as those from LDC 1 and ELRA 2 list corpora starting from the early 1990s.", "In the following decades corpus trends have varied along several dimensions, both technical and content-related.", "This section discusses such features and gives examples for their evolution.", "Since this overview is an introduction to digital corpus query systems, we mainly focus on written and annotated corpora.", "With a focus on written corpora, character encoding is a decisive factor when estimating the publication date.", "Starting from plain ASCII (Everts, 2000 3 , Graff and Cieri, 2003) and lan-guage/script specific encodings, such as ISO/IEC 8859 (Armstrong-Warwick et al., 1994; Federico et al., 2000), nowadays many corpora come with a (mostly) language independent UTF-8 encoding (Ion et al. (2012); Prasad et al. (2019) and compare Schafer (2015) with Schafer and Bildhauer (2012)), which is also able to capture symbols relevant for transcription and annotation.", "Similar to character encoding, the preferences regarding the representation format for corpus content changed over time.", "Many corpora established in the 1990s come in an SGML format (Liberman, 1989; Amaryllis, 2001; Graff, 1995).", "In the next decade, XML-based corpora followed (Chiao et al. (2006) and compare Hajic et al. 
(2001) and Pajas 1 Linguistic Data Consortium, https://catalog.", "ldc.upenn.edu/ 2 European Language Resources Association, http:// catalogue.elra.info/ 3 Earlier version published 1997 by ELRA: ISLRN 628-817-117-400-1 and Stepanek (2005)) and since corpora were also made accessible over the web, relational database management systems (RDBMSs) became a valuable backend for corpus storage (Davies, 2005).", "Today we face a multitude of formats ranging from sophisticated and specialized XML encodings to simple tabular formats and often a corpus comes with more than one representation (Petran et al., 2016; Bick, 2018).", "Especially since the first CoNLL shared tasks 4 , their tabular format to encode sequence-based annotations and relations has been majorly developed (Nivre et al., 2016).", "Regarding included languages , multilingual and (partly) parallel corpora appear early (Liberman, 1989; Armstrong-Warwick et al., 1994; Graff and Finch, 1994), however, there was a rise of parallel corpora in the first decade of the current century.", "Prominent examples are Europarl (Koehn, 2005), the CESTA Evaluation Package (Hamon et al., 2006) and the Prague Czech-English Dependency Treebank 1.0 (Cmejrek et al., 2005).", "On the other hand, with the rise of web corpora, language detection became more important to only crawl (or keep) web data for a specific language.", "Corpus size is a less discriminative factor than one might think, since many early corpora came as collections of sub-corpora.", "Armstrong-Warwick et al. (1994) already contains 90 million words and LDC's Gigaword initiative started in 2003 (Graff and Cieri, 2003), while many small corpora for specific topics or containing manual annotations are constantly being created.", "Nevertheless, with recent web corpora, e.g. ENCOW16 5 and iWEB 6 , several billion tokens pose new challenges for the design of both storage and search facilities.", "While for spoken corpora domain selection is often tailored to the research question at hand (cf.", "Talkbank (MacWhinney et al., 2004)), for written corpora (and especially annotated ones) there is a bias towards news and official documents, which was superseded by multi-domain web corpora starting in the late 2000s (e.g. the WaCKy initiative (Baroni et al., 2009) and COW) and, in the follow up, the increasing number of corpora of computer-mediated communication and social media 7 .", "Like 4 https://www.conll.org/previous-tasks 5 CO rpora from the W eb (COW), English sub-corpus, https://corporafromtheweb.org/ 6 https://www.english-corpora.org/ 7 Annual conference on computer-mediated communication and social media corpora started in 2013 https:// sites.google.com/site/cmccorpora/ with the language setting, for web-corpora the challenge is no longer to include more languages or domains, but to identify and/or restrict them to a sensible subset.", "Collections of historical language data have also been available for some time, e.g. 
the Corpus of Middle English Prose and Verse 8 and with the rise of the Digital Humanities many further corpora are created and/or enhanced with linguistic annotations, such as the Drama Corpora Project 9 , where some corpora have been enhanced with lemma information.", "Most corpora come with annotations , the earlier ones mainly with flat and word-based annotations, mostly including part-of-speech, such as the ECI-ELSNET Italian & German tagged sub-corpus 10 .", "Regarding the structural aspect, stand-off syntactic annotations became more feasible with emerging treebanks, while over time the focus changed from phrase-based (Brants et al., 2004) to dependency tree structures (Haji c et al., 2001).", "The current decade has also seen an increase in the richness of annotation layers of morphological, syntactical and semantical description, including highly concurrent annotations belonging to the same description layer, e.g. Ide et al. (2010) or Schweitzer et al. (2018).", "We observed three major phases or generations in the history of corpus query systems, which are roughly aligned to the last three decades.", "The following is meant as a comprehensive but not exhaustive chronology of corpus query systems and approaches.", "Space does not permit we provide in-depth descriptions for every system mentioned but instead refer to Section 5 for pointers to existing work that discusses and compares certain (families of) query systems in detail.", "The history of corpus querying systems has been for the most part tightly connected to the gradual expansion of the targeted corpus resources.", "As such the initial wave of corpus query tools during the 1990s was mostly geared towards text corpora: The COSMAS 11 lineage remains until today 12 8 https://quod.lib.umich.edu/c/cme/ 9 https://dracor.org/ 10 ISLRN 869-857-775-378-7 11 Corpus Search, Management and Analysis System, http://www.ids-mannheim.de/cosmas2/ 12 The initial version COSMAS I has been in continuous service from 1992 till 2003 and COSMAS II ever since 2002 the public query front-end for the large corpus collection hosted at the IDS (Bodmer, 2005), offering keyword in context (KWIC) visualization in a browser-frontend and various query constraints.", "In contrast the Linguistic DataBase program ( LDB ) (Halteren and Heuvel, 1990) features a very expressive tree-based query syntax and also ships with a tree editor.", "In addition it provides an ingenious event-based approach for extracting information from a corpus during search.", "The Corpus Workbench (CWB) architecture (Christ, 1994) with the Corpus Query Processor (CQP) as its core component is maybe the most widely used corpus query system as of today, serving as the backend for many corpus exploration websites.", "Having been under continuous maintenance to keep up with the demands of the new century (Evert and Hardie, 2011), it provides a solid set of simple yet expressive search features, such as regular expressions over tokens and token content, flexible structural boundaries, support for parallel corpora or the ability to enrich a corpus during ingest with external data that can then be used for querying, e.g. 
WordNet (Miller, 1995) categories.", "Emu (Cassidy and Harrington, 1996) was designed for speech corpora with multiple levels of segmentation.", "Primarily a hierarchical speech data management system, it also supports labeland position-based queries for collections of tokens.", "Similarly the MATE Workbench (Mengel, 1999; Mengel et al., 1999; Heid and Mengel, 1999; Isard et al., 2000) also targets combinations of text and speech data in the form of XML annotation files.", "It provides full boolean operations over hierarchical and time-based constraints in a logic-style query language, but no direct support for quantifiers.", "At the dawn of the 21 st century the second and larger wave of query systems emerged.", "Initially focused heavily on treebanks annotated for phrase-based syntax, a later trend shifted more towards supporting dependency syntax annotations, with an overall theme of increasing expressiveness with new approaches to query syntax and constraints.", "TIGERSearch (Konig and Lezius, 2000; Lez-ius, 2002) was among the first with its logic-based query language to target phrase-based treebanks conforming to the TIGER model (Brants et al., 2004).", "It inspired many of the later query approaches, but was quickly surpassed wrt expressiveness due to limited negation or quantification 13 .", "The ICE Corpus Utility Program ( ICECUP ) 14 introduced a completely new direction of development.", "Wallis and Nelson (2000) emphasized the complexity required to transform a two-dimensional tree description into a linear sequence of textual expression and made an argument for a graphical query approach.", "Their fuzzy tree fragments act as visual (under-)specification of the targeted phrase-based tree structures and are then matched against instances in a corpus.", "The appeals of this approach are diverse: It enables example-based searching by allowing the user to start from an existing instance in the corpus, transform it into a query and then relax the constraints on that query to generalize it 15 .", "Not having to learn a formal query language and annotation schemes first, also lowers the barrier to entry for successful querying.", "As a dedicated treebank query tool TGrep2 (Ro-hde, 2001) offers a rich query syntax for phrase-based treebanks.", "Notable features are conjunction, disjunction and negation for relations, over 30 pre-defined basic link types and the ability for users to simplify complex queries by using macros.", "Usually corpus query tools depend on the target data already being annotated.", "Gsearch (Corley et al., 2001) however lets the user query unstructured text data by parsing it on the fly with a chart parser.", "Gsearch queries contain phrase-based constraints with limited boolean operators and the results are emitted in SGML.", "VIQTORYA 16 (Steiner and Kallmeyer, 2002) is another tool to query phrase-based treebanks.", "Its query syntax is very similar to TIGERSearch 17 and queries are translated for the RDBMS backend.", "Outside the domain of monolingual corpora ParaConc (Barlow, 2002) combines typical concordancer functionality such as surface search and 13 The developers decided to forgo universal quantification due to computational cost and tractability (TIGERSearch Help, section 10.3) but also proposed an extension of the language with universal quantification and the implication operator.", "Marek et al. 
(2008) mention a solution based on set operations over multiple queries.", "This allows to express queries which need a universal quantifier if expressed in a single query.", "Unfortunately the referenced term paper is not available online.", "14 Designed for ICE-GB, the British component of the International Corpus of English (Nelson et al., 2002).", "KWIC result view with regex and tag search and applies it to parallel corpora as targets.", "The CorpusSearch (Taylor, 2003; Randall, 2008) command line tool for phrase-based syntax expects tree search configurations provided via query files with a boolean query language over a variety of tree predicates and regular expressions.", "Limitations on disjunction and negation and lack of quantification 18 make it slightly less expressive.", "With full first-order logic the Finite Structure Query ( FSQ ) tool by Kepser (2003) offers access to the complete TIGER model, including arbitrary secondary edges and support for regular expressions in a graphical user interface (GUI).", "It is however limited to rather small corpora due to poor scalability of the query evaluation process.", "19 To access multi-modal and highly cross-annotated data in the NITE Object Model Library (Carletta et al., 2003), Evert and Voormann (2002) specified the NITE Query Language (NiteQL) based on MATE.", "Information from various segmentation levels can be extracted and combined in a logic-style language, including limited quantification.", "To honor the nature of multi-modal data they also propose a level of fuzziness for time operators with a configurable fuzziness interval .", "Based on the MdF (Monads-dot-Features) Database and its query language QL by Doedens (1994), Emdros (Petersen, 2004) implements a text database for annotated texts.", "Its query syntax uses bracket nesting to express hierarchical relations and it surpasses TIGERSearch in several aspects of expressiveness, e.g. existential negation 20 .", "While previously mentioned query systems were either freely available or bound to the licensing model of associated corpus resources (e.g. ICE-CUP), the popular Sketch Engine (Kilgarriff et al., 2004) commercialized 21 corpus management and exploration in a web-based platform (Kilgarriff et al., 2014).", "Extending the CQP, its own query language CQL offers efficient access to corpora available on the platform (Jakubcek et al., 2010).", "Around the same time ANNIS was published 18 The way negation on arguments to search-function calls is handled allows to express certain quantified relations though.", "19 The author of FSQ discusses those limitations in (Kepser, 2004) and proposes a solution based on monadic second-order logic which was later implemented in MonaSearch.", "20 See Petersen (2005) for a brief comparison of the two systems including benchmarks on example queries.", "(Dipper and Gotze, 2005) and started a successful ecosystem with the corpus metamodel SALT, the converter framework PEPPER and ANNIS itself as search module with its query language AQL.", "AQL is a very expressive query language on top of the graph-based model of SALT and an extension of the TIGERSearch syntax.", "Notable improvements over TIGERSearch are the access to concurrent annotations for the same layers, a rich set of segment relations to choose from and the generalization of directed relations in a query to be applicable for any type of edge in the corpus graph (e.g. 
syntax, coreference or alignments in parallel corpora).", "Queries in ANNIS can be constructed textually or graphically in a browser environment.", "It has been under continuous development for about 15 years now (Zeldes et al., 2009; Krause and Zeldes, 2014), resulting in the richest collection of result visualizations available in any corpus query system.", "The Linguist's Search Engine (LSE) (Resnik and Elkiss, 2005) applies the query-by-example concept in a browser-based setting: A user provides a natural language example containing the desired phenomenon and receives a parse tree usable for querying.", "Relaxation or removal of constraints from this tree then yields increasingly generalized instances from built-in or custom collections 22 .", "The emergence of XPath 23 as a way of querying the tree-structure of various XML-based corpora offered new directions for corpus query languages.", "Bird et al. (2006) introduced LPath as an extension of XPath to overcome its limitations regarding the lack of expressible horizontal relations, a feature crucial for querying linguistic data.", "A later extension turned it into a first-order complete variant named LPath + (Lai and Bird, 2005).", "Faulstich et al. (2006) also used an extension of XPath called DDDQuery to query complex annotation graphs of historical texts 24 .", "While using a RDBMS as backend, they do not directly translate queries into SQL.", "Instead user queries are first transformed into a first-order logic intermediate representation which in turn is translated into SQL.", "The Prague Dependency Treebank (PDT) (Hajic et al., 2001; Hajic, 2006) is a richly annotated corpus.", "Its unique characteristic is a tectogram-22 The Getting Started Guide ( http://hdl.handle. net/1903/1324 ) for LSE mentions TGrep2 as the search component.", "In Resnik and Elkiss (2005) this information is missing and the screenshots do not show textual TGrep queries anymore, so the actual query evaluation backend is unknown.", "23 https://www.w3.org/TR/xpath 24 http://www.deutschdiachrondigital.de/ matical layer which also includes annotations for coreference, deep word order, topic and focus.", "To provide users with adequate tools for access to this complexity, NetGraph (Ondruska et al., 2002; Mrovsky, 2006) allows creation of tree queries for various layers both textually and graphically.", "25 Stockholm TreeAligner (Lundborg et al., 2007; Marek et al., 2008) continues the trend of extending the TIGERSearch language and applies it to parallel corpora.", "Its main improvement is the (re)introduction and implementation of universal quantification to overcome this central weakness.", "Classic query tools for text corpora such as CQP lack the ability to efficiently deal 26 with common features of annotations for morphologically rich languages, such as positional tagsets or non-disambiguated annotation instances.", "POLIQARP 27 (Przepiorkowski et al., 2004; Janus and Przepiorkowski, 2007) is an indexer and query tool loosely based on the CQP approach with a client-server architecture and a variety of available client implementations.", "Initially targeted towards rich word-level annotations, such as in the IPI PAN Corpus (Przepiorkowski, 2004), it was later extended to also cover syntactic-semantic treebanks.", "What's wrong with my NLP?", "by (Riedel, 2008) is primarily meant as a visualization tool with the ability to highlight differences between two concurrent dependency annotations (e.g. 
a gold standard and automatic predictions) with search options based on surface forms, tags and as a neat feature also including aforementioned diffs.", "Maryns and Kepser (2009a) extended the expressiveness of FSQ to monadic second-order logic in MonaSearch .", "It features a GUI for viewing text-only flat results and defining queries of enormous expressiveness.", "However, due to the limitations of the underlying MONA framework (requiring binary tree structures), the system can only target collections of proper trees.", "PML-TQ 28 (Pajas and Stepanek, 2009; Stepanek and Pajas, 2010) is effectively the successor of NetGraph, being designed to handle 25 Besides NetGraph the tree visualizer and editor software TrEd (Pajas, 2009) also can be used to search in PDT and other tree structures via user macros defined in Perl.", "It does however not offer a query language for non-programmers.", "26 This does not imply their expressiveness being insufficient for this task, but rather that such queries can become quite bloated and their construction cumbersome for users.", "27 POL yinterpretation I ndexing Q uery A nd R etrieval P rocessor 28 P rague M arkup L anguage T ree Q uery the rich multi-level annotations in the PDT.", "Its graphical client 29 is directly integrated into the tree editor TrEd (Pajas, 2009) to support graphical query construction.", "Queries in PML-TQ are expressed as a mandatory selection part in bracket-syntax and an optional list of instructions to generate result reports.", "The latter of those two parts was groundbreaking in that it allows for an unprecedented freedom in selectively extracting information from any successful match during a search and creating various aggregations or statistics from it.", "Besides excellent result handling its query language is also quite powerful, including quantification and negation of sub-queries.", "During the last decade the speed at which new query tools have been developed or published slowed down considerably.", "At the same time continued growth in size of corpus resources rendered some of the earlier approaches inapplicable (cf.", "(Kepser, 2004) for a discussion on the limitations of FSQ), calling for innovative alternatives.", "The three most common themes of this era were", "(i) scalability and adaptability of search backends to keep up with the explosive growth of corpora,", "(ii) reducing the barrier to entry for a", "wide(r) range of potential users and", "(iii) working towards unification or standardization of query languages.", "GrETEL 30 (Augustinus et al., 2012) is another implementation of the example-based search concept for the LASSY corpus (van Noord et al., 2013).", "Users provide sentences or example fragments and mark the areas of interest.", "Examples are then parsed, the subtrees for the specified", "part(s) of the input extracted and subsequently translated into XPath queries to run against the corpus in XML format.", "Further query options include the ability to specify whether or not pos, lemma or surface form of tokens in the subtree should be considered for the query.", "Since the user is effectively shielded from the tree representation and formal query formulation, GrETEL requires neither knowledge of an actual query language nor about the annotation scheme or underlying theories of the corpus.", "Fangorn (Ghodke and Bird, 2012) addresses the challenge of querying treebanks too large to be loaded into memory, a scenario prohibitive for 29 The modular architecture supports multiple scenarios, such as a 
client-server setup with an RDBMS backend or an integrated index-less query evaluator in Perl for local data.", "30 Gr eedy E xtraction of T rees for E mpirical L inguistics query tools with custom evaluation engines.", "They use Apache LUCENE 31 in a client-server setup to manage large numbers of phrase structure trees.", "Its query language follows the LPath scheme but lacks regular expressions support on label content.", "Unlike the majority of other systems in recent years, we developed ICARUS 32 (Gartner et al., 2013) as a standalone desktop application for visualization and example-based search 33 with a custom query evaluation system and no indexing or dependency on another database technology.", "Initially designed for querying dependency treebanks it underwent multiple extensions to make it compatible with annotations for coreference (Gartner et al., 2014) and prosody 34 (Gartner et al., 2015) and also to incorporate automatic error mining as a means of exploration (Thiele et al., 2014).", "Its bracket-style query language is similar to PML-TQ but lacks quantifiers and a dedicated section for result preparation instructions.", "While queries can be defined both textually or graphically, the preferred way is to use the graphical query editor that also provides contextual help for getting started easily.", "CLARIN Federated Content Search 35 ( CLARIN-FCS ) is a successful example of unifying query access to multiple distributed corpus resources hosted by different parties and with diverse native query frontends.", "Its query language FCS-QL is heavily based on POLIQARP but also only meant to cover a small intersection of the expressiveness of common corpus query tools.", "On the level of standardization CQLF 36 (Banski et al., 2016) provides an initiative that aims at providing means for comparability and interoperability of corpus query languages.", "In its first phase 37 CQLF-1 defines classes and features for the description of query languages for single-stream data.", "A unified serialization format for CQLF-1 is available with KoralQuery (Bingel and Diewald, 2015), a JSON-LD based and theory-neutral cor-31 https://lucene.apache.org/ 32 I nteractive Platform for C orpus A nalysis and R esearch, U niversity of S tuttgart 33 An integrated interface for plugging in dependency parsers allows users to generate parses for example sentences that can then be converted into queries and relaxed iteratively.", "34 With various similarity measures usable for expressing query constraints based on the PaIntE model by M ohler (2001) 35 https://www.clarin.eu/content/ content-search 36 C orpus Q uery L ingua F ranca.", "Part of ISO TC37 SC4 Working Group 6 (ISO 24623-1:2018).", "37 CQLF is an ongoing long-term effort, with CQLF-2 currently being worked on at the stage of a committee draft.", "pus query protocol.", "It serves as the internal query representation 38 of KorAP 39 (Banski et al., 2014; Diewald et al., 2016), the designated successor of COSMAS II.", "While CLARIN-FCS multiplexes a query defined in a common (limited wrt expressiveness) query language to multiple query processors, KorAP lets the user choose up-front among several query languages 40 that all can be processed by the system in a microservices architecture 41 .", "Similar to Fangorn, SETS 42 (Luotolahti et al., 2015) is geared towards very large treebanks, this time targeting dependency syntax with a query language inspired by TRegex 43 .", "It is browser-based with a RDBMS backend and uses an elaborate query evaluation 
process: SETS generates and compiles optimized code for matching tokens for each query and only retrieves the minimal token sets from the database needed for evaluating a query.", "Multilingwis 44 (Clematide et al., 2016) provides exploration in multiparallel corpora (Gra en et al., 2016).", "Focused on result presentation and reducing the required expert knowledge, it simplifies the process of finding translation variants.", "Other notable events in this time period include the modernization of CQP for the new millen-nium (Evert and Hardie, 2011) and the introduction of graphANNIS (Krause et al., 2016), a graph database backend for ANNIS3 as an alternative to the former RDBMS-based relANNIS.", "Many of the systems we presented in Section 3 use various forms of database technology as their storage or evaluation backend.", "Typically every such database or information management system already ships with its dedicated query language, such as SQL for RDBMSs, SPARQL for the RDF format, XPath and XQuery for XML documents, CYPHER for Neo4j and other graph-based databases or Apache LUCENE with its own query dialect for accessing the text database.", "38 The high level of abstraction it implements and the verbosity required to express simple queries combined with JSON syntax results in limited human readability.", "39 Kor pus a nalyse p lattform der n achsten Generation (Corpus analysis platform of the next generation) 40 At the time of writing it supports the following query languages: Poliqarp, FCS-QL, AQL, CQP 1.2, COSMAS II 41 KorAP builds on a variety of (storage) technologies, in-luding several RDBMS variants, LUCENE and also the graph database Neo4j ( http://neo4j.com/ ).", "42 S calable and E fficient T ree S earch 43 A Tree regular expression language in TGrep2 style 44 Multiling ual W ord I nformation S ystem This does of course prompt the question on the necessity of developing dedicated corpus query languages when more often than not the actual query evaluation is just offloaded to an existing database technology.", "Already Jarke and Vassiliou (1985) mentioned a plethora of (technical) factors to be considered when deciding on a (database) query language.", "Mueller (2010) on the other hand takes the perspective of scholarly users, providing arguments especially targeting the aspects of usability from a humanistic point of view, describing the handling of search results as Achilles heel of corpus query tools.", "Having previously examined those factors in (Gartner and Kuhn, 2018), we also agree on the continuing necessity of dedicated corpus query systems and query languages to bridge the gap between formal/technical expressiveness and the usability factors decisive for corpus users.", "Especially future directions as the ones we propose in Section 7 demand architectures that are more complex than the mere translations of data and queries.", "There have however also been approaches or use case analyses to completely store and query linguistic corpora with OWL (Burchardt et al., 2008), XQuery (Cassidy, 2002) or a via RDBMS (e.g. 
content of the DIRNDL corpus (Eckart et al., 2012) in its entirety has for a long time only been available through direct SQL queries), but historically speaking those cases generally represent a minority.", "A lot of work has been invested already into laying the theoretical foundations for various aspects of and approaches to corpus querying, as well as into evaluating and comparing existing query systems.", "We distinguish between three types of contributions, namely", "(i) requirement analyses,", "(ii) evaluations of individual query languages or approaches and", "(iii) actual performance comparisons between multiple systems (feature-based or benchmarks).", "Several contributions listing requirements for corpus query systems have been previously mentioned in Section 4.", "In addition, Mrovsky (2008) provides a list of required language features for querying PDT and Lai and Bird (2004) do so for treebanks in general, specifically related to navigation, closures over relations and going beyond ordered trees in order to query more complex structures.", "This list of functional requirements is later extended on in Lai and Bird (2010) with features such as temporal organization and non-navigational requirements.", "While not exclusive to corpus query systems, technical aspects related to feasibility (e.g. scalability or computational complexity) or long-term maintainability (e.g. interoperability and extensibility) are also frequently emphasized by Lai and Bird (2004), Kepser (2003) and others.", "Besides the usability-focused scholarly position of Mueller (2010) around aspects of answer time, maintenance cost and the management of search results, we previously discussed additional non-technical requirements related to the general readability or postprocessing capabilities of a query language and its learnability in Gartner and Kuhn (2018), the latter being a crucial factor for achieving wide-spread use in humanistic fields.", "Formal evaluations of query languages are somewhat rare, e.g. (Lai and Bird, 2010) for LPath and LPath + , (Kepser, 2004) for MonaSearch or in part (Kepser, 2003) for FSQ.", "Instead the vast majority of evaluations use example queries of varying complexity to compare different query languages or systems.", "Notable early work on query complexity was done by Lai and Bird (2004), comparing several query languages 45 based on a set of linguistic information needs of increasing complexity.", "The example queries they provide have proven to be a good baseline for comparing the capabilities of query languages and subsequently found their way into many later tool evaluations, such as (Petersen, 2006a) for Emdros or in Clematide (2015) when highlighting features of particular query languages.", "Yet another evaluation approach was used by Frick et al. 
(2012) when they applied the classes defined in CQLF-1 as evaluation criteria in the comparison of COSMAS II, POLIQARP and AQL.", "Clematide (2015) provides a very thorough re-flection and categorization of the various families of corpus query languages: text corpus, treebank, path-based 46 and logic-based.", "A point he makes that resonates well with other surveys is the importance of striking the right balance between usability and technical aspects in any practical situation.", "In some cases actual performance benchmarks have been published, such as testing Emdros with different RDBMS backends (Petersen, 2006b), 45 TGrep2, TIGERSearch, Emu, CorpusSearch, NiteQL, LPath 46 We argue for a more differentiated view on path-based query languages: While Clematide (2015) considers PML-TQ to be part of this family, we propose to move it together with ICARUS into a tree-based category of query languages, as their use of bracketed tree-expressions to describe structural relations represent a slightly different approach.", "comparisons between TIGERSearch and Emdros in Petersen (2005), MonaSearch and TIGERSearch in (Maryns and Kepser, 2009b) and Luotolahti et al. (2015) benchmarking SETS against ICARUS.", "However, due to the rapid change in technologies and the architectural differences between query systems, it tends to be very difficult to provide accurate and meaningful performance comparisons and readers are advised to carefully examine whether the reported use cases are applicable to their own.", "In this section we intend to condense some of our observations after analyzing a large number of query systems.", "We focus on the following two aspects suitable for pointing out challenges (stem-ming from past shortcomings) and motivating directions in development of future corpus query systems, protocols or architectures.", "The different generations of corpus query systems listed in Section 3 are the results of design processes with generally very distinct goals.", "The first generation in Section 3.1 can be seen as the initial step to have some means of querying beyond the search functions of grep or any text editor.", "Subsequently, the second time period described in Section 3.2 represents a general exploration phase : Approaches in almost every direction were implemented, either as proof of concept for new query features or to address very specific linguistic theories or phenomena.", "Many of those implementations however were not scalable to the degree demanded by the rapid growth 47 of corpora.", "As such the general trend in Section 3.3 was to overcome those limitations and provide scalable systems with also increased usability.", "At the same time the overall expressiveness of query languages provided took a step backwards.", "Especially concepts like closures over relations, (universal) quantification or existential negation often got rationalized in favor of performance in younger systems.", "Our vision of a hybrid architecture sketched in Section 7 is intended to overcome those limitations by utilizing and combining the different strengths of systems involved (such as the robust performance of indexing systems and the expressiveness and flexibility of custom query evaluation engines).", "With the enormous amounts of resources that have been invested into creating this zoo of corpus query languages and systems, it is surprising how little reuse and unification has occurred over the years.", "We attribute this trend to a variety of frequently recurring factors, particularly the 
following: Due to the lack of standards regarding the categorization of expressiveness of query languages it has always been extremely difficult to determine whether an existing system could meat all the requirements a new project, user scenario or corpus resource posed, leading to redundancy.", "48 The technological heterogeneity 49 involved also represented a major issue that only slowly is being overcome by the emergence of standards for corpus storage and interchange formats or the shift to more modular architectures such as microservices or plugin-engines, making it much easier to adapt a system to new requirements.", "50 Especially early query systems often emerged as an interface for a very particular corpus, a specific format or to support the phenomena a certain project was interested in.", "As such, the limited resources typically available for short-term funded projects rarely allowed for extending previous monolithically designed work.", "Newly implemented (and often isolated) solutions focusing on a narrow selection of very specific query features or annotations were a common result.", "With several dozens of systems contributing their individual variations, the pool of available corpus query tools and languages has become quite large.", "Navigating this ocean in order to find the right tool for the job and then learn to use it can already be as much effort as manually investigating the data at hand.", "Fortunately the CQLF standardization initiative aims at providing developers with the means of locating their tools on a map of query features, so that prospective users may find them without an odyssey.", "While this effort is still in an early stage, we are looking forward to having catalogs available 48 An aspect that CQLF is now addressing, removing the need of essentially reverse engineering a tool or studying its source code, as time constraints together with the lack of standardization often went along with poor documentation.", "49 Ranging from platform/language lock-ins to for-mat/storage dependencies, often in a monolithic composition.", "50 Such as new query features, formats, storage/database solutions, standalone apps or various client-server architectures.", "in the not too distant future, allowing us to browse for query languages based on our individual information needs.", "However, many questions regarding the future of corpus querying still remain, two of which we consider of particular importance and will discuss in the following sections.", "Today we have a cluttered buffet of corpus query languages to pick from depending on our information needs.", "Interestingly they all share the pros and cons of being designed as formal languages with the goal of taciturnity, meaning that for the untrained eye they usually represent just a weird salad of letters and special characters .", "51 This is particularly noteworthy, as all modern corpus query tools feature a rich GUI and could easily employ a more verbose query language while at the same time shield users from the time overhead when creating queries by clever auto-completion or recommendation functions.", "Likewise, today's corpus queries are not self-contained to the level of for instance SQL queries, which are composed of dedicated parts for scope selection, actual constraints and result preparation.", "Usually only the constraint part is present in corpus query languages, with only a few exceptions 52 , leaving additional configurations (result size limit, search direction, case sensitivity) exclusively to 
external components, such as the GUI, hampering the reproducibility of search results severely.", "A fully self-contained and human-readable query protocol that can embed any existing query language and augment it with (boilerplate) statements to bind the query content to actual corpora and annotation layers, provide information about the query dialect and its version and store config-uration and result preparation instructions, would go a long way towards unification and potential interoperability of corpus query systems.", "The typical architecture of corpus query systems today is a monolithic one and contains from bottom to top", "(i) a backend storage or custom data model, 51 Kaufmann and Bernstein (2010) investigated the usability of natural language queries for interfaces to the semantic web with positive results.", "It would be interesting to see similar studies on corpus query interfaces.", "52 cf.", "PML-TQ for exemplary post-processing instructions, allowing to treat results as tabular data and to perform various transformation and aggregation operations on it, including textual reports.", "(ii) a custom query evaluator or query interface to said backend and", "(iii) a query parser or translator to process the raw user query.", "Choices in technology or algorithms for", "(i) through", "(iii) definitively dictate the basic nature and structure of the information that can be queried.", "They usually make it very difficult, if not impossible, to implement changes or extensions retrospectively or from the outside.", "A strong dependency on indexing to access large corpora also presupposes a priori knowledge of what information is meant to be searchable, frequently confining corpus query tools to the role of being mere finding aids within a research process.", "We would like to see them become true enablers instead, allowing queries to go far beyond of what a corpus has to offer with its bare annotations alone and for example include the following extensions to create more informed search solutions: Use knowledge bases and similar external resources to allow more generalized queries, e.g. find verbal constructions containing a preposition in combination with some sort of furniture .", "Add (semantic) similarity measures (e.g. 
word embeddings) and other approaches for increased fuzziness to improve example-based search.", "Offer true scripting support for users to extent or customize the ability provided by a system.", "While this might affect performance in unpredictable and detrimental ways, raw (distributed) computing power and clever use of pre-filtering can offset the impacts on performance.", "Naturally all of these proposed features (and especially the last one) require a drastically different and quite heterogeneous architecture.", "Taking the microservices approach of KorAP as an example, it is easy to imagine a hierarchically organized architecture of query translation and evaluation services working together (by partially answering queries, filtering the results or otherwise post-process them) to provide the optimal combination of freedom in expressiveness and performance guarantees.", "Space does not permit we provide a detailed description of such a hybrid approach.", "Instead we refer to (Gartner, to appear) for an overview of our ongoing efforts to design and implement a hybrid corpus query architecture and associated query protocol.", "Twenty years ago this might have seemed utterly unrealistic, but advances in information management systems and distributed computing certainly put this vision within technical reach." ]
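The survey above argues for a fully self-contained, human-readable query protocol that embeds an existing query language and adds binding, dialect, configuration, and result-preparation information. Below is a minimal sketch of what such an envelope could look like, written as a Python literal; every field name and the embedded query are illustrative assumptions, not any published standard (the idea is only loosely inspired by the KoralQuery discussion above).

```python
# Hypothetical self-contained query envelope; all field names below are
# assumptions for illustration, not part of any existing specification.
query_envelope = {
    "dialect": {"name": "Poliqarp", "version": "1.0"},  # embedded language and its version
    "query": '[pos="VERB"] [lemma="with"]',             # the raw query, embedded untouched
    "binding": {                                        # ties the query to concrete resources
        "corpus": "example-treebank",
        "layers": ["pos", "lemma", "dependency"],
    },
    "config": {                                         # settings otherwise hidden in GUI state
        "case_sensitive": False,
        "max_results": 1000,
    },
    "result": {"format": "kwic", "context_tokens": 5},  # result preparation instructions
}
```

Keeping these settings inside the query itself, rather than in external GUI state, is exactly what would make search results reproducible across tools, as argued above.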
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain" ]
[ "Structured sentiment analysis attempts to extract full opinion tuples from a text, but over time this task has been subdivided into smaller and smaller sub-tasks, e.g. , target extraction or targeted polarity classification.", "We argue that this division has become counterproductive and propose a new unified framework to remedy the situation.", "We cast the structured sentiment problem as dependency graph parsing, where the nodes are spans of sentiment holders, targets and expressions, and the arcs are the relations between them.", "We perform experiments on five datasets in four languages (English, Norwegian, Basque, and Catalan) and show that this approach leads to strong improvements over state-of-the-art baselines.", "Our analysis shows that refining the sentiment graphs with syntactic dependency information further improves results.", "Structured 1 sentiment analysis , i.e. , the task of predicting a structured sentiment graph like the ones in Figure 1, can be theoretically cast as an information extraction problem in which one attempts to find all of the opinion tuples O = O i , . . . , O n in a text.", "Each opinion O i is a tuple ( h, t, e, p ) where h is a holder who expresses a polarity p towards a target t through a sentiment expression e , implicitly defining pairwise relationships between elements of the same tuple.", "Liu (2012) argues that all of these elements 2 are essential to fully resolve the sentiment analysis problem.", "1 We use the term structured sentiment' distinctly from Al-mars et al. (2017), who use it to refer to the latent hierarchical structure of sentiment aspects.", "We instead use structured' to refer to predicting sentiment graphs as a structured prediction task, as opposed to the many text classification task that are found in sentiment analysis.", "Liu (2012)'s definition replaces sentiment expression with the time when the opinion was expressed.", "However, most research on sentiment analysis focuses either on a variety of sub-tasks, which avoids performing the full task, or on simplified and idealized tasks, e.g. 
, sentence-level binary polarity classification.", "We argue that the division of structured sentiment into these sub-tasks has become counterproductive, as reported experiments are often not sensitive to whether a given addition to the pipeline improves the overall resolution of sentiment, or do not take into account the inter-dependencies of the various sub-tasks.", "As such, we propose a unified approach to structured sentiment which jointly predicts all elements of an opinion tuple and their relations.", "Moreover, we cast sentiment analysis as a dependency graph parsing problem , where the sentiment expression is the root node, and the other elements have arcs which model the relationships between them.", "This methodology also enables us to take advantage of recent improvements in semantic dependency parsing (Dozat and Manning, 2018; Oepen et al., 2020; Kurtz et al., 2020) to efficiently learn a sentiment graph parser.", "This perspective also allows us to unify a number of approaches, including targeted, and opinion tuple mining.", "We aim to answer RQ1: whether graph-based approaches to structured sentiment outperform state-of-the-art sequence labeling approaches, and RQ2: how to best encode structured sentiment as parsing graphs.", "We perform experiments on five standard datasets in four languages (English, Norwegian, Basque, Catalan) and show that graph-based approaches outperform state-of-the-art baselines on all datasets on several standard metrics, as well as our proposed novel (unlabeled and labeled) sentiment graph metrics.", "We further propose methods to inject linguistic structure into the sentiment graphs using syntactic dependencies.", "Our main contributions are therefore 1) proposing a holistic approach to structured sentiment through Some others give the new UMUC 5 stars don't believe them .", "sentiment graph parsing, 2) introducing new evaluation metrics for measuring model performance, and 3) extensive experimental results that outperform state-of-the-art baselines.", "Finally, we release the code and datasets 3 to enable future work on this problem.", "Structured sentiment analysis can be broken down into five sub-tasks:", "i) sentiment expression extraction,", "ii) sentiment target extraction,", "iii) sentiment holder extraction,", "iv) defining the relationship between these elements, and", "v) assigning polarity.", "Previous work on information extraction has used pipeline methods which first extract the holders, targets, and expressions (tasks i iii ) and subsequently predict their relations (task iv ), mostly on the MPQA dataset (Wiebe et al., 2005).", "CRFs and a number of external resources (sentiment lexicons, dependency parsers, named-entity taggers) (Choi et al., 2006; Yang and Cardie, 2012) are strong baselines.", "Given the small size of the training data and the complicated task, these techniques often still outperform neural models, such as BiLSTMs (Kati-yar and Cardie, 2016).", "Transition-based end-to-end approaches have shown some potential (Zhang et al., 2019).", "However, all of this work ignores the polarity classification subtask.", "Targeted sentiment analysis only concentrates on extracting sentiment targets (task ii ) and classifying the polarity directed towards them (task iv ) (Jiang et al., 2011; Mitchell et al., 2013).", "Recent shared tasks on Aspect-Based Sentiment Analysis (ABSA) (Pontiki et al., 2014, 2015, 2016) also include target extraction and polarity classification subtasks.", "Joint approaches perform on par with pipeline 
methods (Li et al., 2019a) and multitask models can perform even better (He et al., 2019).", "Finally, pretrained language models (Devlin et al., 3 Code and datasets available at https://github. com/jerbarnes/sentiment_graphs . 2019) can also lead to improvements on the ABSA data (Li et al., 2019b).", "End2End sentiment analysis is a recently proposed subtask which combines targeted sentiment (tasks ii and v ) and sentiment expression extraction (task i ), without requiring the resolution of relationships between targets and expressions.", "Wang et al. (2016) augment the ABSA datasets with sentiment expressions, but provide no details on the annotation process or any inter-annotator agreement.", "He et al. (2019) make use of this data and propose a multi-layer CNN ( IMN ) to create hidden representations h which are then fed to a target and opinion extraction module (AE), which is also a multi-layer CNN.", "This module predicts y ae , a sequence of BIO tags 4 that predict the presence or absence of targets and expressions.", "After jointly predicting the targets and expressions, a second multi-layer CNN with a final self-attention network is used to classify the polarity, again as sequence labeling task (AS).", "This second module combines the information from h and y ae by incorporating the predicted probability of a token to be a target in the formulation of self-attention.", "Finally, an iterative message-passing algorithm updates h using the predictions from all the modules at the previous timestep.", "Chen and Qian (2020) instead propose Relation-Aware Collaborative Learning ( RACL ).", "This model creates task specific representations by first embedding a sentence, passing through a shared feed-forward network and finally a task-specific CNN.", "This approach then models interactions between each pair of sub-tasks (target extraction, expression extraction, sentiment classification) by creating pairwise weighted attention representations.", "These are then concatenated and used to create the task-specific predictions.", "The authors finally stack several RACL layers, using the output from the previous layer as input for the next.", "Both models perform well on the augmented SemEval data, but it is unlikely that these annotations are adequate for full structured sentiment, as Wang et al. (2016) only provide expression annotations for sentences that have targets, generally only include sentiment-bearing words (not phrases), and do not specify the relationship between target and expression.", "Finally, the recently proposed aspect sentiment triplet extraction (Peng et al., 2019; ? 
) attempts to extract targets, expressions and their polarity.", "However, the datasets used are unlikely to be adequate, as they augment available targeted datasets, but do not report annotation guidelines, procedure, or inter-annotator agreement.", "Graph parsing: Syntactic dependency graphs are regularly used in applications, supplying them with necessary grammatical information (Mintz et al., 2009; Cui et al., 2005; Bjorne et al., 2009; Johansson and Moschitti, 2012; Lapponi et al., 2012).", "The dependency graph structures used in these systems are predominantly restricted to trees.", "While trees are sufficient to encode syntactic dependencies, they are not expressive enough to handle meaning representations , that require nodes to have multiple incoming arcs, or having no incoming arcs at all (Kuhlmann and Oepen, 2016).", "While much of the early research on parsing these new structures (Oepen et al., 2014, 2015) focused on specialized decoding algorithms, Dozat and Manning (2018) presented a neural dependency parser that essentially relies only on its neural network structure to predict any type of dependency graph without restrictions to certain structures.", "Using the parser's ability to learn arbitrary dependency graphs, Kurtz et al. (2020) phrased the task of negation resolution (Morante and Blanco, 2012; Morante and Daele-mans, 2012) as a graph parsing task.", "This transformed the otherwise flat representations to dependency structures that directly encode the often overlapping relations between the building blocks of multiple negation instances at the same time.", "In a simpler fashion, Yu et al. (2020) exploit the parser of Dozat and Manning (2018) to predict spans of named entities.", "We here focus on datasets that annotate the full task of structured sentiment as described initially.", "We perform experiments on five structured sentiment datasets in four languages, the statistics of which are shown in Table", "1. The largest available structured sentiment dataset is the NoReC Fine dataset (vrelid et al., 2020), a multi-domain dataset of professional reviews in Norwegian, annotated for structured sentiment.", "MultiB EU and MultiB CA (Barnes et al., 2018) are hotel reviews in Basque and Catalan, respectively.", "MPQA (Wiebe et al., 2005) annotates news wire text in English.", "Finally, DS Unis (Toprak et al., 2010) annotate English reviews of online universities and e-commerce.", "In our experiments, we use only the university reviews, as the e-commerce reviews have a large number of polar targets', i.e. , targets with a polarity, but no accompanying sentiment expression.", "While all the datasets annotate holders, targets, and expressions, the frequency and distribution of these vary.", "Regarding holders, MPQA has the most (2,054) and DS Unis has the fewest (94), whereas NoReC Fine has the largest proportion of targets (8,923) and expressions (11,115).", "The average length of holders (2.6 tokens) and targets (6.1 tokens) in MPQA is also considerably higher than the others.", "It is also worth pointing out that MPQA and DS Unis additionally include neutral polarity.", "In the case of MPQA the neutral class refers to verbs which are subjective but do not convey polarity, e.g. , say', opt for'.", "In DS Unis , however, the neutral label tends to indicate expressions that could entail mixed polarity or are polar under the right conditions, e.g. 
, the classes were not easy' is considered neutral, as it is possible for difficult classes to be desirable at a university.", "MultiB EU , and MultiB CA also have labels for strong positive and strong negative, which we map to positive and negative, respectively.", "Finally, NoReC Fine includes intensity annotations (strong, normal, slight), which we disregard for the purposes of these experiments.", "This section describes how we define and encode sentiment graphs, detail the neural dependency graph models, as well as two state-of-the-art baselines for end-to-end sentiment analysis (target and expression extraction, plus polarity classification).", "Structured sentiment graphs as in Figure 1 are directed graphs, that are made up of a set of labeled nodes and a set of unlabeled edges connecting pairs of nodes.", "Nodes in the structured sentiment graphs sentences holders targets expressions polarity # avg.", "can span over multiple tokens and may have multiple incoming edges.", "The resulting graphs can have multiple entry points ( roots ), are not necessarily connected, and not every token is a node in the graph.", "The sentence's sentiment expressions correspond to the roots of the graphs, connecting explicitly to their respective holders and targets.", "In order to apply the algorithm of Dozat and Manning (2018), we simplify these structures into bi-lexical dependency graphs visualized in Figure", "2. Here, nodes correspond one-to-one to the tokens of the sequence and follow the same linear order.", "The edges are drawn as arcs in the half-plane above the sentence, connecting heads to dependents .", "Similarly to the source structures, the graphs can have multiple roots and nodes can have multiple or no incoming arcs.", "For some rare instances of structured sentiment graphs, the reduction to dependency graphs is lossy, as they do not allow multiple arcs to share the same head and dependent.", "This results in a slight mismatch of the learned and aimed-for representations.", "The choice of how to encode the sentiment graphs as parsing graphs opens for several alternate representations depending on the choice of head/dependent status of individual tokens in the target/holder/expression spans of the sentiment graph.", "We here propose two simple parsing graph representations: head-first and head-final, which Metric Name Level Strictness + / Holder F 1 Token-level Partial No Target F 1 Token-level Partial No Exp.", "are shown in Figure", "2. For head-first , we set the first token of the sentiment expression as a root node, and similarly set the first token in each holder and token span as the head of the span with all other tokens within that span as dependents.", "The labels simply denote the type of relation (target/holder) and for sentiment expressions, additionally encode the polarity.", "Head-final is similar, but instead sets the final token of spans as the heads, and the final token of the sentiment expression as the root node.", "The neural graph parsing model used in this work is a reimplementation of the neural parser by Dozat and Manning (2018) which was used by Kurtz et al. (2020) for negation resolution.", "The parser learns to score each possible arc to then finally predict the output structure simply as a collection of all positively scored arcs.", "The base of the network structure is a bidirectional LSTM (BiLSTM), that processes the input sentence both from left-to-right and right-to-left, to create contextualized representations c 1 , . . . , c n = BiLSTM ( w 1 , . . . 
, w n ) where w i is the concatenation of a word embedding, POS tag embedding, lemma embedding, and character embedding created by a character-based LSTM for the i th token.", "In our experiments, we further augment the token representations with pretrained contextualized embeddings from multilingual BERT (Xu et al., 2019).", "We use multilingual BERT as several languages did not have available monolingual BERT models at the time of the experiments (Catalan, Norwegian).", "The contextualized embeddings are then processed by two feedforward neural networks (FNN), creating specialized representations for potential heads and dependents, h i = FNN head ( c i ) and d i = FNN dep ( c i ) .", "The scores for each possible arc-label combination are computed by a final bilinear transformation using the tensor U .", "Its inner dimension corresponds to the number of sentiment graph labels plus a special NONE label, indicating the absence of an arc, which allows the model to predict arcs and labels jointly, score ( h i , d j ) = h (cid:62) i Ud j .", "We compare our proposed graph prediction approach with three state-of-the-art baselines 5 for extracting targets and expressions and predicting the polarity: IMN 6 , RACL 7 , as well as RACL-BERT , which also incorporates contextualized embeddings.", "Instead of using BERT Large , we use the cased BERT-multilingual-base in order to fairly compare with our own models.", "Note, however, that our model does not update the mBERT representations, putting it at a disadvantage to RACL-BERT.", "We also compare with previously reported extraction results from Barnes et al. (2018) and vrelid et al. (2020).", "As we are interested not only in extraction or classification, but rather in the full structured sentiment task, we propose metrics that capture the relations between all predicted elements, while enabling comparison with previous state-of-the-art models on different subtasks.", "The main metrics we use to rank models are Targeted F 1 and Sentiment Graph F 1 .", "5 Despite having state-of-the-art results on MPQA , we do not compare with Katiyar and Cardie (2016) as they use different dataset splits, 10-fold cross-validation, and their code is not available.", "6 IMN code available at https://github.com/ ruidan/IMN-E2E-ABSA .", "Token-level F 1 for Holders, Targets, and Expressions To easily compare our models to pipeline models, we evaluate how well these models are able to identify the elements of a sentiment graph with token-level F 1 .", "Targeted F 1 This is a common metric in targeted sentiment analysis (also referred to as F 1 -i (He et al., 2019) or ABSA F 1 (Chen and Qian, 2020)).", "A true positive requires the combination of exact extraction of the sentiment target, and the correct polarity.", "Parsing graph metrics We additionally compute graph-level metrics to determine how well the models predict the unlabeled and labeled arcs of the parsing graphs: Unlabeled F 1 ( UF 1 ), Labeled F 1 ( LF 1 ).", "These measure the amount of (in)correctly predicted arcs and labels, as the harmonic mean of precision and recall (Oepen et al., 2014).", "These metrics inform us of the local properties of the graph, and do not overly penalize a model if a few edges of a graph are incorrect.", "Sentiment graph metrics The two metrics that measure how well a model is able to capture the full sentiment graph (see Figure 1) are Non-polar Sentiment Graph F 1 ( NSF 1 ) and Sentiment Graph F 1 ( SF 1 ).", "For NSF 1 , each sentiment graph is a tuple of (holder, target, 
expression), while for SF 1 we include polarity (holder, target, expression, polar-ity).", "A true positive is defined as an exact match at graph-level, weighting the overlap in predicted and gold spans for each element, averaged across all three spans.", "For precision we weight the number of correctly predicted tokens divided by the total number of predicted tokens (for recall, we divide instead by the number of gold tokens).", "We allow for empty holders and targets.", "All sentiment graph models use token-level mBERT representations in addition to word2vec skip-gram embeddings openly available from the NLPL vector repository 8 (Fares et al., 2017).", "We train all models for 100 epochs and keep the model that performs best regarding LF 1 on the dev set (Targeted F 1 for the baselines).", "We use default hyperparameters from Kurtz et al. (2020) (see Appendix) and run all of our models five times with different random seeds and report the mean (stan-dard deviation shown as well in Table 8 in the Appendix).", "We calculate statistical difference between the best and second best models through a bootstrap with replacement test (Berg-Kirkpatrick et al., 2012).", "As there are 5 runs, we require that 3 of 5 be statistically significant at p < 0 .", "05 .", "Table 3 shows the results for all datasets.", "On NoReC Fine , the baselines IMN, RACL, and RACL-BERT perform well at extracting targets (35.9, 45.6, and 47.2 F 1 , respectively) and expressions (48.7/55.4/56.3), but struggle with the full targeted sentiment task (18.0/20.1/30.3).", "The graph-based models extract targets better (50.1/54.8) and have comparable scores for expressions (54.4/55.5).", "The holder extraction scores have a similar range (51.1/60.4).", "These patterns hold throughout the other datasets, where the proposed graph models nearly always perform best on extracting spans, although RACL-BERT achieves the best score on extracting targets on DS Unis (44.6 vs. 42.1).", "The graph models also outperform the strongest baseline (RACL-BERT) on targeted sentiment on all 5 datasets, although this difference is often not statistically significant ( NoReC Fine Head-first, MultiB EU Head-final) and RACL-BERT is better than Head-first on DS Unis .", "Regarding the Graph metrics, the results depend highly on the dataset, with UF 1 and LF 1 ranging from 35.3/31.4 ( DS Unis Head-first) to 66.8/62.1 ( MultiB CA Head-first).", "Sentiment Graph metrics NSF 1 and SF 1 have a similar, though slightly lower range (24.5/17.7 62.0/56.8).", "The graph and sentiment graph metrics do not correlate perfectly, however, as UF 1 and LF 1 on MPQA are relatively good 8 Nordic Language Processing Laboratory vector", "repo.: http://vectors.nlpl.eu/repository/ .", "We used 300-dimensional embeddings trained on English Wikipedia and Gigaword for English (model id 18 in the repo.), and 100-dimensional embeddings trained on the 2017 CoNLL corpora for all others; Basque (id 32), Catalan (id 34), and Norwegian Bokm al (id 58).", "(40.0/36.9 and 41.4/38.0 for Head-first and Head-final, respectively), but the NSF 1 and SF 1 are poor (24.5/17.4 and 26.1/18.8).", "On average IMN is the weakest baseline, followed by RACL and then RACL-BERT.", "The main improvement that RACL-BERT gives over RACL on these datasets is seen in the Targeted metric, i.e. 
, the contextualized representations improve the polarity classification more than the extraction task.", "The proposed graph-based models are consistently the best models across the metrics and datasets.", "Regarding graph representations, the differences between Head-first and Head-final are generally quite small.", "Head-first performs better on MultiB CA and slightly better on MultiB EU , while for the others ( NoReC Fine , MPQA , and DS Unis ) Head-final is better.", "This suggests that the main benefit is the joint prediction of all spans and relationships, and that the specific graph representation matters less.", "In this section we perform a deeper analysis of the models in order to answer the research questions.", "Our two baseline graph representations, Head-first and Head-final, are crude approximations of linguistic structure.", "In syntactic and semantic dependency graphs, heads are often neither the first or last word, but rather the most salient word according to various linguistic criteria.", "First, we enrich the dependency labels to distinguish edges that are internal to a holder/target/expression span from those that are external and perform experiments by adding an in label' to non-head nodes within the graph, which we call +inlabel .", "We further inform the head selection of the parsing graphs with syntactic information in the Dep.", "edges parsing Spans Targeted Graph Sent.", "graphs, where we compute the dependency graph for each sentence 9 and set the head of each span to be the node that has an outgoing edge in the corresponding syntactic graph.", "As there can be more than one such edge, we default to the first.", "A manual inspection showed that this approach sometimes set unlikely dependency label types as heads, e.g. , punct , obl .", "Therefore, we suggest a final approach, Dep.", "labels , which filters out these unlikely heads.", "The full results are shown in Table 8 in the Appendix.", "The implementation of the graph structure has a large effect on all metrics, although the specific results depend on the dataset.", "We plot the average effect of each implementation across all datasets in Figure 3, as well as each individual dataset (Figures 48 in the Ap-pendix).", "+inlabel tends to improve results on the non-English datasets, consistently increasing target and expression extraction and targeted sentiment.", "It also generally improves the graph scores UF 1 and LF 1 on the non-English datasets.", "9 We use SpaCy (Honnibal et al., 2020) for English, Stanza (Qi et al., 2020) for Basque and Catalan and UDPipe (Straka and Strakov a, 2017) for Norwegian.", "Dep.", "edges has the strongest positive effect on the NSF 1 and SF 1 (an avg.", "2.52 and 2.22 percentage point (pp) over Head-final, respectively).", "However, this average is pulled down by poorer performance on the English datasets.", "Removing these two, the average benefit is 5.2 and 4.2 for NSF 1 and SF 1 , respectively.", "On span extraction and targeted sentiment, however, Dep.", "edges leads to poorer scores overall.", "Dep.", "labels does not lead to any consistent improvements.", "These results indicate that incorporating syntactic dependency information is particularly helpful for the full structured sentiment task, but that these benefits do not always show at a more local level, i.e. 
, span extraction.", "We hypothesize that predicting the full sentiment graph may have a larger effect on sentences with multiple targets.", "Therefore, we create a subset of the test data containing sentences with multiple targets and reevaluate Head-first, Head-final, and RACL-BERT on the target extraction task.", "Table 4 shows the number of sentences with multiple targets and the Target span extraction score for each model.", "On this subset, Head-first and Head-final outperform RACL-BERT on 9 of 10 experiments, confirming the hypothesis that the graph models improve on examples with multiple targets.", "We also perform experiments without mBERT (shown in Table 7 in the Appendix) and show the average gains (over all 6 graph setups) of including it in Table 5.", "Adding the mBERT features leads to average improvements in all experiments: for extracting spans an average gain of 4.1 pp for holders, 3.4 for targets, and 3.1 for expressions.", "For targeted sentiment there is a larger gain of 4.2 pp, while for the parsing graph metrics UF 1 and lF 1 the gains are more limited (3.3 pp/ 3.8 pp) and similarly for NSF 1 and SF 1 (3.6 pp/ 3.9 pp).", "The gains are NoReC Fine 57.0 (1.5) MultiB EU 75.7 (0.8) MultiB CA 71.7 (2.4) MPQA 38.5 (1.4) DS Unis 44.5 (2.4) Table 6: Polarity F 1 scores (unweighted and weighted) of models augmented with mBERT on the head-final setup.", "largest for the English datasets ( MPQA , DS Unis ) followed by NoReC Fine , and finally MultiB CA and MultiB EU .", "This corroborates the bias towards English and similar languages that has been found in multilingual language models (Artetxe et al., 2020; Conneau et al., 2020) and motivates the need for language-specific contextualized embeddings.", "In this section we zoom in on polarity, in order to quantify how well models perform at predicting only polarity.", "As the polarity annotations are bound to the expressions, we consider true positives to be any expression that overlaps the gold expression and has the same polarity.", "Table 6 shows that the polarity predictions are best on and MultiB CA , followed by NoReC Fine and DS Unis , and finally MPQA .", "This is likely due to the number of domains and characteristics of the data.", "NoReC Fine contains many domains and has longer expressions, while MPQA contains many highly ambiguous polar expressions, e.g. , said', asked', which have different polarity depending on the context.", "In this paper, we have proposed a dependency graph parsing approach to structured sentiment analysis and shown that these models outperform state-of-the-art sequence labeling models on five benchmark datasets.", "Using parse trees as input has shown promise for sentiment analysis in the past, either to guide a tree-based algorithm (Socher et al., 2013; Tai et al., 2015) or to create features for sentiment models (Nakagawa et al., 2010; Almeida et al., 2015).", "However, to the authors' knowledge, this is the first attempt to directly predict dependency-based sentiment graphs.", "sentiment graph parsing, either by augmenting the token-level representations with contextualized vectors from their heads in a dependency tree (Kurtz et al., 2020) or by multi-task learning to dependency parse.", "We would also like to explore different graph parsing approaches, e.g. 
, PERIN (Samuel and Straka, 2020).", "This work has been carried out as part of the SANT project (Sentiment Analysis for Norwegian Text), funded by the Research Council of Norway (grant number 270908).", "The computations were performed on resources provided by UNINETT Sigma2 the National Infrastructure for High Performance Computing and Data Storage in Norway." ]
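The parser described above scores every head-dependent pair jointly with its label via score(h_i, d_j) = h_i^T U d_j, where the label dimension of U includes a special NONE label. Below is a minimal PyTorch-style sketch of that bilinear scorer under assumed dimensions; the class and variable names are ours for illustration, not taken from the paper's released code.

```python
import torch
import torch.nn as nn

class BiaffineScorer(nn.Module):
    """Scores every (head, dependent) token pair for every label; one label
    slot is reserved for NONE (no arc), so arcs and labels are predicted
    jointly, as described in the text above."""
    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        # FNN_head / FNN_dep: specialized head and dependent representations
        self.head_mlp = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        self.dep_mlp = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        # U: (hidden, labels, hidden); its inner dimension is the label count
        self.U = nn.Parameter(torch.randn(hidden_dim, num_labels, hidden_dim) * 0.01)

    def forward(self, c: torch.Tensor) -> torch.Tensor:  # c: (batch, seq, hidden)
        h = self.head_mlp(c)   # h_i = FNN_head(c_i)
        d = self.dep_mlp(c)    # d_j = FNN_dep(c_j)
        # scores[b, i, l, j] = h[b, i] . U[:, l, :] . d[b, j]
        return torch.einsum("bih,hlk,bjk->bilj", h, self.U, d)

scores = BiaffineScorer(hidden_dim=256, num_labels=8)(torch.randn(2, 10, 256))
print(scores.shape)  # torch.Size([2, 10, 8, 10])
```

At prediction time, every (i, j) pair whose highest-scoring label is not NONE yields an arc with that label, which is one way to realize the joint arc-and-label prediction described above.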
[ "abstain", "objective", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "result", "objective", "abstain", "method", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "We introduce a new task, M ulti M edia E vent E xtraction (M 2 E 2 ), which aims to extract events and their arguments from multimedia documents.", "We develop the first benchmark and collect a dataset of 245 multimedia news articles with extensively annotated events and arguments.", "1 We propose a novel method, W eakly A ligned S tructured E mbedding ( WASE ), that encodes structured representations of semantic information from textual and visual data into a common embedding space.", "The structures are aligned across modalities by employing a weakly supervised training strategy, which enables exploiting available resources without explicit cross-media annotation.", "Compared to unimodal state-of-the-art methods, our approach achieves 4.0% and 9.8% absolute F-score gains on text event argument role labeling and visual event extraction.", "Compared to state-of-the-art multimedia unstructured representations, we achieve 8.3% and 5.0% absolute F-score gains on multimedia event extraction and argument role labeling, respectively.", "By utilizing images, we extract 21.4% more event mentions than traditional text-only methods.", "Traditional event extraction methods target a single modality, such as text (Wadden et al., 2019), images (Yatskar et al., 2016) or videos (Ye et al., 2015; Caba Heilbron et al., 2015; Soomro et al., 2012).", "However, the practice of contemporary journalism (Stephens, 1998) distributes news via multimedia.", "By randomly sampling 100 multimedia news articles from the Voice of America (VOA), we find that 33% of images in the articles contain visual objects that serve as event arguments and are not mentioned in the text.", "Take These authors contributed equally to this work.", "Figure 1 as an example, we can extract the Agent and Person arguments of the Movement.Transport event from text, but can extract the Vehicle argument only from the image.", "Nevertheless, event extraction is independently studied in Computer Vision (CV) and Natural Language Processing (NLP), with major differences in task definition, data domain, methodology, and terminology.", "Motivated by the complementary and holistic na-ture of multimedia data, we propose M ulti M edia E vent E xtraction ( M 2 E 2 ), a new task that aims to jointly extract events and arguments from multiple modalities.", "We construct the first benchmark and evaluation dataset for this task, which consists of 245 fully annotated news articles.", "We propose the first method, W eakly A ligned S tructured E mbedding ( WASE ), for extracting events and arguments from multiple modalities.", "Complex event structures have not been covered by existing multimedia representation methods (Wu et al., 2019b; Faghri et al., 2017; Karpathy and Fei-Fei, 2015), so we propose to learn a structured multimedia embedding space.", "More specifically, given a multimedia document, we represent each image or sentence as a graph, where each node represents an event or entity and each edge represents an argument role.", "The node and edge embeddings are represented in a multimedia common semantic space, as they are trained to resolve event co-reference across modalities and to match images with relevant sentences.", "This enables us to jointly classify events and argument roles from both modalities.", "A major challenge is the lack of multimedia event argument annotations, which are costly to obtain due to the annotation complexity.", "Therefore, we propose a weakly supervised framework, which takes advantage of annotated uni-modal corpora to 
separately learn visual and textual event extraction, and uses an image-caption dataset to align the modalities.", "We evaluate WASE on the new task of M 2 E 2 .", "Compared to the state-of-the-art uni-modal methods and multimedia flat representations, our method significantly outperforms on both event extraction and argument role labeling tasks in all settings.", "Moreover, it extracts 21.4% more event mentions than text-only baselines.", "The training and evaluation are done on heterogeneous data sets from multiple sources, domains and data modalities, demonstrating the scalability and transferability of the proposed model.", "In summary, this paper makes the following contributions: We propose a new task, MultiMedia Event Extraction, and construct the first annotated news dataset as a benchmark to support deep analysis of cross-media events.", "We develop a weakly supervised training framework, which utilizes existing single-modal annotated corpora, and enables joint inference without cross-modal annotation.", "Our proposed method, WASE, is the first to leverage structured representations and graph-based neural networks for multimedia common space embedding.", "Each input document consists of a set of images M = { m 1 , m 2 , . . . } and a set of sentences S = { s 1 , s 2 , . . . } .", "Each sentence s can be represented as a sequence of tokens s = ( w 1 , w 2 , . . . ) , where w i is a token from the document vocabulary W .", "The input also includes a set of entities T = { t 1 , t 2 , . . . } extracted from the document text.", "An entity is an individually unique object in the real world, such as a person, an organization, a facility, a location, a geopolitical entity, a weapon, or a vehicle.", "The objective of M 2 E 2 is twofold: Event Extraction : Given a multimedia document, extract a set of event mentions, where each event mention e has a type y e and is grounded on a text trigger word w or an image m or both, i.e., e = ( y e , { w, m } ) .", "Note that for an event, w and m can both exist, which means the visual event mention and the textual event mention refer to the same event.", "For example in Figure 1, deploy indicates the same Movement.Transport event as the image.", "We consider the event e as text-only event if it only has textual mention w , and as image-only event if it only contains visual mention m , and as multimedia event if both w and m exist.", "Argument Extraction : The second task is to extract a set of arguments of event mention e .", "Each argument a has an argument role type y a , and is grounded on a text entity t or an image object o (represented as a bounding box), or both, a = ( y a , { t, o } ) .", "The arguments of visual and textual event mentions are merged if they refer to the same real-world event, as shown in Figure", "1. 2.2 The M 2 E 2 Dataset We define multimedia newsworthy event types by exhaustively mapping between the event ontology in NLP community for the news domain (ACE 2 ) and the event ontology in CV community for general domain (imSitu (Yatskar et al., 2016)).", "They cover the largest event training resources in each community.", "Table 1 shows the selected complete intersection, which contains 8 ACE types (i.e., 24% of all ACE types), mapped to 98 imSitu types (i.e., 20% of all imSitu types).", "We expand the ACE event role set by adding visual arguments from imSitu, such as instrument , bolded in Table", "1. 
This set encompasses 52% ACE events in a news corpus, which indicates that the selected eight types are salient in the news domain.", "We reuse these existing ontologies because they enable us to train event and argument classifiers for both modalities without requiring joint multimedia event annotation as training data.", "2 https://catalog.ldc.upenn.edu/ldc2006T06 Event Type Argument Role Movement.Transport(223 | 53) Agent (46 | 64), Artifact (179 | 103), Vehicle (24 | 51), Destination (120 | 0), Origin (66 | 0) Conflict.Attack(326 | 27) Attacker (192 | 12), Target (207 | 19), Instrument (37 | 15), Place (121 | 0) Conflict.Demonstrate(151 | 69) Entity (102 | 184), Police (3 | 26), Instrument (0 | 118), Place (86 | 25) Justice.ArrestJail(160 | 56) Agent (64 | 119), Person (147 | 99), Instrument (0 | 11), Place (43 | 0) Contact.PhoneWrite(33 | 37) Entity (33 | 46), Instrument (0 | 43), Place (8 | 0) Contact.Meet (127 | 79) Participant (119 | 321), Place (68 | 0) Life.Die(244 | 64) Agent (39 | 0), Instrument (4 | 2), Victim (165 | 155), Place (54 | 0) Transaction.TransferMoney (33 | 6) Giver (19 | 3), Recipient (19 | 5), Money (0 | 8) Table 1: Event types and argument roles in M 2 E 2 , with expanded ones in bold.", "We collect 108,693 multimedia news articles from the Voice of America (VOA) website 3 2006-2017, covering a wide range of newsworthy top-ics such as military, economy and health.", "We select 245 documents as the annotation set based on three criteria: (1) Informativeness: articles with more event mentions; (2) Illustration: articles with more images ( > 4 ); (3) Diversity: articles that balance the event type distribution regardless of true frequency.", "The data statistics are shown in Table", "2. Among all of these events, 192 textual event mentions and 203 visual event mentions can be aligned as 309 cross-media event mention pairs.", "The dataset can be divided into 1,105 text-only event mentions, 188 image-only event mentions, and 395 multimedia event mentions.", "We follow the ACE event annotation guidelines (Walker et al., 2006) for textual event and argument annotation, and design an annotation guideline 4 for multimedia events annotation.", "One unique challenge in multimedia event annotation is to localize visual arguments in complex scenarios, where images include a crowd of people or a group of object.", "It is hard to delineate 3 https://www.voanews.com/ 4 http://blender.cs.illinois.edu/software/ m2e2/ACL2020_M2E2_annotation.pdf Figure 2: Example of bounding boxes.", "each of them using a bounding box.", "To solve this problem, we define two types of bounding boxes: (1) union bounding box : for each role, we annotate the smallest bounding box covering all constituents; and (2) instance bounding box : for each role, we annotate a set of bounding boxes, where each box is the smallest region that covers an individual participant (e.g., one person in the crowd), following the VOC2011 Annotation Guidelines 5 .", "Figure 2 shows an example.", "Eight NLP and CV researchers complete the annotation work with two independent passes and reach an Inter-Annotator Agreement (IAA) of 81.2%.", "Two expert annotators perform adjudication.", "As shown in Figure 3, the training phase contains three tasks: text event extraction (Section 3.2), visual situation recognition (Section 3.3), and cross-media alignment (Section 3.4).", "We learn a cross-media shared encoder, a shared event classifier, and a shared argument classifier.", "In the testing phase (Section 3.5), given a multimedia 
"As shown in Figure 3, the training phase contains three tasks: text event extraction (Section 3.2), visual situation recognition (Section 3.3), and cross-media alignment (Section 3.4).", "We learn a cross-media shared encoder, a shared event classifier, and a shared argument classifier.", "In the testing phase (Section 3.5), given a multimedia news article, we encode the sentences and images into the structured common space, and jointly extract textual and visual events and arguments, followed by cross-modal coreference resolution.", "Text Structured Representation: As shown in Figure 4, we choose Abstract Meaning Representation (AMR) (Banarescu et al., 2013) to represent text because it includes a rich set of 150 fine-grained semantic roles.", "To encode each text sentence, we run the CAMR parser (Wang et al., 2015b,a, 2016) to generate an AMR graph, based on the named entity recognition and part-of-speech (POS) tagging results from Stanford CoreNLP (Manning et al., 2014).", "To represent each word w in a sentence s, we concatenate its pre-trained GloVe word embedding (Pennington et al., 2014), POS embedding, entity type embedding and position embedding.", "(Footnote 5: http://host.robots.ox.ac.uk/pascal/VOC/voc2011/guidelines.html)", "We then input the word sequence to a bi-directional long short-term memory (Bi-LSTM) (Graves et al., 2013) network to encode the word order and get the representation of each word w.", "Given the AMR graph, we apply a Graph Convolutional Network (GCN) (Kipf and Welling, 2016) to encode the graph contextual information following (Liu et al., 2018a): $w_i^{(k+1)} = f\big(\sum_{j \in N(i)} g_{ij}^{(k)} (W_{E(i,j)} w_j^{(k)} + b_{E(i,j)}^{(k)})\big)$ (1), where N(i) is the set of neighbor nodes of $w_i$ in the AMR graph, E(i,j) is the edge type between $w_i$ and $w_j$, $g_{ij}$ is the gate following (Liu et al., 2018a), k is the GCN layer number, and f is the sigmoid function.", "W and b denote parameters of neural layers in this paper.", "We take the hidden states of the last GCN layer for each word as the common-space representation $w^C$, where C stands for the common (multimedia) embedding space.", "For each entity t, we obtain its representation $t^C$ by averaging the embeddings of its tokens.", "Event and Argument Classifier: We classify each word w into event types $y_e$ (footnote 6) and classify each entity t into argument role $y_a$: $P(y_e|w) = \frac{\exp(W_e w^C + b_e)}{\sum_{e'} \exp(W_{e'} w^C + b_{e'})}$, $P(y_a|t) = \frac{\exp(W_a [t^C; w^C] + b_a)}{\sum_{a'} \exp(W_{a'} [t^C; w^C] + b_{a'})}$ (2).", "(Footnote 6: We use the BIO tag schema to decide trigger word boundaries, i.e., adding prefix B- to the type label to mark the beginning of a trigger, I- for inside, and O for none.)", "We take ground truth text entity mentions as input following (Ji and Grishman, 2008) during training, and obtain testing entity mentions using a named entity extractor (Lin et al., 2019).", "Image Structured Representation: To obtain image structures similar to AMR graphs, and inspired by situation recognition (Yatskar et al., 2016), we represent each image with a situation graph, that is, a star-shaped graph as shown in Figure 4, where the central node is labeled as a verb v (e.g., destroying), and the neighbor nodes are arguments labeled as {(n, r)}, where n is a noun (e.g., ship) derived from WordNet synsets (Miller, 1995) to indicate the entity type, and r indicates the role (e.g., item) played by the entity in the event, based on FrameNet (Fillmore et al., 2003).", "We develop two methods to construct situation graphs from images and train them using the imSitu dataset (Yatskar et al., 2016) as follows.",
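Below is a minimal PyTorch sketch of the edge-type-aware GCN layer in Equation (1). It is an illustrative reading of the formula, not the authors' released code: the dimensions, the scalar sigmoid gate, and the toy AMR edge list are all assumptions.

```python
import torch
import torch.nn as nn

class AMRGCNLayer(nn.Module):
    def __init__(self, dim: int, num_edge_types: int):
        super().__init__()
        # One weight matrix and bias per AMR edge type E(i, j).
        self.W = nn.Parameter(torch.randn(num_edge_types, dim, dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(num_edge_types, dim))
        self.gate = nn.Linear(dim, 1)  # scalar gate g_ij per edge (assumed form)

    def forward(self, w, edges):
        """w: (num_nodes, dim) word states; edges: list of (i, j, edge_type)."""
        out = torch.zeros_like(w)
        for i, j, e in edges:
            msg = w[j] @ self.W[e] + self.b[e]   # W_E(i,j) w_j + b_E(i,j)
            g = torch.sigmoid(self.gate(w[j]))   # gate g_ij
            out[i] = out[i] + g * msg            # sum over neighbors N(i)
        return torch.sigmoid(out)                # f = sigmoid

layer = AMRGCNLayer(dim=300, num_edge_types=10)
w = torch.randn(5, 300)                                # 5 words from the Bi-LSTM
h = layer(w, edges=[(0, 1, 2), (1, 0, 2), (1, 3, 4)])  # toy AMR edges
print(h.shape)  # torch.Size([5, 300])
```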
"(1) Object-based Graph: Similar to extracting entities to get candidate arguments, we employ the most similar task in CV, object detection, and obtain the object bounding boxes detected by a Faster R-CNN (Ren et al., 2015) model trained on Open Images (Kuznetsova et al., 2018) with 600 object types (classes).", "We employ a VGG-16 CNN (Simonyan and Zisserman, 2014) to extract visual features of an image m, and another VGG-16 to encode the bounding boxes $\{o_i\}$.", "Then we apply a Multi-Layer Perceptron (MLP) to predict a verb embedding from m and another MLP to predict a noun embedding for each $o_i$.", "We compare the predicted verb embedding to all verbs v in the imSitu taxonomy in order to classify the verb, and similarly compare each predicted noun embedding to all imSitu nouns n, which results in the probability distributions $P(v|m) = \frac{\exp(m \cdot v)}{\sum_{v'} \exp(m \cdot v')}$ and $P(n|o_i) = \frac{\exp(o_i \cdot n)}{\sum_{n'} \exp(o_i \cdot n')}$, where v and n are word embeddings initialized with GloVe (Pennington et al., 2014).", "We use another MLP with one hidden layer followed by Softmax $\sigma(\cdot)$ to classify the role $r_i$ for each object $o_i$: $P(r_i|o_i) = \sigma(\mathrm{MLP}_r(o_i))$.", "This yields the situation loss functions $\mathcal{L}_v = -\log P(v|m)$ and $\mathcal{L}_r = -\log(P(r_i|o_i) + P(n_i|o_i))$.", "(2) Attention-based Graph: State-of-the-art object detection methods only cover a limited set of object types, such as the 600 types defined in Open Images.", "Many salient objects such as bomb, stone and stretcher are not covered in these ontologies.", "Hence, we propose an open-vocabulary alternative to the object-based graph construction model.", "To this end, we construct a role-driven attention graph, where each argument node is derived by a spatially distributed attention (heatmap) conditioned on a role r.", "More specifically, we use a VGG-16 CNN to extract a 7x7 convolutional feature map for each image m, which can be regarded as attention keys $k_i$ for the 7x7 local regions.", "Next, for each role r defined in the situation recognition ontology (e.g., agent), we build an attention query vector $q_r$ by concatenating the role embedding r with the image feature m as context and applying a fully connected layer: $q_r = W_q [r; m] + b_q$.", "Then, we compute the dot product of each query with all keys, followed by Softmax, which forms a heatmap h on the image, i.e., $h_i = \frac{\exp(q_r \cdot k_i)}{\sum_{j \in 7 \times 7} \exp(q_r \cdot k_j)}$.", "We use the heatmap to obtain a weighted average of the feature map to represent the argument $o_r$ of each role r in the visual space: $o_r = \sum_i h_i m_i$.", "Similar to the object-based model, we embed $o_r$, compare it to the imSitu noun embeddings to define a distribution, and define a classification loss function.", "The verb embedding m, the verb prediction probability P(v|m) and its loss are defined in the same way as in the object-based method.", "Event and Argument Classifier: We use either the object-based or attention-based formulation and pre-train it on the imSitu dataset (Yatskar et al., 2016).", "Then we apply a GCN to obtain the structured embedding of each node in the common space, similar to Equation 1.",
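The role-driven attention above can be sketched in a few lines of PyTorch; the feature dimensions and random inputs below are placeholders standing in for the VGG-16 features and the learned role embeddings, so this is an illustrative sketch rather than the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 512
feature_map = torch.randn(7 * 7, dim)   # keys k_i: 49 local regions of image m
image_feat = torch.randn(dim)           # global image feature m
role_emb = torch.randn(dim)             # embedding of role r (e.g., "agent")

query_proj = nn.Linear(2 * dim, dim)    # q_r = W_q [r; m] + b_q
q_r = query_proj(torch.cat([role_emb, image_feat]))

scores = feature_map @ q_r              # dot product of the query with all keys
h = F.softmax(scores, dim=0)            # heatmap over the 7x7 grid
o_r = (h.unsqueeze(1) * feature_map).sum(dim=0)  # o_r = sum_i h_i m_i

print(h.view(7, 7).shape, o_r.shape)    # torch.Size([7, 7]) torch.Size([512])
```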
"This yields $m^C$ and $o_i^C$.", "We use the same classifiers as defined in Equation 2 to classify each visual event and argument using the common space embedding: $P(y_e|m) = \frac{\exp(W_e m^C + b_e)}{\sum_{e'} \exp(W_{e'} m^C + b_{e'})}$, $P(y_a|o) = \frac{\exp(W_a [o^C; m^C] + b_a)}{\sum_{a'} \exp(W_{a'} [o^C; m^C] + b_{a'})}$ (3).", "3.4 Cross-Media Joint Training: In order to make the event and argument classifiers shared across modalities, the image and text graphs should be encoded into the same space.", "However, it is extremely costly to obtain parallel text and image event annotation.", "Hence, we use event and argument annotations in separate modalities (i.e., the ACE and imSitu datasets) to train the classifiers, and simultaneously use VOA news image and caption pairs to align the two modalities.", "To this end, we learn to embed the nodes of each image graph close to the nodes of the corresponding caption graph, and far from those in irrelevant caption graphs.", "Since there is no ground truth alignment between the image nodes and caption nodes, we use image and caption pairs for weakly supervised training, to learn a soft alignment from words to image objects and vice versa.", "We compute alignment weights $\alpha_{ij}$ from the common-space similarity between $w_i^C$ and $o_j^C$, where $w_i$ indicates the i-th word in caption sentence s and $o_j$ represents the j-th object of image m.", "Then, we compute a weighted average of the softly aligned nodes for each node in the other modality, i.e., $\hat{w}_i = \sum_j \alpha_{ij} o_j^C$ and $\hat{o}_j = \sum_i \alpha_{ji} w_i^C$ (4).", "We use a triplet loss $\mathcal{L}_c$ to pull relevant image-caption pairs close while pushing irrelevant ones apart, where the negative is a randomly sampled image that does not match s.", "Note that in order to learn the alignment between the image and the trigger word, we treat the image as a special object when learning cross-media alignment.", "The common space enables the event and argument classifiers to share weights across modalities and be trained jointly on the ACE and imSitu datasets, by minimizing the following objective functions: $\mathcal{L}_e = -\sum_w \log P(y_e|w) - \sum_m \log P(y_e|m)$, $\mathcal{L}_a = -\sum_t \log P(y_a|t) - \sum_o \log P(y_a|o)$.", "All tasks are jointly optimized: $\mathcal{L} = \mathcal{L}_v + \mathcal{L}_r + \mathcal{L}_e + \mathcal{L}_a + \mathcal{L}_c$.", "3.5 Cross-Media Joint Inference: In the test phase, our method takes a multimedia document with sentences $S = \{s_1, s_2, \ldots\}$ and images $M = \{m_1, m_2, \ldots\}$ as input.", "We first generate the structured common embedding for each sentence and each image, and then compute pairwise similarities $\langle s, m \rangle$.", "We pair each sentence s with the closest image m, and aggregate the features of each word of s with the aligned representation from m by weighted averaging: $\tilde{w}_i = (1 - \beta) w_i + \beta \hat{w}_i$ (5), where the weight $\beta$ is derived from $\exp(\langle s, m \rangle)$ and $\hat{w}_i$ is derived from m using Equation 4.", "We then classify each word into an event type and each entity into a role with the multimedia classifiers in Equation 2.",
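To make the weakly supervised alignment concrete, here is a minimal PyTorch sketch; the cosine-based sentence-image score, the margin value, and the use of a single negative per pair are assumptions, since the exact triplet formulation is not reproduced in the text above.

```python
import torch
import torch.nn.functional as F

def soft_align(w_C, o_C):
    """w_C: (n_words, d), o_C: (n_objects, d) common-space embeddings."""
    alpha = F.softmax(w_C @ o_C.t(), dim=1)  # alignment weights alpha_ij per word
    return alpha @ o_C                        # \hat{w}_i = sum_j alpha_ij o_j^C

def pair_score(w_C, o_C):
    """Similarity <s, m> of a sentence and an image (assumed: mean cosine)."""
    w_hat = soft_align(w_C, o_C)
    return F.cosine_similarity(w_C, w_hat, dim=1).mean()

def triplet_loss(w_C, o_pos, o_neg, margin=0.2):
    # Pull the matching caption/image pair together, push a sampled negative apart.
    return F.relu(margin - pair_score(w_C, o_pos) + pair_score(w_C, o_neg))

w_C = torch.randn(6, 300)     # caption with 6 words
o_pos = torch.randn(4, 300)   # objects of the paired image
o_neg = torch.randn(4, 300)   # objects of a randomly sampled negative image
print(triplet_loss(w_C, o_pos, o_neg))
```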
"To this end, we define $\tilde{t}_i$ similarly to $\tilde{w}_i$, but using $t_i$ and $\hat{t}_i$.", "Similarly, for each image m we find the closest sentence s, compute the aggregated multimedia features $\tilde{m}$ and $\tilde{o}_i$, and feed them into the shared classifiers (Equation 3) to predict visual event and argument roles.", "Finally, we corefer the cross-media events of the same event type if the similarity $\langle s, m \rangle$ is higher than a threshold.", "Evaluation Metrics: We conduct evaluation on the text-only, image-only, and multimedia event mentions in the M2E2 dataset described in Section 2.2.", "We adopt the traditional event extraction measures, i.e., Precision, Recall and F1.", "For text-only event mentions, we follow (Ji and Grishman, 2008; Li et al., 2013): a textual event mention is correct if its event type and trigger offsets match a reference trigger; and a textual event argument is correct if its event type, offsets, and role label match a reference argument.", "We make a similar definition for image-only event mentions: a visual event mention is correct if its event type and image match a reference visual event mention; and a visual event argument is correct if its event type, localization, and role label match a reference argument.", "A visual argument is correctly localized if the Intersection over Union (IoU) of the predicted bounding box with the ground truth bounding box is over 0.5.", "Finally, we define a multimedia event mention to be correct if its event type and trigger offsets (or the image) match the reference trigger (or the reference image).", "The arguments of multimedia events are either textual or visual arguments, and are evaluated accordingly.", "To generate bounding boxes for the attention-based model, we threshold the heatmap using the adaptive value 0.75p, where p is the peak value of the heatmap.", "Then we compute the tightest bounding box that encloses all of the thresholded region.", "Examples are shown in Figure 7 and Figure 8.", "Baselines: The baselines include: (1) Text-only models: We use the state-of-the-art models JMEE (Liu et al., 2018a) and GAIL (Zhang et al., 2019) for comparison.", "We also evaluate the effectiveness of cross-media joint training by including a version of our model trained only on ACE, denoted as WASE-T.", "(2) Image-only models: Since we are the first to extract newsworthy events from images, and the most similar work, situation recognition, cannot localize arguments in images, we use versions of our model trained only on the image corpus as baselines.", "Our visual branch has two versions, object-based and attention-based, denoted as WASE-I-obj and WASE-I-att.", "(3) Multimedia models: To show the effectiveness of structured embedding, we include a baseline obtained by removing the text and image GCNs from our model, denoted as Flat.", "The Flat baseline ignores edges and treats images and sentences as sets of vectors.", "We also compare to the state-of-the-art cross-media common representation model, Contrastive Visual Semantic Embedding (VSE-C) (Shi et al., 2018), by training it the same way as WASE.", "Parameter Settings: The common space dimension is 300.", "The dimension is 512 for the image position embedding and feature map, and 50 for the word position embedding, entity type embedding, and POS tag embedding.", "The number of GCN layers is 3.", "As shown in Table 3, our complete methods (WASE-att and WASE-obj) outperform all baselines in the three evaluation settings in terms of F1.", "The comparison with other multimedia models demonstrates the effectiveness of our model architecture and training strategy.",
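The localization criterion and the heatmap-to-box procedure above are simple to state in code; this sketch assumes (x1, y1, x2, y2) boxes on a pixel grid and is not the authors' evaluation script.

```python
import numpy as np

def iou(a, b):
    """a, b: (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter)

def heatmap_to_box(h, ratio=0.75):
    """Threshold heatmap h at ratio * peak, return the tightest enclosing box."""
    ys, xs = np.where(h >= ratio * h.max())
    return (xs.min(), ys.min(), xs.max() + 1, ys.max() + 1)

h = np.zeros((7, 7)); h[2:5, 3:6] = 1.0       # toy attention heatmap
pred = heatmap_to_box(h)
print(pred, iou(pred, (3, 2, 6, 5)) > 0.5)    # (3, 2, 6, 5) True: correctly localized
```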
"The advantage of structured embedding is shown by the better performance over the flat baseline.", "Our model outperforms its text-only and image-only variants on multimedia events, showing the inadequacy of single-modal information for complex news understanding.", "Furthermore, our model achieves better performance on text-only and image-only events, which demonstrates the effectiveness of multimedia training framework in knowledge transfer between modalities.", "WASE obj and WASE att , are both superior to the state of the art and each has its own advantages.", "WASE obj predicts more accurate bounding boxes since it is based on a Faster R-CNN pretrained on bounding box annotations, resulting in a higher argument precision.", "While WASE att achieves a higher argument recall as it is not limited by the predefined object classes of the Faster R-CNN.", "Furthermore, to evaluate the cross-media event coreference performance, we pair textual and visual event mentions in the same document, and calculate Precision , Recall and F 1 to compare with ground truth event mention pairs 7 .", "As shown in Table 4, WASE obj outperforms all multimedia embedding models, as well as the rule-based baseline using event type matching.", "This demonstrates the effectiveness of our cross-media soft alignment.", "Our cross-media joint training approach successfully boosts both event extraction and argument role labeling performance.", "For example, in Figure 5", "(a), the text-only model can not extract Jus-7 We do not use coreference clustering metrics because we only focus on mention-level cross-media event coreference instead of the full coreference in all documents.", "tice.Arrest event, but the joint model can use the image as background to detect the event type.", "In Figure 5", "(b), the image-only model detects the image as Conflict.Demonstration , but the sentences in the same document help our model not to label it as Conflict.Demonstration .", "Compared with multimedia flat embedding in Figure 6, WASE can learn structures such as Artifact is on top of Vehicle , and the person in the middle of Justice.Arrest is Entity instead of Agent .", "One of the biggest challenges in M 2 E 2 is localizing arguments in images.", "Object-based models suffer from the limited object types.", "Attention-based method is not able to precisely localize the objects for each argument, since there is no supervision on attention extraction during training.", "For example, in Figure 7, the Entity argument in the Conflict.Demonstrate event is correctly predicted as troops , but its localization is incorrect because Place argument share similar attention.", "When one argument targets at too many instances, attention heatmaps tend to lose focus and cover the whole image, as shown in Figure 8.", "Text Event Extraction Text event extraction has been extensively studied for general news do-Entity:", "main (Ji and Grishman, 2008; Liao and Grishman, 2011; Huang and Riloff, 2012; Li et al., 2013; Chen et al., 2015; Nguyen et al., 2016; Hong et al., 2018; Liu et al., 2018b; Chen et al., 2018; Zhang et al., 2019; Liu et al., 2018a; Wang et al., 2019; Yang et al., 2019; Wadden et al., 2019).", "Multimedia features has been proven to effectively improve text event extraction (Zhang et al., 2017).", "Visual Event Extraction Events in NLP usually refer to complex events that involve multiple entities in a large span of time (e.g. 
protest), while in CV (Chang et al., 2016; Zhang et al., 2007; Ma et al., 2017) events are less complex single-entity activities (e.g. washing dishes) or actions (e.g. jumping).", "Visual event ontologies focus on daily life domains, such as dog show and wedding ceremony (Perera et al., 2012).", "Moreover, most efforts ignore the structure of events, including their arguments.", "There are a few methods that aim to localize the agent (Gu et al., 2018; Li et al., 2018; Duarte et al., 2018) or classify the recipient (Sigurdsson et al., 2016; Kato et al., 2018; Wu et al., 2019a) of events, but none of them detects the complete set of arguments for an event.", "The most similar to our work is Situation Recognition (SR) (Yatskar et al., 2016; Mallya and Lazebnik, 2017), which predicts an event and multiple arguments from an input image, but does not localize the arguments.", "We use SR as an auxiliary task for training our visual branch, but exploit object detection and attention to enable localization of arguments.", "Silberer and Pinkal redefine the problem of visual argument role labeling with event types and bounding boxes as input.", "Different from their work, we extend the problem scope to include event identification and coreference, and further advance argument localization by proposing an attention framework which does not require bounding boxes for training or testing.", "Multimedia Representation: Multimedia common representation has attracted much attention recently (Toselli et al., 2007; Weegar et al., 2015; Hewitt et al., 2018; Chen et al., 2019; Liu et al., 2019; Su et al., 2019a; Sarafianos et al., 2019; Sun et al., 2019b; Tan and Bansal, 2019; Li et al., 2019a,b; Lu et al., 2019; Sun et al., 2019a; Rahman et al., 2019; Su et al., 2019b).", "However, previous methods focus on aligning images with their captions, or regions with words and entities, but ignore structure and semantic roles.", "UniVSE (Wu et al., 2019b) incorporates entity attributes and relations into cross-media alignment, but does not capture graph-level structures of images or text.", "In this paper we propose a new task of multimedia event extraction and set up a new benchmark.", "We also develop a novel multimedia structured common space construction method to take advantage of existing image-caption pairs and single-modal annotated data for weakly supervised training.", "Experiments demonstrate its effectiveness as a new step towards semantic understanding of events in multimedia data.", "In the future, we aim to extend our framework to extract events from videos and to make it scalable to new event types.", "We plan to expand our annotations by including event types from other text event ontologies, as well as new event types not in existing text ontologies.", "We will also apply our extraction results to downstream applications including cross-media event inference, timeline generation, etc.", "This research is based upon work supported in part by U.S. DARPA AIDA Program No. FA8750-18-2-0014 and U.S. DARPA KAIROS Program No. FA8750-19-2-1004.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein." ]
[ "objective", "objective", "objective", "abstain", "result", "result", "method", "abstain", "abstain", "result", "objective", "method", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "other", "abstain", "method", "method", "method", "method", "method", "method", "other", "method", "method", "method", "method", "abstain", "method", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "method", "other", "objective", "other", "other", "other", "objective", "objective", "abstain", "objective", "objective", "result", "other", "other", "other", "other", "other" ]
[ "The largest store of continually updating knowledge on our planet can be accessed via internet search.", "In this work we study giving access to this information to conversational agents.", "Large language models, even though they store an impressive amount of knowledge within their weights, are known to hallucinate facts when generating dialogue (Shuster et al., 2021); moreover, those facts are frozen in time at the point of model training.", "In contrast, we propose an approach that learns to generate an internet search query based on the context, and then conditions on the search results to finally generate a response, a method that can employ up-to-the-minute relevant information.", "We train and evaluate such models on a newly collected dataset of human-human conversations whereby one of the speakers is given access to internet search during knowledge-driven discussions in order to ground their responses.", "We find that search-query based access of the internet in conversation provides superior performance compared to existing approaches that either use no augmentation or FAISS-based retrieval (Lewis et al., 2020b).", "Open-domain dialogue, which involves chat about any topic rather than a specific goal-directed topic, is commonly studied by training large language models (Adiwardana et al., 2020; Zhang et al., 2020; Roller et al., 2021).", "These models are trained either in an encoder-decoder or decoder-only setting on large datasets of human-human conversations, and any knowledge obtained during training is stored in the weights of the model.", "Such static language modeling fails to take into account the dynamic state of the world, where new information is coming in by the day or even by the minute: the knowledge in static models is gleaned from the point in time when the dataset was collected, and then frozen into the model that is trained; see (Lazaridou et al., 2021) for criticisms of this approach.", "(Figure 1: Cherry-picked example of a model with internet augmentation trained on our new Wizard of the Internet task.)", "Further, static language models are known to hallucinate, that is, they generate plausible-looking statements that are factually incorrect, which can be interpreted as a form of lossy compression when employing training to encode that knowledge within the weights of a neural network; see Shuster et al.
(2021) for an in-depth study.", "In this work we study generative models that are instead capable of accessing the vast knowledge of the internet dynamically in order to inform their responses.", "Utilizing encoder-decoder architectures, we consider models that, given a dialogue context, first generate a search query.", "The queries are then used to retrieve relevant knowledge that is prepended to the conversational history, which is encoded using the Fusion-in-Decoder method (Izacard and Grave, 2021).", "Taking into account this encoded knowledge, a response is finally generated using the decoder.", "This ability to access the internet means the model is always up-to-date, unlike existing models that only know about facts in their fixed training set.", "Our model, in contrast, can potentially make use of the latest sports scores, movies or TV shows that were just released, the latest reviews, and so forth, amongst the countless other topics available on the internet.", "In order to train and evaluate such models, we collect a new crowdsourced English dataset involving human-human conversations, where one of the workers plays the role of a wizard who conducts internet searches in order to inform their responses during knowledge-grounded conversations.", "We show that internet-augmented models trained to replace the human wizard outperform conventional non-augmented generation models on this task, as measured by automatic metrics as well as human evaluations, and that our search query generation based approach also outperforms existing retrieval-augmented FAISS-based approaches such as RAG (Lewis et al., 2020b) and FiD-RAG (Shuster et al., 2021).", "We make our final models and the new task we have collected publicly available (footnote 1).", "The majority of work on dialogue generation has focused on training on natural or crowdsourced data where the task is, given a dialogue context (history), to generate the next response.", "Datasets such as pushshift.io Reddit (Baumgartner et al., 2020), PersonaChat (Zhang et al., 2018) or Empathetic Dialogues (Rashkin et al., 2019) (see Huang et al.
(2020) for a review) are typically employed to train the weights of a Transformer encoder-decoder.", "This is the standard approach in state-of-the-art chatbots such as Meena (Adiwardana et al., 2020) or BlenderBot (Roller et al., 2021).", "Such models do not augment their generations with access to external knowledge, instead relying on facts originally provided in the training datasets themselves being stored in the weights of the model.", "A growing body of work instead studies augmenting generative models with external knowledge.", "Earlier works such as Memory Networks (Weston et al., 2015) and DrQA (Chen et al., 2017) utilized TFIDF-based retrieval over documents to provide additional input to neural models for the task of question answering, following the well studied area of non-neural methods that use retrieval for QA (Voorhees and Tice, 2000).", "More recently, the RAG (Retrieval-Augmented Generation) (Lewis et al., 2020b) and FiD (Fusion-in-Decoder) (Izacard and Grave, 2021) models developed these ideas further, using a neural retriever as well, with superior results.", "Retrieval augmentation is also studied in the area of language modeling, where it is used for pre-training (Guu et al., 2020) and as a memory (Yogatama et al., 2021), especially using k-nearest-neighbor-based cache models (Khandelwal et al., 2021, 2020; Grave et al., 2017; Merity et al., 2017).", "In dialogue, knowledge grounding is becoming a more popular area, with several datasets developed to study it (Zhou et al., 2018; Dinan et al., 2019; Ghazvininejad et al., 2018; Gopalakrishnan et al., 2019; Galetzka et al., 2020).", "Some of these, such as Topical-Chat (Gopalakrishnan et al., 2019) and CMU_Dog (Zhou et al., 2018), are constructed given a gold passage of knowledge, and the task analyzes whether the model can use this knowledge in dialogue.", "Other works (Zhao et al., 2020; Kim et al., 2020; Bruyn et al., 2020) study whether knowledge selection is possible from a (small) set of knowledge.", "However, a retrieval step (or search engine) is not used, as we consider here.", "Perhaps the closest to our work is the Wizard of Wikipedia task (Dinan et al., 2019), which involves conversations grounded in Wikipedia, using a TFIDF retrieval model to find relevant knowledge from that database.", "Our work can be seen as a much richer task, covering all of the information that is publicly available on the internet and hence a more diverse range of conversational topics rather than just Wikipedia, while allowing human wizards to search for relevant knowledge themselves.", "Moreover, we consider sophisticated neural-in-the-loop retrieval mechanisms and real search engines.", "Shuster et al. (2021) studied neural-retriever-in-the-loop methods on this dataset.", "We consider two methods of accessing knowledge: (i) using a nearest-neighbor database, FAISS (Johnson et al., 2019), or", "(ii) using an Internet Search Engine directly to retrieve pages.", "For the FAISS-based methods, there are a number of possible variants that we consider, which we will describe first.", "In our experiments, the FAISS-based methods share the same core setup.", "First, we store and utilize the Common Crawl dump of the internet from Wenzek et al.
(2020) (footnote 2) in a FAISS database, with keys that are dense vectors.", "The retrieval system uses a DPR (Dense Passage Retrieval) (Karpukhin et al., 2020) Transformer-based model, which scores document-context pairs in order to rank them based on their match using a bi-encoder framework, where the base DPR model is pre-trained on QA data pairs.", "We use the pre-trained DPR model from the KILT Benchmark (Petroni et al., 2021).", "The documents (webpages) are encoded using DPR into dense vectors, and these are stored in the FAISS index.", "During dialogue-based retrieval, the dialogue context is also encoded by DPR into a dense vector, and FAISS approximate nearest-neighbor lookup is performed, where the top N documents are returned.", "We then consider several recent neural methods for utilizing this retrieval mechanism in various ways.", "RAG (Retrieval Augmented Generation): RAG (Lewis et al., 2020b) is an approach which consists of two components which are trained end-to-end: (i) the neural-in-the-loop retrieval system; and (ii) an encoder-decoder for generating final responses given the results of the retrieval.", "Using DPR, the top N documents are returned as described above, and in the RAG-Token model (just called RAG in the rest of the paper) each in turn is encoded along with the context for each token, and the most likely sequence is generated from the set.", "During backpropagation training steps, the DPR context encoder is also tuned to perform well at FAISS retrieval, but the document encodings are held fixed.", "This approach has been shown to optimize both retrieval and generation jointly, improving results.", "(Footnote 2: We use the November 2020 dump, head only, consisting of 109M English webpages. Each document is split into 100-word chunks, giving 250M passages to index in FAISS. We also consider the dump of Wikipedia from (Karpukhin et al., 2020) in this work.)",
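The core DPR + FAISS setup described above can be sketched as follows; random vectors stand in for the DPR bi-encoder outputs, and the flat inner-product index stands in for whatever index type was actually deployed (both assumptions).

```python
import faiss
import numpy as np

d = 768                                       # DPR embedding dimension
passages = ["passage %d ..." % i for i in range(1000)]
doc_vecs = np.random.rand(len(passages), d).astype("float32")  # doc encoder output

index = faiss.IndexFlatIP(d)                  # inner-product (dot) index
index.add(doc_vecs)                           # store all passage vectors

context_vec = np.random.rand(1, d).astype("float32")  # context encoder output
N = 5
scores, ids = index.search(context_vec, N)    # top-N lookup (a flat index is
                                              # exact; FAISS also offers
                                              # approximate indexes at 250M scale)
top_docs = [passages[i] for i in ids[0]]      # documents handed to RAG/FiD
print(scores.shape, len(top_docs))            # (1, 5) 5
```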
"FiD (Fusion-in-Decoder) (Izacard and Grave, 2021): In this case, the pre-trained retriever is used, i.e. DPR with FAISS, and then each of the top N documents returned is prepended to the context and encoded separately by the encoder, and finally all the results are concatenated.", "The decoder then attends to these encodings to produce a final response, so all fusion happens in the decoding stage.", "This relatively simple method was shown to outperform RAG in some cases.", "FiD-RAG: The FiD approach works well, but there is no end-to-end training of the retriever in that case, and so it relies completely on being pre-trained well, as opposed to RAG, which tunes the retrieval for generation.", "FiD-RAG, proposed in (Shuster et al., 2021), combines the two methods.", "First the retriever is trained in a RAG setup, and then FiD is used with that retriever.", "This was shown to give superior results to both RAG and FiD on dialogue tasks.", "FAISS + Search Query-based Retrieval: Instead of just encoding the context into a dense vector, in this approach an encoder-decoder is employed to generate a search query given the context.", "The search query is input into a DPR model to produce a dense vector, and is matched to documents in the FAISS index.", "Returned documents can then be used in the final response generation encoder-decoder as before.", "Any of the existing approaches (RAG, FiD or FiD-RAG) could potentially be used to fuse the DPR and generator models.", "We used the standard DPR FiD setup.", "The previously described FAISS-based approaches can take advantage of many existing methods developed for QA and dialogue tasks, as we saw, but have several disadvantages.", "First, they may be difficult to update to real-time web documents; second, there may be a limit to the number of documents storable in local FAISS deployments; and third, such methods will not take advantage of the high quality ranking that has been finely tuned in Internet Search engines over decades of use.", "We thus consider using Internet search engines directly.", "Method: Our proposed method consists of two components: 1) A search query generator: an encoder-decoder Transformer that takes in the dialogue context as input, and generates a search query.", "This is given to the black-box search engine API, and N documents are returned; 2) A FiD-style encoder-decoder model that encodes each document individually, concatenates them to the dialogue context encoding, and then finally generates the next response.", "(Table 1: Dataset statistics. Dialogues: Train 8,614, Valid 516, Test 503, Total 9,633. Utterances: Train 82,952, Valid 5,781, Test 4,932, Total 93,665.)", "We can train each of these modules separately if we have supervised data available for both tasks, the first module requiring (context, search query) pairs, and the second module requiring (context, response) pairs.", "As we will see, the data we collect in this work (detailed in Section 4) fulfills both of these requirements.", "For FiD, we try two methods:", "(i) Conventional FiD, whereby we use the returned search results from using our trained search query generator in order to build the relevant document contexts for the FiD training set;", "(ii) FiD-Gold: as we will have available human-written search queries for the training set, and their corresponding search results, we can use these gold results to build training document contexts instead.", "Although these might not look like the queries (and hence results) predicted at test time, they are more likely to contain the knowledge used in generating the training set responses, and thus a clearer grounding may be apparent for the model to learn correspondences.",
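A minimal sketch of this two-module pipeline is shown below, using a generic BART seq2seq model from HuggingFace Transformers as a stand-in for both modules; the checkpoint name, the stand-in documents, and the concatenation shortcut (true FiD encodes each document separately and fuses the encodings in the decoder) are all assumptions rather than the released implementation.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("facebook/bart-base")         # stand-in model
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

def generate(text: str) -> str:
    ids = tok(text, return_tensors="pt", truncation=True).input_ids
    out = model.generate(ids, num_beams=3, min_length=20,
                         no_repeat_ngram_size=3)   # decoding settings as in Sec. 5
    return tok.decode(out[0], skip_special_tokens=True)

context = "I love tennis. Who won the last major tournament?"

# Module 1: dialogue context -> search query (a fine-tuned generator in practice).
query = generate(context)

# Black-box search engine: a stand-in for the Bing Search API + page lookup.
docs = ["stand-in document one ...", "stand-in document two ..."]

# Module 2 (FiD-style): condition the response on the documents plus the context.
# True FiD encodes each (document, context) pair separately; plain concatenation
# is used here only to keep the sketch short.
response = generate(" | ".join(docs) + " | " + context)
print(query, "//", response)
```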
"Search Engine: The search engine is a black box in this system, and could potentially be swapped out for any method.", "In our numerical experiments we use the Bing Search API to generate a list of URLs for each query; then, we use these URLs as keys to find their page content from a lookup table we built for our Common Crawl snapshot, in order to populate a set of pages for that query.", "This makes our comparison more direct with our FAISS-based methods.", "In addition, we can also consider whether the URL is from English Wikipedia; in that case we can extract the page title from the URL and look up its corresponding page inside the dump of Wikipedia.", "In order to both train and evaluate generative models that can use search engines in-the-loop, we design, collect and release a dataset for this purpose.", "The overall setup involves pairing crowdworkers that are instructed to have a conversation together.", "One plays the role of the wizard, who has access to a search engine during conversation, while the other, the apprentice, does not.", "The apprentice, however, has an assigned persona that describes their interests.", "The purpose of the exchange is to have an in-depth conversation about [those] assigned interests.", "This mirrors conversations we expect to be more prevalent between a human and a bot: the conversations are more likely to be centered around the human's interests than the bot's, and the bot is the one that is going to be using the search engine to ground its knowledge.", "Hence, when we train or evaluate on this task, a given model will replace the role of the wizard.", "Apprentice Persona: We show the apprentice several possible persona choices for the character that they are going to play, and let them choose one, e.g. I love tennis, Rafael Nadal is my favorite player.", "The intent here is that they can choose a topic that they are both more interested in talking about and have enough knowledge of, so that they can conduct a reasonable conversation.", "The choices we show are themselves mined from the interests provided in the existing PersonaChat dataset (Zhang et al., 2018) and the topics given in the existing Topical-Chat dataset (Gopalakrishnan et al., 2019).", "More details of the choices we give are provided in Appendix A.", "Wizard Active and Passive Openings: We randomize which speaker takes their turn first.", "If the wizard speaks first, we encourage them to start with an opening that addresses the apprentice's interests.", "For example, if they know their partner is interested in tennis, they could search for the latest tennis news, and open with an interesting point based on that knowledge.", "If the apprentice goes first, their goal is to converse with the wizard more based on their own interests, e.g.
in this same case they could talk about tennis in detail.", "Wizard Search: At each turn, the wizard can enter free-text search terms in a left-hand panel (with the main conversation panel on the right), much like in a conventional search engine.", "The top few results are shown in the left panel, below the search query (footnote 3).", "For each document only the title is shown, for space reasons, and each document is expandable.", "If the wizard finds one or more search results useful for their response, they can click on the sentences they find relevant, and then enter their conversational response in the right-hand panel.", "They are also free to try another search query if they did not find their first results appropriate, or else can enter a conversational response and choose to ignore the search results entirely.", "Full System: Each crowdworker has to pass an onboarding task to be able to be part of the main data collection task, and pass some automatic checks (average response length, use of search).", "They are asked to play a particular role (\"Create an interesting character that you want to play\"), and are given instructions to avoid toxic or biased language.", "We randomly assign any given crowdworker a fixed choice of either wizard or apprentice for all of their data collection; otherwise we found that switching roles introduced lower quality conversations, probably due to confusion between the different goals and instructions per role.", "After pairing, we collect between 5-6 turns (10-12 utterances) for each conversation.", "We ask workers to skip initial greeting messages, as these bring little extra value to the task.", "Screenshots of the crowdworker task can be seen in Figure 4 in the appendix.", "Example collected dialogues are shown in Figure 5 and Figure 6.", "(Footnote 3: We run two searches, one with the given query, and one with the query terms plus the word news (with the news results shown as the top two knowledge candidates), in order to encourage topical discussion.)", "The overall collected data consists of 9,633 dialogues in total, with 82,952 utterances in the training set, and validation and test sets of 5,781 and 4,932 utterances, respectively.", "Overall statistics can be found in Table 1.", "We find that 84.81% of all turns by the wizard involve search, so a large amount of knowledge grounding based on internet results is taking place.", "Within those turns, the wizard is allowed to repeat the search with different search terms if they did not find what they were looking for.", "When the wizard searches, we find that 1.19 search queries are performed on average, so while mostly a single search is employed, a number of further knowledge searches are attempted.", "Wizards use the search results (indicated by selecting relevant sentences) 80.3% of the time.", "We show in Figure 2 a breakdown of the most common domains used during search on the validation set.", "We see that the domains are rather diverse, coming from all kinds of topics, and in particular that the Wikipedia domain is actually fairly small (8.56% of queries), which is interesting because most other studies have used Wikipedia only as their knowledge resource (Chen et al., 2017; Lewis et al., 2020b; Dinan et al., 2019; Shuster et al., 2021).", "Our training set spans 26,192 unique selected URLs for grounding knowledge, from 10,895 domains, indicating that a wide variety of topics and knowledge is used across all conversations.", "We evaluate models on our new Wizard of the Internet (WizInt) task, using its dedicated training set.", "We also
consider the existing Wizard of Wikipedia (WoW) training resource, either for building baselines or for multi-tasking.", "We consider fine-tuning various existing pre-trained models: T5 (Raffel et al., 2020), BART-Large (Lewis et al., 2020a) and BlenderBot variants (Roller et al., 2021).", "For all retrieval-augmented methods we use N = 5 returned documents.", "For all models, when generating responses we fix the decoding parameters to beam search (beam size 3) with a minimum sequence length of 20 and beam blocking of 3-grams within the response (but not the context), similar to the choices in (Roller et al., 2021).", "Following Shuster et al. (2021), we report perplexity, F1 and Knowledge F1 (KF1) metrics.", "F1 measures the overlap between the model's response and the human response from the dataset.", "KF1 instead measures the overlap between the model's response and the knowledge on which the human grounded during dataset collection (i.e., the sentences they clicked as relevant from the web search documents retrieved; see Section 4).", "We note that KF1 and F1 can be traded off: for example, a model that could copy the knowledge directly would have a high KF1 but a low F1; it would be knowledgeable, but not conversational.", "Nevertheless, we expect an ideal model to achieve relatively high values for each.", "Finally, we also perform a human evaluation, the details of which will be discussed further in Subsection 5.3.", "Pre-training models: We evaluate the performance of using different standard pre-training models when training on our new task.", "Results are given in Table 3.", "Comparing BlenderBot (BB) 400M and 2.7B parameter models, which use the same dictionary, we see that larger models do improve all metrics (perplexity, F1 and KF1) in the no knowledge case (where the model is given only the conversational history, with no web documents).", "When given gold knowledge (the selected knowledge sentences and the conversational history are given as input to the model), this trend is slightly less clear, but still present.", "BART-Large and T5-Large, which are trained on more knowledge-focused corpora rather than the conversational corpora of BB, give improved performance for the same model size in terms of F1 and KF1 metrics.", "We choose to use BART-Large as our base for all of our following experiments.",
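The F1 and KF1 metrics above are both word-overlap F1 scores, differing only in the reference they are computed against; the sketch below uses naive whitespace tokenization, an assumption, since the exact normalization of the evaluation code is not specified here.

```python
from collections import Counter

def f1_overlap(pred: str, ref: str) -> float:
    """Unigram-overlap F1 between a prediction and a reference string."""
    p, r = pred.lower().split(), ref.lower().split()
    common = sum((Counter(p) & Counter(r)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(r)
    return 2 * precision * recall / (precision + recall)

response = "rafael nadal won the french open in 2020"
human_reply = "nadal won the 2020 french open"
knowledge = "rafael nadal defeated novak djokovic in the 2020 french open final"
print(round(f1_overlap(response, human_reply), 3))  # F1: vs. the human response
print(round(f1_overlap(response, knowledge), 3))    # KF1: vs. the gold knowledge
```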
"No knowledge vs. gold knowledge baselines: We compare Transformers that are given only the dialogue context (no knowledge) to Transformers that are given both the dialogue context and the gold knowledge from the task, which human annotators (wizards) labeled as being used to craft responses.", "They can be compared in Table 3 across different models.", "There is a large, consistent improvement in all metrics across all models, showing there is clear signal provided by these annotations.", "While in practice gold annotations will not be available, this can be seen as both an upper bound on possible performance, as well as confirmation that knowledge retrieval has the potential to bring significant gains over non-retrieval-augmented (no knowledge) models.", "Wizard of Wikipedia baselines: We train models on the Wizard of Wikipedia (WoW) dataset as baselines, to compare the difference in coverage between the WoW task and our new WizInt task, in both the no knowledge and gold knowledge settings.", "Results are given in Table 4, evaluating on both the WoW and WizInt validation sets.", "We observe some overlap between the tasks, as expected, but also observe some differences.", "Perplexity improves from 20.4 to 17.4, with a corresponding boost in F1 from 15.8 to 17.6, when training with WizInt and evaluating on the WizInt task in the no knowledge setting, compared to training with WoW.", "Similarly, the WoW task provides better training data for its own task.", "We draw similar conclusions in the gold knowledge case as well.", "KF1, on the other hand, appears to be less influenced by the dataset in the no knowledge case; in the gold knowledge case the WoW model has a higher KF1, perhaps because the model has learnt to copy effectively, but has a poor F1, presumably because it is not generating as appropriate responses due to this copying.", "Multi-tasking with Wizard of Wikipedia: We can also multi-task the WoW and WizInt tasks together, perhaps bringing improvements, as we have shown the tasks have some similarity.", "Results are also given in Table 4.", "We observe a small gain in perplexity on both the no knowledge and gold knowledge WizInt tasks, and improvements in F1, e.g.
from 17.6 to 18.0 on the no knowledge task, and from 25.4 to 26.3 on the gold knowledge task.", "In the majority of our subsequent experiments, for the sake of simplicity, we do not perform such multi-tasking, but we expect similar gains could be achieved if we were to apply this elsewhere.", "DPR+FAISS-based models: We trained DPR+FAISS-based models using either the WoW or WizInt training datasets, and either Wikipedia or Common Crawl (CC) as the database.", "Results of the most salient methods on the test set are given in Table 2, with full results on the validation set in Table 9.", "Comparing to WoW-trained Transformers with no augmentation (no knowledge), we find the WoW-trained DPR+FAISS-augmented methods using FiD give unclear improvements: there is no improvement in F1 using Wikipedia as a database, and a small improvement in F1 (from 14.7 to 15.3) when using CC, as measured on the test set.", "Moreover, perplexity in both cases increases (e.g., from 22.3 to 22.8).", "However, FiD-RAG performs better, improving F1 from 14.7 to 15.5 while maintaining the same perplexity.", "Nevertheless, these WoW-trained baselines fail to match even the non-augmented no knowledge Transformer trained on WizInt (Table 2, row 2), which has a perplexity of 18.7 and F1 of 16.9.", "Training DPR+FAISS on WizInt, we also see clear improvements over WoW-trained models, and reach similar conclusions that FiD-RAG is superior to RAG, with the best approach achieving a perplexity of 17.1 and F1 of 18.0 on the validation set; see Table 9 in the appendix.", "The impact on the test set, however, is still fairly minimal; see Table 2.", "Search Query+FAISS-based models: We find that using a search query generator and then using FAISS to retrieve from the database of web documents performs slightly worse than the DPR+FAISS-based models.", "Perplexity is actually no better than the no knowledge model (19.0 for Search Query+FAISS compared to 18.7 for no knowledge).", "Search Engine-based models: The search engine-based method provides the best performance in terms of perplexity of all the models tested, with a validation perplexity of 16.4 when trained on WizInt and 16.1 when trained on both WoW and WizInt for the CC+Wikipedia case; see Table 9.", "While the F1 and KF1 metrics are hardly impacted, we do see a similar reduction in perplexity on the test set; see Table 2.", "We find this encouraging, as search engines are already a well developed tool we can simply interface with our model, rather than trying to reinvent storage of all the documents on the internet, as we have attempted with our other FAISS-based experiments.", "We thus select this method as our main candidate for human evaluations.", "We perform a human evaluation using crowdworkers.", "The conversations begin with a random apprentice persona from the WizInt validation set being selected and shown, and the crowdworker is asked to play that role.", "We ask the crowdworkers to have a natural conversation, where they will also evaluate their partner's responses for conversational attributes, in particular knowledgeability, factual (in)correctness, engagingness and consistency.", "Screenshots can be found in Figure 7 (in the appendix), which details further the definitions of those attributes.", "(Table 5: Human Evaluation Results, per model: Consistent, Engaging, Knowledgeable, Factually Incorrect, Final Rating, # Annotated Responses. WizInt Transformer (No Knowledge): 66.5%, 69.9%, 38.6%, 7.1%, 3.64, 764. Search engine FiD (Bing Search): 76.1%, 81.4%, 46.5%, 5.3%, 3.73, 757.)", "On each turn of the
conversation the crowdworker is asked to check all attribute boxes that apply to the last turn.", "Each conversation consists of 15 messages (7 from the human, 8 from the bot).", "At the end of the conversation, an additional question collects an overall engagingness score (out of 5) for their speaking partner.", "We compared the WizInt BART-Large Transformer (no knowledge) model, which is a standard Transformer with no retrieval augmentation, to the WizInt Search engine FiD model, with live Bing search (without using a CC subset).", "The results are given in Table 5.", "For each model, around 750 responses were annotated over nearly 100 model conversations.", "The search engine-based method outperformed the no-knowledge baseline across the board.", "Not only was the search engine-based model judged to be knowledgeable more often (46.5% vs. 38.6% of the time) and factually incorrect less often (5.3% vs. 7.1%), but it was also measured to be more consistent (76.1% vs. 66.5%) and more engaging (81.4% vs. 69.9% on an utterance level, and 3.73 vs. 3.64 on a conversation level).", "Success cases: In the best case, our augmented models are able to construct appropriate internet search queries, read the corresponding web pages and provide information relevant to the conversation.", "We show cherry-picked conversations between a human (paper author) and the WizInt Search engine FiD model (using live Bing search) in Figure 1, and in Figure 8, Figure 9, Figure 10 and Figure 11 in the appendix.", "In each case, we can compare to a WizInt BART-Large Transformer (no knowledge) model using the same conversational messages on the human side.", "We find the search engine model is capable of diverse conversations spanning drink ingredients, TV shows, restaurants and machine learning research.", "In the TV show and restaurant cases the model is able to surface recommendations and provide details about them, for example the correct address and phone number of a pizza store in Princeton, or the plots of recent TV shows such as The Underground Railroad.", "Standard BART-Large fine-tuned models, on the other hand, typically either hallucinate knowledge or else fall back to generic statements.", "Failure cases: Analysis also exposes various kinds of error.", "Lemon-picked conversations between humans (paper authors) and the WizInt Search engine FiD model (using live Bing search) are shown in Figure 12 in the appendix.", "First, there are generation mistakes despite finding the correct knowledge, as in the example where the model incorrectly names Bruno Mars as working on the Cardi B song Bodak Yellow.", "Bruno Mars did collaborate with Cardi B on other songs, and the model confuses and mixes various pieces of evidence within the given knowledge sources.", "Second, there are search query generation mistakes given the context, for example missing out key search terms.", "Third, there is selection of the wrong knowledge given earlier context, as in the case where the model associates the wrong authors with a paper.", "A fourth issue is that even if the correct knowledge is available, the model may err on the side of not using it and select a more generic response instead, as often happens in the non-augmented model.", "See for example Figure 8 and Figure 11 in the appendix.", "This work has studied the problem of siloed knowledge in large language models, whereby they cannot access the knowledge of the world other than through their fixed training set.", "Developing methods that instead can access the internet as an
augmentation to the generation process, we have shown that such models can display more knowledge and generate less factually incorrect information during dialogue with humans.", "Future work should aim to develop improved architectures that can be trained and evaluated on our new task.", "Going forward, in the long term we require machine learning methods that interact with the world, rather than only having a simple text context, and access to the internet is a natural step in that direction.", "Large language models bring an impact on the environment in terms of the resources required to train and deploy them, and concerns about toxic language, bias and other issues during language generation (Bender et al., 2021).", "For dialogue in particular, see Xu et al. (2020) for a review of the literature and an evaluation of recent methods that try to mitigate these safety issues.", "The initial pre-training dataset used in this work contains varied and potentially offensive text content, as it was originally procured from the Internet by third parties.", "However, our fine-tuning task is built with crowdworkers given specific instructions not to use toxic language, a procedure which is shown to yield safer language models (Roller et al., 2021).", "This work, different from other language generation models, specifically augments the generations with knowledge from the internet.", "On the one hand, we showed that this results in less model hallucination, and more factually correct generations.", "Further, the fact that the model generates human-readable search queries and that one can verify which document(s) the used knowledge comes from means our model also has increased interpretability and potentially debuggability compared to standard language models.", "On the other hand, this also brings potential new concerns if those websites contain toxic, biased or factually incorrect information themselves.", "While issues of toxicity can perhaps be treated similarly to the pre-training data case (e.g. safety classifiers), fact checking is a separate area with ongoing work, e.g. Hassan et al. (2017); Fan et al. (2020).", "We further remark, however, that the use of internet search engines to augment models, instead of FAISS-based retrieval (Lewis et al., 2020b), means that machine learning models can take advantage of decades of work in search engine safety issue mitigations, rather than having to completely rebuild those tools again." ]
[ "method", "method", "abstain", "objective", "method", "result", "abstain", "abstain", "other", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "abstain", "abstain", "method", "result", "objective", "result", "result", "abstain", "abstain", "abstain" ]
[ "Dense video event captioning aims to generate a sequence of descriptive captions for each event in a long untrimmed video.", "Video-level context provides important information and facilitates the model to generate consistent and less redundant captions across events.", "In this paper, we introduce a novel Hierarchical Context-aware Network for dense video event captioning (HCN) to capture context from various aspects.", "In detail, the model leverages local and global context with different mechanisms to jointly learn to generate coherent captions.", "The local context module performs full interaction between neighbor frames and the global context module selectively attends to previous or future events.", "According to our extensive experiments on both the YouCook2 and ActivityNet Captions datasets, the video-level HCN model outperforms the event-level context-agnostic model by a large margin.", "The code is available at https://github.com/KirkGuo/HCN .", "With the increasing amount of video data uploaded online every day, the acquisition of knowledge from videos, especially for how-to tasks, is indispensable for people's daily life and work.", "However, watching a whole long video is time-consuming.", "Existing technologies focus on two main research directions to compact video information: video summarization, which trims long videos into short ones, and (dense) video captioning, which generates a textual description of the key events in the video.", "Typically for long untrimmed videos, dense video event captioning generates fine-grained captions for all events to help users quickly skim the video content, and enables various applications, e.g. video chaptering and search inside a video.", "Dense video event captioning (Krishna et al., 2017) and multi-modal video event captioning (Iashin and Rahtu, 2020b) aim to generate a sequence of captions for all events from uni-modal (video) or multi-modal (video + speech) inputs.", "Figure 1 presents a showcase, which demonstrates the challenges of this task from both the vision and the speech text perspective.", "For vision understanding, fine-grained objects are hard to recognize due to ambiguity, occlusion, or state change.", "In this case, the object dough is occluded in event 1 and is hard to recognize from the video.", "However, it can be recognized from the previous neighbor video frame, where it appears clearly.", "From the speech text perspective, although the speech text offers semantic concepts (Shi et al., 2019; Iashin and Rahtu, 2020b), it brings another challenge of co-reference and ellipsis due to the informal nature of oral speech.", "In the case of Figure 1, the entity dough in event 3 is elided in the text.", "Nonetheless, the model is capable of generating the consistent object dough in event 3 with the contextual information from other events, such as event 1 in this example.", "To sum up, both local neighbor-clip and global inter-event contexts are important for event-level captioning to generate coherent and less redundant descriptions across events.", "Previous endeavors widely used recurrent neural networks (Krishna et al., 2017), which struggle to capture long dependencies, while recently attention-based models (Zhou et al., 2018b; Sun et al., 2019b,a) have become the new paradigm for dense video event captioning and are effective for multi-modal video captioning (Shi et al., 2019; Iashin and Rahtu, 2020b).", "However, existing attention-based models generate the captions relying only on the video clip inside each event, and ignore video-level local and global context.",
"Motivated by this, we mainly investigate how to effectively and jointly leverage both local and global context for video captioning.", "In this paper, we propose a novel hierarchical context-aware model for dense video event captioning (HCN) to capture both the local and global context simultaneously.", "In detail, we first exploit a local context encoder to embed the visual and linguistic features of the source and surrounding clips, then design a global context encoder to capture relevant features from other events.", "Specifically, we apply different mechanisms: a flat attention module between the source and local context, and a cross attention module for the source to select the global context.", "Since the neighbor frames (temporally close) are usually alike, e.g. showing the same objects, the flat attention performs full interaction to generate accurate and coherent captions.", "Meanwhile, the cross attention on global context can selectively attend to the relevant events and capture prior temporal dependency between events to generate coherent and less duplicated captions.", "The experimental results demonstrate the effectiveness of our model.", "Our contributions can be summarized as: 1) We propose a hierarchical context-aware model for dense video event captioning to capture video-level context.", "2) We carefully design different mechanisms to capture both local and global context: a flat attention model with full interaction between neighbor frames, and a cross attention model to selectively capture inter-event features.", "3) Experimental results on both the YouCook2 and ActivityNet Captions datasets demonstrate the effectiveness of our models, which outperform the context-agnostic model to a large extent.", "The dense video event captioning task is to produce a sequence of events and generate a descriptive sentence for each event given a long untrimmed video.", "In this work, we focus only on generating the captions and directly apply the ground-truth event proposals, similar to (Hessel et al., 2019; Iashin and Rahtu, 2020b).", "The paradigm for video captioning is an encoder-decoder network, which inputs video features and outputs descriptions for each event.", "In this section, we describe the task formulation, including the context-agnostic model as well as the context-aware model in one framework.", "Problem Definition: We define a sequence of event segment proposals as $e = \{e_i \mid i \in [1, m]\}$, representing the video with $m$ proposals, where $e_i = \{v_i, t_i\}$ is the feature of the $i$-th event, $v_i$ is the video feature and $t_i$ is the transcript text feature (if available) of the $i$-th event.", "We take all the video frames and transcript tokens of the event between the start and end time.", "The number of video frames is likely to differ from the number of text tokens, depending on the actual video clip.", "Given all events $e$, the goal is to predict the target descriptive sentences $Y = \{y_i \mid i \in [1, m]\}$.", "Each $y_i$ is a sequence of descriptive words corresponding to the event $e_i$.", "The probability of the expected sentences $Y$ under the context-agnostic model factorizes as $P(Y \mid e) = \prod_{i=1}^{m} P(y_i \mid e_i)$ (1), which predicts $y_i$ conditioned only on the event $e_i$.", "The context-aware model considers local context $v_{\neq i}$ (the neighboring video clip) and global context $e_{\neq i}$ (the clips of past and future events), respectively.", "The context-aware probability can be approximated as $P(Y \mid e) = \prod_{i=1}^{m} P(y_i \mid e_i, v_{\neq i}, e_{\neq i})$ (2).",
"3 Methodology. 3.1 Context-agnostic model. The context-agnostic model of captioning generates a descriptive sentence given the short trimmed video clip of each event.", "The paradigm for multi-modal video captioning is an encoder-decoder network as in (Hessel et al., 2019).", "First, we pre-process each event and extract features separately.", "For the event $e_i$, we extract both the video feature $v_i$ and the transcript feature $t_i$ if available.", "Next, both the video features and transcript features are concatenated together as the input to the transformer encoder.", "This encoder implements self-attention within each modality and cross attention between both modalities in one unified transformer.", "Finally, a transformer decoder generates the text tokens of the description from the enhanced features.", "We propose a context-aware video event captioning model with a hierarchical context-aware network (HCN); the architecture is a general framework for either uni-modal or multi-modal inputs, as explained in Figure 2.", "For visual features, we adopt a pre-trained 3D feature extractor to extract $k$ features $v_i = \{v_j \mid j \in [1, k]\}$ for the $i$-th event.", "We further add a projection layer to map the raw features to the input dimension through an embedding layer $f(v_i) = \{e \mid e = \mathrm{Embedding}(v_i)\}$.", "For transcript text, we tokenize the text into words and represent each word with a 1-hot representation.", "The tokens within each event are represented as $t_i = \{t_j \mid j \in [1, l]\}$, where $l$ is the number of transcript tokens in the speech of the event.", "Moreover, we embed each token into a continuous representation by an embedding layer $f(t_i) = \{e \mid e = \mathrm{Embedding}(t_i)\}$.", "Similar to the work in (Hessel et al., 2019), we build the vocabulary using all tokens in the captioning sentences.", "The input for each event comprises three types of embedding: 1) the visual feature $f(v_i)$ (and the speech text feature $f(t_i)$ if available); 2) the position embeddings $p(v_i)$ and $p(t_i)$ as introduced in the transformer model (Vaswani et al., 2017); 3) the type embeddings $s(v_i)$ and $s(t_i)$ representing whether the current embedding is from context or source: $E(v_i) = f(v_i) + p(v_i) + s(v_i)$ and $E(t_i) = f(t_i) + p(t_i) + s(t_i)$ (4), where $+$ is the add operator, and $E(v_i)$ and $E(t_i)$ are the embeddings of video and text respectively.", "For multi-modal input, both visual and text features are concatenated for further processing.",
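To make the embedding construction of Eq. (4) concrete, the following is a minimal PyTorch sketch of summing the feature, position, and type embeddings; the module layout, dimensions, and names are our own illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class EventInputEmbedding(nn.Module):
    """Sum of feature, position, and type embeddings, as in Eq. (4).
    Dimensions and vocabulary sizes are illustrative assumptions."""
    def __init__(self, feat_dim=1024, d_model=512, vocab_size=10000,
                 max_len=512, num_types=2):
        super().__init__()
        self.video_proj = nn.Linear(feat_dim, d_model)      # f(v_i): project raw 3D features
        self.token_emb = nn.Embedding(vocab_size, d_model)  # f(t_i): embed 1-hot tokens
        self.pos_emb = nn.Embedding(max_len, d_model)       # p(.): learned positions
        self.type_emb = nn.Embedding(num_types, d_model)    # s(.): 0 = source, 1 = context

    def embed_video(self, v, type_id):
        # v: (batch, k, feat_dim) raw clip features of one event
        pos = torch.arange(v.size(1), device=v.device)
        return (self.video_proj(v) + self.pos_emb(pos)
                + self.type_emb(torch.full_like(pos, type_id)))

    def embed_text(self, t, type_id):
        # t: (batch, l) token ids of the event transcript
        pos = torch.arange(t.size(1), device=t.device)
        return (self.token_emb(t) + self.pos_emb(pos)
                + self.type_emb(torch.full_like(pos, type_id)))
```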
"We extract two types of contextual information: event-agnostic local context and event-aware global context.", "Event-agnostic context takes frames temporally close to the video event.", "Video is a continuous signal, and neighboring video frames are likely to be semantically related to each other, e.g. showing the same objects.", "This is especially helpful for recognizing objects that change state or are occluded in the current event.", "Moreover, objects are likely to be explicitly mentioned in the contextual transcript, which can be used to deal with object co-reference and ellipsis, typically in instructional videos.", "Event-aware context utilizes the video frames of both previous and future events, which attempts to model the relation between events.", "The global context provides overall features and prior knowledge of temporal dependency.", "Specifically, in a particular domain like recipes, the event mix the flour and water is often followed by knead the dough.", "This prior knowledge of event dependency learned from the global context is effective for understanding long videos.", "The overall pipeline includes 4 modules: 1) the hierarchical model starts with a local context module (LCM) to encode the local context features, i.e., the neighbor video clip temporally close to the event.", "Specifically, the LCM adopts a flat attention model similar to (Ma et al., 2020) to enhance the source video feature with local context.", "Besides, given multi-modal inputs, the LCM is a general model that fuses both the visual features $f(v_i)$ and the text features $f(t_i)$ inside the event with one unified transformer as in (Hessel et al., 2019); 2) we further employ a global context module (GCM) to let the source event interact with other event features flexibly.", "The GCM is a cross attention model, which contains one source encoder (SEncoder) and one cross encoder (CEncoder).", "The SEncoder is a self-attention module for encoding event features, and the CEncoder is a cross attention module for interaction between source and context events; 3) the hierarchical context-aware model further combines the neighbor-clip (around the event) and inter-event (other events) context from both previous and future directions using a gating mechanism; 4) finally, an auto-regressive decoder is used to generate the sentence with a masked transformer model.", "Local Context Module: We first introduce the local context module to encode the multi-modal source video features together with the event-agnostic context features (surrounding frames).", "The flat transformer in (Ma et al., 2020) is effective for encoding contextual information with full interaction between source and context features.", "In addition, when the speech text is available for multi-modal video captioning, this flat encoder can also perform the fusion of the visual and text modalities, similar to (Hessel et al., 2019).", "To sum up, we employ one unified flat encoder to accomplish two actions, source-context interaction and multi-modal fusion, as explained in Figure 3a: $E(e_i) = [E(v_i); E(t_i)]$ (5), $H(m_i) = \mathrm{FFN}(\mathrm{MultiHead}([E(v_{i \pm k_l}); E(e_i)]))$ (6), $H(e_i^l) = H(m_i)[i_1 : i_n]$ (7), where $[;]$ is the concatenation operation, FFN is the feed-forward network and MultiHead is the multi-head attention network of the transformer (Vaswani et al., 2017).", "We apply residual connections for all components.", "We only perform equation (5) for multi-modal video event captioning, where $E(e_i)$ is the concatenation of the visual embedding and the text embedding for the event $i$.", "We then feed the embedding $E(e_i)$ together with the embedding of the neighbor frames $E(v_{i \pm k_l})$ into the transformer blocks and get the context-aware encoding $H(m_i)$, where $k_l$ is the local context length.", "Finally, we only select the output of the source encoding instead of using all embeddings for further processing.", "Intuitively, the source is more important than the context.", "In equation (7), $H(e_i^l)$ is the hidden state of the source input, which requires the model to focus on the current source event; $i_1$ is the start of the event $i$ and $i_n$ is its end.", "The LCM outputs the event representation enhanced by local context and multi-modal inputs.",
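A minimal sketch of the LCM logic of Eqs. (5)-(7), assuming pre-computed embeddings: the neighbor-frame embeddings and the event embedding are concatenated into one flat sequence, encoded with a standard transformer, and only the source positions are kept. Layer sizes and the encoder configuration are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LocalContextModule(nn.Module):
    def __init__(self, d_model=512, nhead=8, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, ctx_emb, event_emb):
        # ctx_emb:   (batch, n_ctx, d)  neighbor-frame embeddings E(v_{i±k_l})
        # event_emb: (batch, n_src, d)  event embedding E(e_i) = [E(v_i); E(t_i)]
        flat = torch.cat([ctx_emb, event_emb], dim=1)  # Eq. (6): full interaction
        h = self.encoder(flat)
        # Eq. (7): keep only the hidden states of the source positions
        return h[:, ctx_emb.size(1):, :]
```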
"Global Context Module: We then illustrate the global context module, which encodes the output of the LCM together with the event-aware context (previous or future events).", "The GCM is a cross attention model, which selectively attends to previous or future events to enhance the source video representation.", "Different from the LCM, which applies a unified transformer to encode a short context, the GCM exploits a cross attention model similar to (Maruf et al., 2019) to encode the long global context efficiently.", "A unified transformer model can hardly deal with long input sequences due to its complexity.", "The cross attention model lets the source interact with each context event and can easily be scaled out for long videos.", "Figure 3b illustrates the GCM model structure.", "We apply the GCM to each contextual event and then combine all the encodings through a context gating mechanism similar to (Maruf et al., 2019).", "First, the self-attention module encodes each source or context event separately.", "Then, the cross attention module lets the source attend to the context: $H(e_i) = \mathrm{FFN}(\mathrm{MultiHead}([H(e_i^l)]))$ (8), $H(e_j) = \mathrm{FFN}(\mathrm{MultiHead}([E(e_j)]))$ (9), $H(e_j^c) = \mathrm{FFN}(\mathrm{MultiHead}([H(e_i), H(e_j)]))$ (10), where $H(e_i)$ is the encoding of the source event $i$, $H(e_j)$ is the encoding of the $j$-th context event, and $H(e_j^c)$ is the source attended to the $j$-th event.", "Next, we adopt a gated recurrent unit (GRU) (Cho et al., 2014) to selectively update the source feature with the context-enhanced feature, which is shown to be effective in our ablation study: $z_j = \sigma(w_z H(e_i) + u_z H(e_j^c) + b_z)$ (11), $r_j = \sigma(w_r H(e_i) + u_r H(e_j^c) + b_r)$ (12), $\tilde{h}_j = \phi(w_h H(e_i) + u_h (r_j \odot H(e_j^c)) + b_h)$ (13), $h_j = (1 - z_j) \odot H(e_j^c) + z_j \odot \tilde{h}_j$ (14), where $\sigma$ is the logistic sigmoid, $\phi$ is the tanh activation function, $w$ and $u$ are learnable weight matrices, and $h_j$ is the encoded representation after the source event $i$ attends to the context event $j$.",
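The GRU-style gate of Eqs. (11)-(14) can be sketched as follows; this is a schematic rendering under our own naming, not the released implementation.

```python
import torch
import torch.nn as nn

class GRUGate(nn.Module):
    """GRU-style gate of Eqs. (11)-(14): selectively update the source
    encoding H(e_i) with the context-enhanced encoding H(e_j^c)."""
    def __init__(self, d=512):
        super().__init__()
        self.wz, self.uz = nn.Linear(d, d), nn.Linear(d, d)
        self.wr, self.ur = nn.Linear(d, d), nn.Linear(d, d)
        self.wh, self.uh = nn.Linear(d, d), nn.Linear(d, d)

    def forward(self, h_src, h_ctx):
        z = torch.sigmoid(self.wz(h_src) + self.uz(h_ctx))         # Eq. (11)
        r = torch.sigmoid(self.wr(h_src) + self.ur(h_ctx))         # Eq. (12)
        h_tilde = torch.tanh(self.wh(h_src) + self.uh(r * h_ctx))  # Eq. (13)
        return (1 - z) * h_ctx + z * h_tilde                       # Eq. (14)
```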
Context Gating We adopt the gate in (Tu et al., 2018) to regulate the source H ( e li ) and context information h j .", "Then we get the context-enhanced source embedding for further decoding.", "= ( w j h p + w k h f ) (15) h c = h p + (1 ) h f (16) = ( w c h c + w s H ( e li )) (17) H = h c + (1 ) H ( e li ) (18) where h c is the integration of all previous context h p and future context h f .", "The w j , w k , w c and w s are learnable parameter matrices, and H is the final representation.", "The decoder is an auto-regressive transformer model to generate tokens one by one.", "We adopt the cross-entropy loss to minimize the negative log-likelihood over ground-truth words and apply the label smoothing strategy.", "We run our experiments on both Youcook2 dataset (Zhou et al., 2018a) and ActivityNet Caption dataset (Krishna et al., 2017).", "YouCook2 is the task-oriented instructional video dataset for video procedural captioning on the recipe domain.", "We follow the data partition in VideoBERT (Sun et al., 2019b) which uses 457 videos in the YouCook2 validation set as the testing set and the rest for development.", "In all, we use 1,278 videos for training and validation.", "We extract the visual feature by S3D model pre-trained on Howto100M(Miech et al., 2019) dataset through MIL-NCE(Miech et al., 2020) model.", "This visual representation is a better representation of Howto videos.", "The ASR transcript is automatically extracted from the off-the-shelf recognition tool 1 .", "Different from the Youcook2 dataset, Activitynet captions are open-domain videos with overlapping proposals, while Youcook2 has non-overlapping event proposals.", "We apply the same data partition in (Iashin and Rahtu, 2020b) with the ground truth labels.", "We directly download the copy of the dataset in (Iashin and Rahtu, 2020b) which contains 9,167 (out of 10,009) training and 4,483 (out of 4,917) validation videos.", "The dataset only contains partially available videos (91%) due to no longer available Youtube links.", "To make a fair comparison, we only list the experimental results on the same dataset.", "This open-source code and data portal contains the speech content extracted from the closed captions (CC) from the YouTube ASR system.", "We employ the metrics BLEU3, BLEU4 (Pa-pineni et al., 2002), METEOR (Banerjee and Lavie, 2005), ROUGE-L(Lin and Och, 2004) and CIDEr(Vedantam et al., 2015) to evaluate the performance.", "We follow the work in (Iashin and Rahtu, 2020a) on ActivityNet caption dataset which reported BLEU3, BLEU4 and METEOR.", "We directly apply the open-source tool 2 to evaluate our results as in (Krishna et al., 2017).", "We develop our model based on the open-source code 3 of MDVC(Iashin and Rahtu, 2020b), and will release our code later.", "The embedding size of video, hidden size of the multi-head, and feed-forward layer are 1024, 512, and 128 respectively.", "The number of the head is 8 and the dropout rate is 0.4.", "We set the local context length k l as 10, that is, the 10 previous and 10 future frames as a local event-agnostic context, and one previous event and one next event as a global event-aware context for a trade-off between performance and efficiency.", "We adopt the Adam optimizer (Kingma and Ba, 2015) with learning rate of 1e-4, and set two momentum parameters 1 = 0.9 and 2 = 0.98.", "For label smoothing, and the smoothing rate is 0.4.", "We set the batch size to 128.", "For model complexity, the HCN model introduces only 3% more parameters to the base model.", "All 
"For visual feature extraction, we dropped the linear classifier of the S3D backbone and applied 3D average pooling to obtain a 1024-dimension feature vector.", "We got 1 feature per second and set $k$ to 80.", "We demonstrate the results of our context-aware model on the YouCook2 dataset in Table 1.", "There are several existing baseline models: (1) Bi-LSTM with Temporal Attention (Bi-LSTM + TempoAttn) (Shou et al., 2016), which adopts a Bi-LSTM language encoder; (2) End-to-End Masked Transformer (EMT) (Zhou et al., 2018b), a transformer-based model; (3, 4) VideoBERT (Sun et al., 2019b) and Contrastive Bidirectional Transformer (CBT) (Sun et al., 2019a), the pre-training based methods; (5) AT+Video (Hessel et al., 2019), the multi-modal transformer method.", "Apart from the work of (Shou et al., 2016), which uses a recurrent network, the other baseline methods adopt the transformer model.", "Our context-aware model achieves the best results for uni-modal video event captioning and outperforms the context-agnostic base model by a large margin.", "Furthermore, our HCN model with multi-modal inputs achieves results comparable to the state of the art.", "We list experimental results on the partial dataset of ActivityNet Captions as in (Iashin and Rahtu, 2020b), and omit others evaluated on the full dataset, such as (Krishna et al., 2017), to make a fair comparison.", "Table 2 presents the results of the baseline methods and HCN.", "There are several baseline methods: (1) WLT (Rahman et al., 2019), a weakly supervised method with multi-modal input; (2) multi-modal dense video captioning (MDVC) (Iashin and Rahtu, 2020b), a transformer-based model with multi-modal inputs; (3) BMT (Iashin and Rahtu, 2020a), which makes better use of visual-audio information.", "Among these methods, WLT encodes the context using a recurrent network, while the others are transformer models.", "HCN outperforms the base context-agnostic methods to a large extent and achieves state-of-the-art results.", "From both sets of experimental results, we can see that our methods with context-aware information improve the base context-agnostic model by a large margin for both uni-modal and multi-modal inputs.", "We introduce the ablation study of the HCN model on the YouCook2 dataset.", "In our experiments, we use uni-modal input and illustrate the ablation results in Table 3.",
"We remove one component at a time from the full HCN model to compare the performance.", "Type embedding: we remove the type embedding, which is used to distinguish whether the input is the source or a context event.", "From the results, we can observe a performance drop when removing the type embedding.", "Past/future context: we investigate the model with only the past context or only the future context, and find that both past and future contexts are effective and complementary to each other.", "The model with the context in both directions achieves the best result.", "Cross attention gate: the GRU gate in the cross attention model is more effective than a simple combination, which shows that the GRU gate is better at modeling a sequential context.", "Local/global context: from the results in Table 3, we can see that the global context is more effective than the local context.", "The HCN model with both contexts outperforms all the other models.", "Context length: 1) for the local context, the results with 10 or 20 context frames are similar (CIDEr of 141.1 and 141.3, respectively), while the performance with 40 frames drops (CIDEr of 138).", "2) For the global context, we increased the number of previous and next events used as global context, but there was no further improvement.", "We found that irrelevant events even introduce noise or duplicated information into learning.", "We analyzed several cases and found two interesting videos, shown in Figures 4 and 5.", "We depict the visual thumbnail, the ground-truth caption, and the predicted results of our baseline and HCN methods.", "From the case in Figure 4, we can see that the baseline context-agnostic model generates the caption of each event in isolation, leading to inconsistent captions.", "The baseline model predicts the ambiguous object as chicken for event 1 due to prior bias, but outputs the object as pork for event 2.", "Our HCN model tackles this issue and tends to predict captions with a consistent object throughout the procedure.", "Besides, as shown in event 1, the entity pork can also be learned from previous frames.", "The context-aware model is effective in resolving entity ambiguity and generating coherent captions.", "The case in Figure 5 presents another challenge.", "Since the visual cues of the three events are very similar, the base context-agnostic model inevitably predicts the same caption, knead the dough, for all of them.", "The HCN model can learn the prior dependency between events, and avoids generating redundant sentences for similar events in the video.", "Therefore, the HCN model can generate the correct sentence for event 3.",
"However, although the model tries to predict different captions for event 1, it is still hard to recognize the fine-grained entity salt from the video, and all models predict this object by mistake.", "Fine-grained entity recognition from video is still a challenging problem.", "To sum up, from these cases we can see that: 1) the neighboring context can provide extra information to make accurate and coherent predictions; 2) the HCN model can capture the temporal dependency between events as prior knowledge, and generate consistent and less duplicated captions across events; 3) fine-grained object recognition from video is still a challenging problem.", "Visual coreference resolution (Kottur et al., 2018) is a possible direction of future work to tackle this problem.", "Video Captioning: The tasks mainly comprise three types of captioning: single-sentence captioning (Xu et al., 2016; Wang et al., 2018b; Zhang et al., 2018), paragraph-level captioning (Yu et al., 2016; Lei et al., 2020; Ging et al., 2020) and event-level captioning (Krishna et al., 2017; Li et al., 2018; Wang et al., 2018a; Mun et al., 2019; Chen et al., 2019; Zhou et al., 2018b).", "The difference between these tasks is whether one or multiple sentences are generated for the whole video or for each separate event of the video.", "In this paper, we focus on the more challenging dense event-level video captioning task, generating descriptions for each event.", "Previous works (Krishna et al., 2017; Li et al., 2018; Wang et al., 2018a) mainly exploited recurrent neural models such as the long short-term memory network (LSTM) (Hochreiter and Schmidhuber, 1997) or the gated recurrent unit (GRU) (Cho et al., 2014) to encode context.", "However, recurrent models struggle to model long dependencies effectively.", "Later works (Zhang et al., 2018; Sun et al., 2019b,a) introduced self-attention models (Vaswani et al., 2017) which generate the caption based solely on the clip of each event.", "Compared with these works, we are the first to implement a novel video-level hierarchical context-aware network for dense video event captioning.", "Multi-modal Video Captioning: Video naturally has multi-modal inputs, including visual, speech text, and audio.", "Previous works explore visual RGB, motion, and optical flow features, audio features (Hori et al., 2017; Wang et al., 2018b; Rahman et al., 2019), as well as speech text features (Shi et al., 2019; Hessel et al., 2019; Iashin and Rahtu, 2020b) for captioning.", "According to the work in (Shi et al., 2019; Hessel et al., 2019; Iashin and Rahtu, 2020b), although the speech text is noisy and informal, it can still provide useful semantic features and improve performance, especially for instructional videos.", "Later on, Iashin and Rahtu (2020b) proposed to embed all of the visual, audio, and speech text features for dense video event captioning.", "However, context-aware models are rarely investigated in multi-modal video event captioning.", "Therefore, we propose a novel attention model for effectively encoding the local and global context to tackle ambiguous object recognition and transcript co-reference through jointly modeling multi-modal inputs.", "Context-aware Language Generation: Our work is inspired by context-aware language generation, e.g. document-level neural machine translation (NMT) (Miculicich et al., 2018; Maruf et al., 2019; Ma et al., 2020).", "Miculicich et al. (2018) adopted a hierarchical context-aware network in a structured and dynamic manner.",
"Maruf et al. (2019) and Ma et al. (2020) further explored scalable and effective attention mechanisms.", "For the local neighbor-clip and global inter-event context, we further design a hierarchical context-aware network with a hybrid mechanism for multi-modal video captioning, dynamically leveraging various kinds of video-level information through a gating scalar.", "Dense video event captioning is a typical video understanding task, learning procedural events in a long untrimmed video.", "It is essential to model holistic video information for event understanding.", "In this paper, we propose a novel hierarchical context-aware network to encode both the local and global context of long videos.", "Our HCN model is effective in modeling context and outperforms the context-agnostic model by a large margin.", "In future work, we intend to extend our hierarchical network to further investigate how to effectively attend to long context and filter out ambiguous and irrelevant information." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "abstain", "abstain", "objective", "objective", "method", "objective", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "other", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "other", "method", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "result", "result", "result", "abstain", "other", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "method", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "objective", "abstain", "other", "other", "method", "abstain", "abstain", "objective", "result", "objective" ]
[ "This work proposes a standalone, complete Chinese discourse parser for practical applications.", "We approach Chinese discourse parsing from a variety of aspects and improve the shift-reduce parser not only by integrating a pre-trained text encoder, but also by employing novel training strategies.", "We revise the dynamic-oracle procedure for training the shift-reduce parser, and apply unsupervised data augmentation to enhance rhetorical relation recognition.", "Experimental results show that our Chinese discourse parser achieves state-of-the-art performance.", "Discourse parsing is one of the fundamental tasks in natural language processing (NLP).", "Typical types of discourse parsing include hierarchical discourse parsing and shallow discourse parsing.", "The former aims at finding the relationships among a series of neighboring elementary discourse units (EDUs) and further building up a hierarchical tree structure (Mann and Thompson, 1988).", "Instead of establishing a tree structure, the latter finds the relations, possibly across paragraphs, between all text units in a paragraph or a document.", "Based on the Rhetorical Structure Theory Discourse Treebank (RST-DT) (Carlson et al., 2001a), hierarchical discourse parsing in English has been well studied.", "This paper focuses on hierarchical discourse parsing in Chinese.", "Previous work on hierarchical Chinese discourse parsing is mostly based on the RST-style Chinese Discourse Treebank (Li et al., 2014).", "To distinguish it from the other Chinese Discourse Treebank (Zhou and Xue, 2012), which is annotated in the PDTB style for shallow discourse parsing, we use the term CDTB-14 to refer to the RST-style one and the term CDTB-12 to refer to the PDTB-style one.", "Kong and Zhou (2017) propose a pipeline framework and generate the discourse parsing tree in a bottom-up way.", "Lin et al. (2018) propose an end-to-end system based on a recursive neural network (RvNN) to construct the parsing tree with a CKY-like algorithm.", "Sun and Kong (2018) use a transition-based method with the stack-augmented parser-interpreter neural network (SPINN) (Bowman et al., 2016) as the backbone model, helping their model make better predictions with the previous information.", "In this work, we attempt to construct a complete Chinese discourse parser, which supports all four sub-tasks in hierarchical discourse parsing: EDU segmentation, tree structure construction, nuclearity labeling, and rhetorical relation recognition.", "Given a paragraph, our parser extracts all EDUs, builds the tree structure, identifies the nuclei, and recognizes the rhetorical relations of all internal nodes.", "We propose a revised dynamic-oracle procedure (Yu et al., 2018) for training the shift-reduce parser.", "Because of the limited training instances in CDTB-14, we also address the data sparsity issue by introducing unsupervised data augmentation (Xie et al., 2019).", "Experimental results show that our methodology is effective, and our model outperforms all previous models.", "The contributions of this work are threefold, as follows.", "1. We explore the task of Chinese discourse parsing with a variety of strategies, and our parser achieves state-of-the-art performance.", "Our robust dynamic-oracle procedure can be applied to other shift-reduce parsers.",
"2. Our complete Chinese discourse parser handles a raw paragraph/document directly and performs all the subtasks in hierarchical discourse parsing.", "No pre-processing procedures such as Chinese word segmentation, POS-tagging, and syntactic parsing are required.", "Our implementation is publicly available at https://github.com/jeffrey9977/Chinese-Discourse-Parser-ACL2020 .", "Figure 1 gives an overview of our parser.", "[Figure 1: Overview of our Chinese discourse parser, showing the character-level Segmenter producing B/I labels, the shift-reduce stack and queue, the sense and center classifier, and the binary-tree Converter.]", "Five stages are performed to transform a raw document into a parse tree: EDU segmentation, tree structure construction, rhetorical relation and nuclearity classification, binary tree conversion, and beam search.", "Typically, EDU segmentation is a sequence labeling task (Wang et al., 2018; Peters et al., 2018).", "We propose a model that labels each Chinese character in a raw document.", "The Begin-Inside scheme is employed: a character beginning a new EDU is labeled as B, and all other characters are labeled as I.", "Our model is based on the pre-trained text encoder BERT (Devlin et al., 2018).", "More specifically, we adopt the version BERT-base, Chinese, since this is the only pre-trained BERT dedicated to Chinese so far.", "As the BERT for Chinese is character-based, we feed each Chinese character into the BERT layer to obtain its contextual embedding.", "Then, we fine-tune the representation with an additional dense layer and measure the probability of each label for each character with a softmax layer.", "The model is further trained as a conditional random field (CRF) (Lafferty et al., 2001) for finding the globally optimal label sequence.", "We propose a shift-reduce parser for building the structure of the discourse parse tree.", "A shift-reduce parser maintains a stack and a queue for representing a state during parsing, and an action classifier is trained to predict the action (i.e., shift or reduce) for making a transition from the given state to the next state.", "In the initial state, the stack is empty, and the queue contains all the EDUs in the raw document.", "In the final state, the queue is empty, and the stack contains only one element, i.e., the discourse parse tree of the whole paragraph.", "To decide whether to shift or to reduce, we propose an action classifier that considers the information of the top two elements of the stack, $s_1$ and $s_2$ (i.e., the two most recent discourse units), and the first element of the queue, $q$ (i.e., the next EDU).", "The textual form of each of these three discourse units is fed into the BERT encoder and represented as $\mathrm{Enc}(s_1)$, $\mathrm{Enc}(s_2)$, and $\mathrm{Enc}(q)$.", "Next, we concatenate the max pooling of $\mathrm{Enc}(s_1)$, $\mathrm{Enc}(s_2)$, and $\mathrm{Enc}(q)$ and feed the resulting vector into a dense layer to predict the next action.",
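A minimal sketch of the action classifier described above, using the HuggingFace transformers API: the three discourse units are encoded with BERT, max-pooled over tokens, concatenated, and passed through a dense layer. Pooling details and layer sizes are our assumptions.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class ActionClassifier(nn.Module):
    """Predict shift/reduce from the top two stack elements and the queue front.
    A minimal sketch; pooling and head sizes are our assumptions."""
    def __init__(self, bert_name="bert-base-chinese", num_actions=2):
        super().__init__()
        self.tokenizer = BertTokenizer.from_pretrained(bert_name)
        self.bert = BertModel.from_pretrained(bert_name)
        self.head = nn.Linear(3 * self.bert.config.hidden_size, num_actions)

    def encode(self, texts):
        batch = self.tokenizer(texts, padding=True, truncation=True,
                               return_tensors="pt")
        hidden = self.bert(**batch).last_hidden_state   # (n, len, d)
        return hidden.max(dim=1).values                 # max pooling over tokens

    def forward(self, s1_text, s2_text, q_text):
        enc = self.encode([s1_text, s2_text, q_text])   # (3, d)
        return self.head(enc.reshape(1, -1))            # logits over {shift, reduce}
```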
"Since shift-reduce is a greedy algorithm, it can hardly recover from an error state.", "The shift-reduce parser is typically trained in teacher mode, where only correct states are given, and the resulting parser may perform poorly when it reaches unfamiliar states.", "For this reason, we propose a revised dynamic-oracle procedure (Yu et al., 2018) for training our discourse parser.", "One drawback of the original dynamic oracle is that some golden training examples may be neglected.", "Because CDTB-14 has relatively few action steps to build a tree, the probability of falling into a wrong state is much smaller than for RST-DT.", "In our revision, we want to guarantee that all correct states have been trained on.", "As shown in Algorithm 1, each document is gone through twice during training.", "We first follow the golden actions, and the second time choose the action predicted by the model with probability p.", "We refer to these as teacher mode and student mode, respectively.", "Note that we follow the suggestion of Yu et al. (2018) and set p to 0.7.",
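Algorithm 1 is not reproduced here, but the two-pass procedure it describes can be sketched as follows; the parser interface (initial_state, gold_action, predict, update, step) is a hypothetical abstraction of our own, and gold_action is assumed to return the oracle action for the current, possibly erroneous, state.

```python
import random

def train_document(doc, parser, p=0.7):
    """Revised dynamic oracle: one teacher-mode pass on gold actions,
    then one student-mode pass following the model with probability p."""
    # Pass 1 (teacher mode): train on every correct state.
    state = parser.initial_state(doc)
    while not state.is_final():
        gold = parser.gold_action(state)
        parser.update(state, gold)          # gradient step toward the oracle action
        state = parser.step(state, gold)    # always follow the gold action

    # Pass 2 (student mode): allow exploration of error states.
    state = parser.initial_state(doc)
    while not state.is_final():
        gold = parser.gold_action(state)    # oracle action for the current state
        parser.update(state, gold)
        action = parser.predict(state) if random.random() < p else gold
        state = parser.step(state, action)  # may enter an error state
```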
"If two discourse units are decided to be merged during the tree construction stage, a new internal node is generated and the relationship of the two discourse units is determined.", "Predicting the relation between two textual arguments is a typical classification task in NLP.", "We propose a BERT-based classifier, which predicts the relation of two arguments separated by the symbol [SEP], with additional dense layers as the output.", "In CDTB-14, the coordination relation accounts for 59.6% of the training data, while minor relations suffer from data sparseness.", "To address this issue, we introduce unsupervised data augmentation (UDA) (Xie et al., 2019) to enhance the performance.", "We adopt the discourse pairs in CDTB-12 as the material for UDA.", "Note that other unlabeled text pairs can also be used for UDA.", "We chose those from CDTB-12 simply because the format is convenient for us to use.", "The original loss is shown in Eq. (1).", "Given a span of text $x$, our main model $P_\theta(\cdot)$ predicts the rhetorical relation $y_c$.", "Eq. (2) shows the additional consistency loss that enforces the smoothness of our main model, where $\tilde{x}$ stands for the augmented unlabeled sentence pair.", "$L$ and $U$ stand for labeled data and unlabeled data, respectively.", "As shown in Eq. (3), we train both objectives at the same time, with a weight $\lambda$ to adjust the effect of UDA: $H = -\frac{1}{N} \sum_{x \in L} \sum_{c=1}^{M} y_c \log(P_\theta(y_c \mid x))$ (1); $D_{KL} = \frac{1}{N} \sum_{x \in U} P_\theta(y \mid x) \log\left(\frac{P_\theta(y \mid x)}{P_\theta(y \mid \tilde{x})}\right)$ (2); $\mathcal{L}(\theta) = H + \lambda D_{KL}$ (3).", "The UDA procedure first generates the augmented unlabeled sentence pairs.", "Various approaches to paraphrasing can be employed.", "In this work, we utilize the back-translation strategy (Sennrich et al., 2016), where we translate the Chinese sentence pair to English and then translate it back to Chinese.", "This is equivalent to adding noise to the original inputs.", "As the original and the back-translated sentence pairs express the same meaning, our model is expected to predict the same label for both pairs.", "By minimizing the consistency loss, our model can behave consistently no matter whether an original instance or one of its paraphrases is given.", "In this way, the model can be more generalized and robust.", "Besides, when our model is able to predict the same label for both sentence pairs, it means that our model has also learned their label.", "Nuclearity labeling aims at determining the nucleus of a sentence pair.", "The nuclearity of two sentences is correlated with their relationship, thus we jointly train the rhetorical relation and the nuclearity classifiers, where the loss for back-propagation is the sum of the losses of both classifiers.", "Similar to the imbalance issue in rhetorical relation recognition, the 'Equal' class accounts for 51% of the training data.", "We also employ UDA for performance enhancement.",
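A sketch of the joint objective of Eqs. (1)-(3): supervised cross-entropy on labeled pairs plus a KL consistency term between the predictions on original and back-translated unlabeled pairs. Function and argument names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def uda_loss(model, labeled_x, labels, unlabeled_x, augmented_x, lam=1.0):
    """Eq. (3): supervised cross-entropy plus KL consistency (Eqs. (1)-(2)).
    `model` maps a batch of argument pairs to relation logits; `augmented_x`
    are the back-translated versions of `unlabeled_x`. Names are illustrative."""
    # Eq. (1): supervised cross-entropy on labeled pairs.
    sup = F.cross_entropy(model(labeled_x), labels)

    # Eq. (2): KL(P(y|x) || P(y|x~)); the prediction on the original pair
    # is treated as a fixed target (no gradient through it).
    with torch.no_grad():
        p_orig = F.softmax(model(unlabeled_x), dim=-1)
    log_p_aug = F.log_softmax(model(augmented_x), dim=-1)
    consistency = F.kl_div(log_p_aug, p_orig, reduction="batchmean")

    return sup + lam * consistency   # Eq. (3)
```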
"For simplicity, our shift-reduce parser constructs a binary tree.", "However, the parse trees annotated in CDTB-14 are not always binary.", "In the training and the test sets, 8.9% and 10% of the internal nodes have more than two children, respectively.", "Most of the previous works do not handle the binary tree conversion, and some of them further convert the golden trees into binary trees to calculate their scores, resulting in less accurate evaluation.", "In the training stage, we convert the multiway trees to their corresponding left-heavy binary trees (Morey et al., 2018).", "In the testing stage, we convert the binary tree constructed by our parser to the corresponding multiway tree.", "For example, a three-way node, $A \rightarrow X\,Y\,Z$, will be converted to $A \rightarrow A'\,Z$ and $A' \rightarrow X\,Y$.", "The conversion is deterministic and bidirectional, so it is free from ambiguity.", "To decode a transition sequence during the testing stage, the standard method is to choose the action that has the maximum probability at the current time step as the input for the next time step.", "However, this greedy approach might fail to find the sequence that has the maximum overall probability, only because one of the action probabilities in that sequence is small.", "Beam search (Wiseman and Rush, 2016) is a heuristic search algorithm that explores a graph by maintaining the top k results at every time step.", "This approach keeps a number of potential candidates from being discarded.", "Note that the greedy approach is equivalent to beam search with a beam width of k = 1.", "When performing shift-reduce parsing, two kinds of states have only one action to choose from: (1) fewer than two elements in the stack, and (2) no element in the queue.", "Under these two conditions, the probability of the selected action is 1, making our model overly biased toward sequences having many non-optional stages.", "For this reason, we apply an alternative way to compute the sequence probability during beam search.", "Our modified beam search is still fulfilled by maintaining the top k sequences, but the score of a sequence is calculated as the average probability of the selected actions that have more than one choice.",
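The modified sequence scoring can be sketched as below: a transition sequence is scored by the average probability of only those actions taken at states with more than one legal choice. This shows the scoring rule only, not the full beam loop; names are our own.

```python
def sequence_score(action_probs, num_choices):
    """Score a transition sequence by the average probability of actions
    taken at states with more than one legal choice; forced actions
    (empty queue, or fewer than two stack elements) are ignored."""
    scored = [p for p, n in zip(action_probs, num_choices) if n > 1]
    if not scored:          # degenerate case: every action was forced
        return 1.0
    return sum(scored) / len(scored)

# Inside beam search, each partial sequence keeps (state, action_probs,
# num_choices); at every step the top-k sequences under sequence_score
# are retained.
```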
Sun and Kong (2018) do not address all subtasks in Chinese discourse parsing, and our model outperforms SUN in every subtask.", "To examine the effectiveness of UDA, Table 3 shows the performances of rhetorical relation recognition with and without UDA.", "Experimental results show that application of UDA successfully enhances the recall scores of the three minor classes with a little trade-off in the recall score of the dominant class, Coordination.", "In addition, the F-scores of all the four relations are increased.", "In other words, applying UDA deals with the data imbalance issue and improves the overall performance.", "Applying UDA to nuclearity classification also has a similar improvement as Table", "3. Theoretically, beam search with a larger beam width helps find a better solution.", "Table 4, however, our parser is worse when a larger beam width is used, which means the sequence having higher overall score does not ensure the better decoding result.", "Our experiment only shows the beam widths up to five because the scores of worse sequences are already higher than that of the correct sequence in some cases.", "That is, the larger beam widths seem to be unnecessary.", "The reason may be that beam search is not really suitable for the shift-reduce paradigm.", "For example, a sequence might fall into a seriously bad stage but the rest of actions can be easily determined so that the sequence will get a high overall probability.", "This assumption also implies that unlike beam search applied on sequence to sequence model, we cannot judge a transition sequence is good or bad by solely considering its overall score.", "In addition, for longer textual units such as paragraph, human readers and writers may not follow the assumption of overall optimization.", "Instead, human beings read and write sequentially, similar to the greedy nature.", "We also evaluate our approach in English discourse parsing.", "The famous dataset, RST-DT, is used.", "Our model achieves F-scores of 85.0%, 58.8%, 69.9%, and 56.7% in tree construction, rhetorical relation recognition, nuclearity labeling, and all subtasks, respectively.", "The overall performance is similar to that of the state-of-the-art model (Yu et al., 2018).", "This work proposes a standalone, complete Chinese discourse parser.", "We integrate BERT, UDA, and a revised training procedure for constructing a robust shift-reduce parser.", "Our model is compared with a number of previous models, and experimental results show that our model achieves the state-of-the-art performance and is highly competitive with different setups.", "We will explore cross-lingual transfer learning for supporting more languages.", "This research was partially supported by Ministry of Science and Technology, Taiwan, under grants MOST-106-2923-E-002-012-MY3, MOST-109-2634-F-002-040-, MOST-109-2634-F-002-034-, MOST-108-2218-E-009-051-, and by Academia Sinica, Taiwan, under grant AS-TP-107-M05." ]
[ "objective", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "result", "objective", "objective", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "method", "result", "objective", "other" ]
[ "We tackle the task of Term Set Expansion (TSE): given a small seed set of example terms from a semantic class, finding more members of that class.", "The task is of great practical utility, and also of theoretical utility as it requires generalization from few examples.", "Previous approaches to the TSE task can be characterized as either distributional or pattern-based.", "We harness the power of neural masked language models (MLM) and propose a novel TSE algorithm, which combines the pattern-based and distributional approaches.", "Due to the small size of the seed set, fine-tuning methods are not effective, calling for more creative use of the MLM.", "The gist of the idea is to use the MLM to first mine for informative patterns with respect to the seed set, and then to obtain more members of the seed class by generalizing these patterns.", "Our method outperforms state-of-the-art TSE algorithms.", "Implementation is available at: https://github.com/guykush/TermSetExpansion-MPB/ .", "1 Introduction. Term Set Expansion (TSE) is the task of expanding a small seed set of terms into a larger (ideally complete) set of terms that belong to the same semantic category.", "For example, the seed set { orange, apple } should expand into a set of fruits, while { orange, blue } into a set of colors, and { apple, google } into a set of tech companies.", "Beyond being of great practical utility, the TSE task is a challenging instance of a generalization-from-few-examples problem.", "Solving TSE requires the algorithm to: (1) identify the desired concept class based on few examples; and (2) identify additional members of the class.", "We present an effective TSE method which is based on querying large, pre-trained masked language models (MLMs).", "Pre-trained language models (LMs) have been shown to contain semantic (Tenney et al., 2019), syntactic (Goldberg, 2019; Hewitt and Manning, 2019; Linzen et al., 2016) and factual knowledge (Petroni et al., 2019), and to be great starting points for transfer learning to new tasks via fine-tuning on few examples.", "However, the TSE seed sets are too small for fine-tuning, calling for a different approach.", "Our method uses the MLMs directly for the task they were trained for, language modeling, by issuing word-completion queries and operating on the returned word distributions.", "(See (Amrami and Goldberg, 2018) for a method that uses MLM word completions for word-sense induction.)", "Previous solutions to the TSE problem (also called semantic class induction) can be roughly categorized into distributional and pattern-based approaches (Shi et al., 2010).", "Our method can be seen as a combination of the two.", "The distributional approach to TSE (Hindle, 1990; Pantel and Lin, 2002; Pantel et al., 2009; Mamou et al., 2018; Mahabal et al., 2018) operates under the hypothesis that similar words appear in similar contexts (Harris, 1968).", "These methods represent each term in the vocabulary as an embedding vector that summarizes all the contexts the term appears in in a large corpus, and then look for terms with vectors that are similar to those of the seed terms.", "The methods differ in their context definitions and in their way of computing similarities.", "A shortcoming of these methods is that they consider all occurrences of a term in the corpus when calculating its representation, including many contexts that are irrelevant to the concept at hand due to polysemy, noise in the corpus or non-informative contexts.", "(The work of Mahabal et al. (2018) is unique in this regard, considering only a subset of the contexts that are relevant for the expansion, as determined from the seed set.)",
"In contrast, the pattern-based approach considers specific indicative patterns that signal the desired concept, looking for them in a large corpus, and extracting the terms that appear in them.", "Patterns can be binary (Hearst, 1992; Ohshima et al., 2006; Zhang et al., 2009) (such as X or Y), indicating that both X and Y belong to the same class, or unary (Gupta and Manning, 2014; Wang and Cohen, 2007) (fruits such as X; First I painted the wall red, but then I repainted it X), suggesting that X belongs to a certain category (fruit, color).", "The patterns can be determined manually (Hearst, 1992) or automatically (Wang and Cohen, 2007; Gupta and Manning, 2014).", "While well-tailored patterns can be precise and interpretable, a notable shortcoming of pattern-based methods is their lack of coverage, due to the challenge of finding patterns that are specific enough to be accurate yet common enough in a large corpus to be useful.", "Wang and Cohen (2007) use patterns from non-natural language (HTML), while Gupta and Manning (2014) restrict themselves to short patterns of 2-4 words to each side of the masked term.", "Our method: By using MLMs, we combine the power of the pattern-based and the distributional approaches: like the pattern-based approaches, we consider only specific, indicative corpus locations (retaining specificity and transparency).", "We then use the distributional nature of the neural LM to generalize across patterns and corpus locations.", "We use sentences with a single masked location as indicative patterns.", "For example, We took Rexy, our pet [MASK], to the vet. is an indicative pattern for the house animals semantic class.", "Given an initial set of seed terms, we first search the corpus for indicative patterns for members of the set (Section 2.1).", "Intuitively, an indicative pattern is a corpus location which is considered by an LM to be a good fit for all seed members.", "Once we have identified indicative patterns, we extend the set to terms that can appear in similar patterns.", "We propose two methods for doing this.", "The first method (Section 2.2) queries an MLM for completions.", "While effective, this method restricts the expanded set to the LM vocabulary.", "The second method (Section 2.3) uses the MLM to define a similarity metric over patterns, and searches the corpus for terms that appear in patterns that are similar to the indicative ones.", "To summarize, we embrace the pattern-based approach, while using distributional similarity for identifying good patterns as well as for generalizing across patterns.", "2 Method. Task formulation: we are given a seed set S of k terms, $S = t_1, ..., t_k$ (in this work we focus on small values of k; our experiments use k = 3 seed terms), that come from a larger (and unknown) gold set $S_g$.", "Our goal is to return $S_g$.", "Practically, our (and other) algorithms return a ranked list of terms rather than a fixed set.", "The evaluation is then performed over the ranking: ideally, all terms in $S_g$ will be ranked above all terms not in $S_g$.", "We operate in stages.", "First, we search the corpus for $\ell$ indicative masked patterns $m_1, ..., m_\ell$ that are likely to signal the concept class of $S_g$ with high probability.", "Then, we use the patterns to extend the set.", "2.1 Finding indicative masked patterns. A masked pattern m is a sequence of words with a single masked location (marked as [MASK]), where the mask indicates one or more words.",
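As a sketch, candidate masked patterns can be collected by replacing seed-term occurrences in corpus sentences with the mask token; the regex-based matching below is a simplification of our own (a real implementation would need tokenization-aware term boundaries).

```python
import re

def candidate_patterns(corpus_sentences, seed_terms, per_term=100,
                       mask_token="[MASK]"):
    """Collect up to `per_term` candidate masked patterns for each seed term
    by replacing one occurrence of the term in a corpus sentence with the
    mask token. A sketch; names and matching strategy are illustrative."""
    patterns = {t: [] for t in seed_terms}
    for sent in corpus_sentences:
        for t in seed_terms:
            if len(patterns[t]) >= per_term:
                continue
            if re.search(r"\b%s\b" % re.escape(t), sent):
                patterns[t].append(
                    re.sub(r"\b%s\b" % re.escape(t), mask_token, sent, count=1))
    return patterns
```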
"Intuitively, we seek a diverse set of patterns in which all seed terms are ranked high (i.e., have a low rank index) in the MLM's prediction: we look for patterns whose worst-fitting seed term is still high on the list of replacement terms.", "Formally, let LM(m) be the word completions (mask replacements) predicted by the LM for pattern m, ranked by their probability, and let R_LM(t, m) be the rank (index) of term t in LM(m); we assume the seed terms are a part of the LM's vocabulary.", "The score of a pattern is then the maximal rank of any of the seed terms: $s(m_i) = \mathrm{maxRank}(m_i) = \max_{t_j \in S} R_{LM}(t_j, m_i)$ (1).", "We then sort the patterns by s(m_i) and take the patterns with minimal values.", "This min-over-max formulation ensures that the patterns are a good fit for all seed terms; contrast this with a min-over-average formulation, which may score very well on some seed terms but badly on others.",

| # patt \ # sent | 20 | 100 | 300 | 1000 | 2000 | 4000 |
|---|---|---|---|---|---|---|
| 1 | .794 | .729 | .704 | .843 | .939 | .939 |
| 5 | .834 | .938 | .960 | .969 | .981 | .964 |
| 10 | .839 | .938 | .974 | .978 | .990 | .975 |
| 20 | .838 | .932 | .972 | .987 | .990 | .978 |
| 40 | NA | .916 | .962 | .993 | .993 | .989 |
| 80 | NA | .913 | .954 | .992 | .996 | .993 |
| 160 | NA | NA | .949 | .985 | .998 | .997 |
| 600 | NA | NA | NA | .981 | .994 | .993 |

"Table 1: Number of indicative patterns used (# patt) and number of candidate seed-term-containing sentences (# sent) used for selecting these indicative patterns. Set is the NFL team set, method is MPB1. Every value is an average MAP over 5 seed sets of size 3 (chosen randomly, fixed for all values of # sent and # patt). NA: # patt cannot be bigger than # sent.", "To achieve the diversity objective, we use the following heuristic: after sorting all candidate patterns m_i by s(m_i), rather than taking the first ℓ items, we go over the sorted list in order and keep a pattern only if it differs in at least 50% of its tokens from every already-kept pattern.", "We do this until collecting ℓ patterns.", "2.2 Seed set extension via MLM query", "Having identified indicative patterns, we now turn to suggest terms for expanding the seed set.", "Each indicative pattern m_i naturally provides a ranked list of candidate terms LM(m_i) = t_1, ..., t_|V|, where V is the LM's vocabulary and each term t_j is scored by its pattern-conditional probability.", "We combine the term scores from all chosen indicative patterns using a product-of-experts approach, scoring each term by the product of probabilities (sum of log probabilities) assigned to it by each context.", "Let p_LM(t | m_i) be the probability assigned to vocabulary term t in pattern m_i.", "The term score is: $\mathrm{score}(t) = \sum_{i=1}^{\ell} c_i \log p_{LM}(t \mid m_i)$ (2), where $c_i = \mathrm{maxRank}(m_i)^{-1} / \sum_{j=1}^{\ell} \mathrm{maxRank}(m_j)^{-1}$ is a weighing factor for indicative pattern m_i, giving more weight to tighter indicative patterns.", "This method is fast and effective, requiring only ℓ queries to the LM.",
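As a rough illustration of Equations (1) and (2), the sketch below scores candidate patterns by their worst-fitting seed term, applies the 50%-token-difference diversity filter, and combines pattern-conditional log-probabilities into the MPB1 term score. Here `rank_of(term, pattern)` and `log_prob_of(term, pattern)` are hypothetical stand-ins for MLM queries (the rank and log-probability of a token among the mask-fill predictions); the real implementation may differ.

```python
def pattern_score(pattern, seeds, rank_of):
    # Eq. 1: maxRank, the rank of the worst-fitting seed term
    # (ranks assumed 1-based; lower is better).
    return max(rank_of(term, pattern) for term in seeds)

def select_indicative_patterns(candidates, seeds, rank_of, n_patterns):
    # Min-over-max selection with the diversity heuristic; token-set
    # overlap is used as a rough approximation of token difference.
    kept = []
    for m in sorted(candidates, key=lambda m: pattern_score(m, seeds, rank_of)):
        overlaps = [len(set(m) & set(k)) / max(len(set(m)), 1) for k in kept]
        if all(o <= 0.5 for o in overlaps):
            kept.append(m)
        if len(kept) == n_patterns:
            break
    return kept

def mpb1_score(term, patterns, seeds, rank_of, log_prob_of):
    # Eq. 2: product of experts with weights c_i proportional to
    # 1 / maxRank(m_i), normalized to sum to one.
    inv = [1.0 / pattern_score(m, seeds, rank_of) for m in patterns]
    weights = [w / sum(inv) for w in inv]
    return sum(c * log_prob_of(term, m) for c, m in zip(weights, patterns))
```

The min-over-max choice in `pattern_score` is what keeps a single well-fitting seed term from masking a pattern that is a poor fit for the others.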
"However, it assumes that all the desired terms from S_g appear as vocabulary items in the LM.", "This assumption often does not hold in practice: first, for efficiency reasons, pre-trained LM vocabularies are often small (around 50k items), precluding rare words.", "Second, many terms of interest are multi-word units that do not appear as single items in the LM vocabulary.", "2.3 Extended coverage via pattern similarity", "We seek a term expansion method that will utilize the power of the pre-trained LM without being restricted by its vocabulary: we would like to identify rare words, out-of-domain words, and multi-word units.", "Our solution is to generalize the indicative patterns.", "Rather than looking for terms that match the patterns, we instead search a large corpus for patterns which are similar to the indicative ones, and collect the terms that appear within them.", "Following the distributional hypothesis, these terms should be of the desired concept class.", "By looking at the patterns that surround corpus locations, we are no longer restricted by the LM vocabulary to single-token terms.", "However, considering all corpus locations as candidate patterns is prohibitively expensive.", "Instead, we take a ranking approach and restrict ourselves only to corpus locations that correspond to occurrences of candidate terms returned by a high-recall algorithm, for example one based on simple distributional similarity to the seed terms; in this work we use the nearest neighbours returned by the sense2vec model (Trask et al., 2015), as implemented in https://spacy.io/universe/project/sense2vec.", "We use the LM to define a similarity measure between two masked patterns that aims to capture our desired notion of similarity: masked patterns are similar if they are likely to be filled by the same terms.", "Let top_q(LM(m_i)) be the q highest scoring terms for pattern m_i.", "We define the similarity between two patterns as the fraction of shared terms in their top-q predictions (q being a hyperparameter): $\mathrm{sim}(m_i, m_j) = |\mathrm{top}_q(LM(m_i)) \cap \mathrm{top}_q(LM(m_j))| / q$.",

| Set | k=1 | k=5 | k=50 | k=300 | k=700 | k=3000 |
|---|---|---|---|---|---|---|
| States | .693 | .848 | .986 | .965 | .972 | .975 |
| NFL | .876 | .939 | .938 | .919 | .921 | .916 |

"Table 2: Effect of the similarity measure's k on performance, using MPB2 on a single random seed from each set.", "For a candidate term t, let pats(t) = m^t_1, ..., m^t_n be the set of patterns derived from it: sentences that contain t, where t is replaced with a mask.", "Note that t can be an arbitrary word or word sequence.", "We wish to find terms for which the similarity between pats(t) and the indicative patterns is high.", "However, since words have different senses, it is sufficient for only some patterns in pats(t) to be similar to the patterns in m_1, ..., m_ℓ.", "We score a term t as: $\mathrm{score}(t) = \sum_{i=1}^{\ell} c_i \max_{m \in \mathrm{pats}(t)} \mathrm{sim}(m_i, m)$ (3), where c_i is the pattern weighing factor from Equation (2).", "As $\sum_{i=1}^{\ell} c_i = 1$, the term score score(t) of every term t lies in [0, 1].",
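A minimal sketch of the similarity measure and the MPB2 scoring of Equation (3); `top_q(pattern, q)` is an assumed helper returning the q most probable mask-fillers for a pattern under the MLM, not a real API.

```python
def sim(m_i, m_j, top_q, q):
    # Fraction of shared terms among the q most probable mask replacements.
    return len(set(top_q(m_i, q)) & set(top_q(m_j, q))) / q

def mpb2_score(term_patterns, indicative, weights, top_q, q):
    """Eq. 3: for each indicative pattern m_i, take the best match among
    the patterns derived from the candidate term (max over pats(t)),
    weighted by the same c_i factors as in Eq. 2; the result is in [0, 1]."""
    return sum(c * max(sim(m_i, m, top_q, q) for m in term_patterns)
               for c, m_i in zip(weights, indicative))
```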
"3 Experiments and Results", "We refer to the method in Section 2.2 as MPB1 and the method in Section 2.3 as MPB2.", "Setup.", "In our experiments we use BERT (Devlin et al., 2019) as the MLM, and English Wikipedia as the corpus.", "Following previous TSE work (e.g., Mahabal et al. (2018)), we measure performance using MAP (using MAP@70 for the open set).", "For each method we report the average MAP over several runs (the exact number is mentioned under each table), each with a different random seed set of size 3.", "Based on preliminary experiments, for MPB1 we use ℓ = 160 and L = 2000/k, and for MPB2 we use ℓ = 20 and L = 2000/k; see the additional experiments below for a justification of these parameter choices.", "When comparing different systems (i.e., in Table 3), each system sees the same random seed sets as the others.", "For smaller sets we expand to a set of size 200, while for the Countries and Capitals sets, which have expected sizes of > 100, we expand to 350 items.", "Dataset.", "Automatic TSE evaluation is challenging.", "A good TSE evaluation set should be complete (contain all terms in the semantic class), clean (not contain other terms) and comprehensive (contain all different synonyms for all terms).", "These are hard to come by.", "Indeed, previous work either used a small number of sets, or used some automatic set-acquiring method, which commonly is not complete.", "We curated a dataset with 7 closed, well-defined sets, which we make publicly available.", "The sets are: National Football League teams (NFL, size: 32), Major League Baseball teams (MLB, 30), US states (States, 50), Countries (Cntrs, 195), European countries (Euro, 44), Capital cities (Caps, 195) and Presidents of the USA (Pres, 44).", "We also provide one open-class set: Music Genres (Genre).", "This set was created by manually verifying the items in the union of the outputs of all the different algorithms.", "This set contains around 600 unique items.", "Compared Methods.", "We compare our methods, MPB1 (MLM-pattern-based) (Section 2.2) and MPB2 (Section 2.3), to two state-of-the-art systems: setExpander (SE) (Mamou et al., 2018) and category builder (CB) (Mahabal et al., 2018).", "For SE, we use the non-grouping release version because it reaches better results on our dataset than the grouping one.", "Following Mahabal et al. (2018), we limit MPB2 to the 200,000 most frequent terms; MPB2 can work with any number of terms and is limited only by the candidate-supplying method (in this implementation sense2vec, which has 3,400,000 terms).", "We also compare to two baselines: the first, BB (basic-BERT), is a baseline for MPB1.", "This is a BERT-based baseline that uses the MPB1 method on patterns derived from sentences that include seed terms, without the selection method described in Section 2.1.", "The second, S2V, is a baseline for MPB2.", "This is a basic distributional method that uses sense2vec (Trask et al., 2015) representations (https://explosion.ai/demos/sense2vec), which is also our candidate acquisition method for MPB2.", "As MPB2 relies on external candidate generation, we also report on the oracle case MPB2+O, where we expand the S2V-generated candidate list to include all the members of the class.", "Main Results.", "Our main results are reported in Table 3.", "Our first method, MPB1, achieves the best scores on two of the three sets suitable for its limitations (where all or most of the set's terms are in the LM's vocabulary), and second-best results on the third.", "MPB2 outperforms all other methods on 5 out of 7 closed sets when assuming gold-standard candidates (MPB2+O), and even when considering the missing candidates it outperforms other expanders on 4 out of 7 closed sets, averaging the best MAP score over all sets.", "MPB1's relatively poor performance on the presidents set can be a result of the basic terms MPB1 considers.",
"MPB1 ranks only terms which are in the LM's vocabulary, which means that while other expanders can rank terms like “President George W. Bush”, MPB1 will consider terms like “bush”, which are harder to ascribe to the presidents set.", "While this is true for all sets, it seems to be more significant for a set containing person names.",

| Method | NFL | MLB | Pres | States | Cntrs | Euro | Caps | Genre | Avg |
|---|---|---|---|---|---|---|---|---|---|
| SE (SetExpander) | .54 | .45 | .33 | .55 | .55 | .61 | .14 | .99 | .52 |
| CB (CategoryBuilder) | .98 | .97 | .70 | .93 | .74 | .46 | .21 | .67 | .71 |
| BB (BERT Baseline) | .91 | .92* | .52** | NA | NA | NA | NA | NA | .78 |
| MPB1 (Section 2.2) | .98 | .99* | .63** | NA | NA | NA | NA | NA | .87 |
| S2V (Sense2Vec Baseline) | .95 | .80 | .18 | .94 | .71 | .78 | .21 | .90 | .68 |
| MPB2 (Section 2.3) | .95 | .82 | .37 | .98 | .76 | .79 | .27 | .98 | .74 |
| MPB2+O (Sec 2.3, Oracle) | .95 | .90 | .88 | .98 | .91 | .81 | .80 | NA' | .89 |

"Table 3: Main results.", "While other methods tend to stand out on either the closed sets (CB) or the open set (SE), MPB2 shows good performance on both kinds of sets.", "Note that SE does not rank the seed terms, as opposed to the other methods; for fairness, we add them at the beginning of the returned list before computing the MAP score.", "The results also suggest that a better candidate-acquiring method may lead to even better performance.", "Additional experiments.", "How many sentences should we query when searching for indicative patterns, and how many patterns should we retain?", "Table 1 shows a grid of these parameters.", "We use the NFL set for this experiment, as the terms in this set all have more than one meaning, and for most the common usage is not the one that belongs to the NFL set (e.g., jets, dolphins).", "Therefore, this set should give a pessimistic estimation of the number of sentences we need to extract to find quality indicative patterns.", "Results imply that 2000 appearances of seed terms are sufficient, and that good results can be obtained also with fewer instances.", "This shows that, beyond the data used to train the initial MLM, we do not require a large corpus to achieve good results, suggesting applicability also in new domains.", "While for MPB1 there are no prominent downsides to using a large number of indicative patterns, for MPB2 doing so would force us to use a large number of occurrences of the candidate terms as well.", "This would (1) be costly run-time-wise, and (2) many occurrences of rare terms might not always be available.", "Therefore, we choose different parameters for MPB1 and MPB2.", "While in both we use 2000 sentences to search for these indicative patterns (L = 2000/k), for MPB1 we use 160 indicative patterns (ℓ = 160) and for MPB2 we use only 20 of them (ℓ = 20).", "How sensitive is the algorithm to the choice of k when computing the pattern similarity?", "Table 2 shows that the similarity measure is effective for various k values, with maximal performance at 50.", "Finally, how do the different methods behave in a case where the seed terms are a part of a subset?", "Table 4 shows a case where the seed terms are European countries.", "Ideally, we would like the top results to be European countries, later results to be non-European countries, and then unrelated terms.", "MPB2+O achieves the best MAP scores on both the set and the subset.", "In the subset case, even when not provided with all oracle terms, MPB2 is better than all other expanders.", "While other expanders tend to reach stronger results on either the set or the subset, MPB2+O achieves similar scores on both.", "We introduce an LM-based TSE method, reaching state-of-the-art results.", "The method uses the power
of LM predictions to locate indicative patterns for the concept class indicated by the seed terms, and then to generalize these patterns to other corpus locations.", "Beyond strong TSE results, our method demonstrates a novel use of pre-trained MLMs, using their predictions directly rather than relying on their states for fine-tuning.", "This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT)." ]
[ "result", "abstain", "abstain", "objective", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "objective", "other" ]
[ "Nowadays, fake news detection, which aims to verify whether a news document is trusted or fake, has become urgent and important.", "Most existing methods rely heavily on linguistic and semantic features from the news content, and fail to effectively exploit external knowledge which could help determine whether the news document is trusted.", "In this paper, we propose a novel end-to-end graph neural model called CompareNet, which compares the news to the knowledge base (KB) through entities for fake news detection.", "Considering that fake news detection is correlated with topics, we also incorporate topics to enrich the news representation.", "Specifically, we first construct a directed heterogeneous document graph for each news document incorporating topics and entities.", "Based on the graph, we develop a heterogeneous graph attention network for learning the topic-enriched news representation as well as the contextual entity representations that encode the semantics of the news content.", "The contextual entity representations are then compared to the corresponding KB-based entity representations through a carefully designed entity comparison network, to capture the consistency between the news content and the KB.", "Finally, the topic-enriched news representation, combined with the entity comparison features, is fed into a fake news classifier.", "Experimental results on two benchmark datasets demonstrate that CompareNet significantly outperforms state-of-the-art methods.", "production, dissemination and consumption.", "Fake news are news documents that are intentionally and verifiably false, and could mislead readers (Allcott and Gentzkow, 2017).", "Fake news can easily misguide public opinion, cause a crisis of confidence, and disturb the social order (Vosoughi et al., 2018).", "It is well known that fake news exerted an influence in the 2016 US presidential elections (Allcott and Gentzkow, 2017).", "Thus, it is very important to develop effective methods for early fake news detection based on the textual content of the news document.", "Some existing fake news detection methods rely heavily on various hand-crafted linguistic and semantic features for differentiating between news documents (Conroy et al., 2015; Rubin et al., 2016; Rashkin et al., 2017; Khurana and Intelligentie, 2017; Shu et al., 2020).", "To avoid feature engineering, deep neural models such as Bi-LSTM and convolutional neural networks (CNN) have been employed (Oshikawa et al., 2020; Wang, 2017; Rodríguez and Iglesias, 2019).", "However, they fail to consider the sentence interactions in the document.", "Vaibhav et al. showed that trusted news and fake news have different patterns of sentence interactions (Vaibhav et al., 2019).", "They modeled a news document as a fully connected sentence graph and proposed a graph attention model for fake news detection.", "Although these existing approaches can be effective, they fail to fully exploit external KBs, which could help determine whether the news is fake or trusted.", "External KBs such as Wikipedia contain a large amount of high-quality structured subject-predicate-object triplets and unstructured entity descriptions, which could serve as evidence for detecting fake news.", "As shown in Figure 4, a news document claiming that mammograms are not effective at detecting breast tumors is likely to be detected as fake news given the knowledge that “the goal of mammography is the early detection of breast cancer” from the Wikipedia entity description page.", "Pan et al. proposed to construct knowledge graphs from positive and negative news, and apply TransE to learn triplet scores for fake news detection (Pan et al., 2018).", "Nevertheless, the performance is largely influenced by the construction of the knowledge graph.", "In this paper, to take full advantage of the external knowledge, we propose a novel end-to-end graph neural model CompareNet which directly compares the news to the KB through entities for fake news detection.", "In CompareNet, we also consider using topics to enrich the news document representation for improving fake news detection, since fake news detection and topics are highly correlated (Zhang et al., 2020; Jin et al., 2016).", "For example, the news documents in the health topic are inclined towards false, while the documents belonging to the economy topic are biased to be trusted instead.", "Particularly, we first construct a directed heterogeneous document graph for each news document, containing sentences, topics and entities as nodes.", "The sentences are fully connected in bi-direction.", "Each sentence is also connected with its top relevant topics in bi-direction.", "If a sentence contains an entity, one directed link is built from the sentence to the entity.", "The reason for building one-way links from sentences to entities is to ensure that we can learn contextual entity representations that encode the semantics of the news, while avoiding the influence of the true entity knowledge on the news representation.", "Based on the directed heterogeneous document graph, we develop a heterogeneous graph attention network to learn topic-enriched news representations and contextual entity representations.", "The learned contextual entity representations are then compared to the corresponding KB-based entity representations with a carefully designed entity comparison network, in order to capture the semantic consistency between the news content and the external KB.", "Finally, the topic-enriched news representations and the entity comparison features are combined for fake news classification.", "To facilitate related research, we release both our code and dataset to the public.", "In summary, our main contributions include:", "1) In this paper, we propose a novel end-to-end graph neural model CompareNet which compares the news to the external knowledge through entities for fake news detection.", "2) In CompareNet, we also consider the useful topic information.", "We construct a directed heterogeneous document graph incorporating topics and entities.", "Then we develop heterogeneous graph attention networks to learn topic-enriched news representations.", "A novel entity comparison network is designed to compare the news to the KB.", "3) Extensive experiments on two benchmark datasets demonstrate that our model significantly outperforms state-of-the-art models on fake news detection by effectively incorporating external knowledge and topic information.", "Fake news detection has attracted much attention in recent years (Zhou and Zafarani, 2020; Oshikawa et al., 2020).", "A lot of works also focus on the related problem, i.e., fact checking, which aims to search evidence from external knowledge to verify the veracity of a claim (e.g., a subject-predicate-object triple) (Thorne et al., 2018; Zhou et al., 2019; Zhong et al., 2020).", "Generally, fake news detection usually focuses on news events while fact-checking is broader (Oshikawa et al., 2020).", "The approaches for fake news detection can be divided into two categories: social-based and content-based.",
"Social context related to news documents contains rich information such as user profiles and social relationships to help detect fake news.", "Social-based models basically include stance-based and propagation-based approaches.", "Stance-based models utilize users' opinions to infer news veracity (Jin et al., 2016; Wu et al., 2019).", "Tacchini et al. constructed a bipartite network of users and posts with 'like' stance information, and proposed a semi-supervised probabilistic model to predict the likelihood of posts being hoaxes (Tacchini et al., 2017).", "Propagation-based approaches for fake news detection are based on the basic assumption that the credibility of a news event is highly related to the credibilities of relevant social media posts.", "Figure 1: An example of a directed heterogeneous document graph incorporating topics and entities.", "Both homogeneous (Jin et al., 2016) and heterogeneous credibility networks (Gupta et al., 2012; Shu et al., 2019; Zhang et al., 2020) have been built to model the propagation process.", "For instance, Zhang et al. (2020) constructed a heterogeneous network of news articles, creators and news subjects, and proposed a deep diffusive network model for incorporating the network structure information to simultaneously detect fake news articles, creators and subjects.", "On the other hand, news contents contain the clues to differentiate fake and trusted news.", "A lot of existing works extract specific writing styles such as lexical and syntactic features (Conroy et al., 2015; Rubin et al., 2016; Khurana and Intelligentie, 2017; Rashkin et al., 2017; Shu et al., 2020; Oshikawa et al., 2020) and sensational headlines (Potthast et al., 2018; Sitaula et al., 2019) for a fake news classifier.", "To avoid hand-crafted feature engineering, neural models have been proposed (Wang, 2017; Rodríguez and Iglesias, 2019).", "For example, Ibrain et al. applied deep neural networks, such as Bi-LSTM and convolutional neural networks (CNN), for fake news detection (Rodríguez and Iglesias, 2019).", "However, these works fail to consider the different sentence interaction patterns between trusted and fake news documents.", "Vaibhav et al. proposed to model a document as a sentence graph capturing the sentence interactions and applied graph attention networks for learning the document representation (Vaibhav et al., 2019).",
"Pan et al. proposed to construct knowledge graphs from positive and negative news, and apply TransE to learn triplet scores for fake news detection (Pan et al., 2018).", "Nevertheless, they relied heavily on the quality of the construction of knowledge graphs.", "In this paper, we propose a novel graph neural model CompareNet which directly compares the news to external knowledge for fake news detection.", "Considering that the detection of fake news is correlated with topics, we also use topics to enrich the news representation for improving fake news detection.", "Some works (Wang, 2017; Khattar et al., 2019; Wang et al., 2020) also consider incorporating multi-modal features such as images for improving fake news detection.", "In this section, we detail our proposed fake news detection model CompareNet, which directly compares the news to external knowledge for fake news detection.", "As shown in Figure 2, we also consider topics for enriching the news representation since fake news detection is highly correlated with topics (Zhang et al., 2020).", "Specifically, we first construct a directed heterogeneous document graph for each news document incorporating topics and entities, as shown in Figure 1.", "The graph well captures the interactions among sentences, topics and entities.", "Based on the graph, we develop a heterogeneous graph attention network to learn the topic-enriched news representation as well as the contextual entity representations that encode the semantics of the news document.", "To fully leverage the external KB, we take the entities as the bridge between the news document and the KB.", "We compare the contextual entity representations with the corresponding KB-based entity representations using a carefully designed entity comparison network.", "Finally, the obtained entity comparison features are combined with the topic-enriched news document representation for fake news detection.", "For each news document d, we construct a directed heterogeneous document graph $G = (V, E)$ incorporating topics and entities, as shown in Figure 1.", "There are three kinds of nodes in the graph: sentences $S = \{s_1, s_2, \ldots, s_m\}$, topics $T = \{t_1, t_2, \ldots, t_K\}$ and entities $E = \{e_1, e_2, \ldots, e_n\}$, i.e., $V = S \cup T \cup E$.", "The set of edges E represents the relations among sentences, topics and entities.", "The details of constructing the graph are described as follows.", "We first split the news document into a set of sentences.", "Sentences are bidirectionally connected with each other in the graph, capturing the interaction of each sentence with every other sentence.", "Since topic information is important for fake news detection (Zhang et al., 2020), we apply unsupervised LDA (Blei et al., 2003) (the total topic number K is set to 100) to mine the latent topics T from all the sentences of all the documents in our dataset.", "Specifically, each sentence is taken as a pseudo-document and is assigned to the top P relevant topics with the largest probabilities.", "Thus, each sentence is also connected with its top P assigned topics in bi-direction, allowing the useful topic information to propagate among the sentences.", "Note that we can also deal with newly arriving news documents by inferring the topics with the trained LDA.", "We identify the entities E in the document d and map them to Wikipedia using the entity linking tool TAGME (https://sobigdata.d4science.org/group/tagme/).", "If a sentence s contains an entity e, we build a one-way directed edge from the sentence to the entity e, in order to allow information propagation only from sentences to entities.",
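The construction above can be sketched as follows. `lda_top_topics` and `link_entities` are placeholders for the trained LDA model and the TAGME linker; this illustrates the edge-building logic under those assumptions, not the released code.

```python
import itertools

def build_document_graph(sentences, lda_top_topics, link_entities, top_p=2):
    """Return typed edges of the directed heterogeneous document graph."""
    edges = []
    # Sentence-sentence: fully connected, in both directions.
    for i, j in itertools.permutations(range(len(sentences)), 2):
        edges.append((("sent", i), ("sent", j)))
    for i, sent in enumerate(sentences):
        # Sentence-topic: bidirectional edges to the top-P topics.
        for t in lda_top_topics(sent, top_p):
            edges.append((("sent", i), ("topic", t)))
            edges.append((("topic", t), ("sent", i)))
        # Sentence-entity: one-way edges only, so the true entity knowledge
        # cannot flow back into the news representation.
        for e in link_entities(sent):
            edges.append((("sent", i), ("entity", e)))
    return edges
```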
"In this way, we can avoid integrating the true entity knowledge directly into the news representation, which may mislead the detection of fake news.", "Based on the above directed heterogeneous document graph G, we develop a heterogeneous graph attention network for learning the news representation as well as the contextual entity representations.", "It considers not only the weights of different nodes with different types (Hu et al., 2019) but also the edge directions in the heterogeneous graph.", "Formally, we have three types $\mathcal{T} = \{\tau_1, \tau_2, \tau_3\}$ of nodes: sentences S, topics T and entities E, with different feature spaces.", "We apply an LSTM to encode a sentence $s = \{w_1, \ldots, w_m\}$ and obtain its feature vector $x_s \in \mathbb{R}^M$.", "Each entity $e \in E$ is initialized with the entity representation $e_{KB} \in \mathbb{R}^M$ learned from the external KB (see Subsection 3.3.1).", "Each topic $t \in T$ is initialized with a one-hot vector $x_t \in \mathbb{R}^K$.", "Next, consider the graph G = (V, E), where V and E represent the set of nodes and edges respectively.", "Let $X \in \mathbb{R}^{|V| \times M}$ be a matrix containing the nodes with their features $x_v \in \mathbb{R}^M$ (each row $x_v$ is the feature vector of a node v).", "A and D are the adjacency matrix and the degree matrix, respectively.", "The heterogeneous convolution layer updates the (l+1)-th layer representation $H^{(l+1)}$ of the nodes by aggregating the features $H^{(l)}_{\tau}$ of their neighboring nodes of the different types $\tau$ (initially, $H^{(0)} = X$): $H^{(l+1)} = \sigma\left(\sum_{\tau \in \mathcal{T}} B_{\tau} \cdot H^{(l)}_{\tau} \cdot W^{(l)}_{\tau}\right)$ (1), where $\sigma(\cdot)$ denotes the activation function.", "Nodes with different types have different transformation matrices $W^{(l)}_{\tau}$.", "The transformation matrix $W^{(l)}_{\tau}$ accounts for the different feature spaces and projects them into an implicit common space.", "$B_{\tau}$ is the attention matrix, whose rows represent all the nodes and whose columns represent their neighboring nodes of type $\tau$.", "Its element $\alpha_{vv'}$ in the v-th row and the v'-th column is computed as follows: $\alpha_{vv'} = \mathrm{Softmax}_{v'}\left(\sigma(\nu^{T} \cdot [h_v; h_{v'}]) \cdot \mu_{\tau}\right)$ (2), where $\nu$ is the attention vector and $\mu_{\tau}$ is the type-level attention weight.", "$h_v$ and $h_{v'}$ are respectively the representations of the current node v and its neighboring node v'.", "The Softmax function is applied to normalize across the neighboring nodes of node v.", "We calculate the type-level attention weights based on the current node embedding $h_v$ and the type embedding $h_{\tau} = \sum_{v'} \hat{A}_{vv'} h_{v'}$ (the weighted sum of the neighboring node embeddings $h_{v'}$ of type $\tau$, where the weight matrix $\hat{A} = D^{-\frac{1}{2}}(A + I)D^{-\frac{1}{2}}$ is the normalized adjacency matrix with added self-connections) as follows: $\mu_{\tau} = \mathrm{Softmax}_{\tau}\left(\sigma(\nu_{\tau}^{T} \cdot [h_v; h_{\tau}])\right)$ (3), where $\nu_{\tau}$ is the attention vector for the type $\tau$.", "The Softmax function is applied to normalize across all the types.", "After L layers of heterogeneous graph convolution, we finally obtain all the node representations (including sentences and entities), aggregating neighborhood semantics.", "We use max pooling over the representations of the sentence nodes $H_s \in \mathbb{R}^N$ to obtain the final topic-enriched news document embedding $H_d \in \mathbb{R}^N$.", "The learned entity representations that encode the contextual semantics of the document are taken as the contextual entity representations $e_c \in \mathbb{R}^N$.",
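A loop-based sketch in the spirit of Equations (1)-(3), simplified for readability: the type embedding h_tau is a plain mean rather than the A-hat-weighted sum, sigma is a sigmoid, and all names are illustrative rather than taken from the released code.

```python
import numpy as np

def hetero_gat_layer(feats, ntype, in_nbrs, W, nu, nu_type):
    """One heterogeneous graph attention step. feats[v]: input feature of
    node v; ntype[v]: its type; in_nbrs[v]: nodes with edges into v;
    W[tau]: per-type projection; nu / nu_type[tau]: attention vectors."""
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    # Project each node into the common space with its type's matrix.
    z = {v: W[ntype[v]] @ x for v, x in feats.items()}
    out = {}
    for v in feats:
        nbrs = in_nbrs[v]
        if not nbrs:                       # no incoming edges: keep own state
            out[v] = np.maximum(z[v], 0.0)
            continue
        types = sorted({ntype[u] for u in nbrs})
        # Type-level attention mu_tau (cf. Eq. 3), using a mean type summary.
        h_tau = {t: np.mean([z[u] for u in nbrs if ntype[u] == t], axis=0)
                 for t in types}
        raw = np.array([sig(nu_type[t] @ np.concatenate([z[v], h_tau[t]]))
                        for t in types])
        e = np.exp(raw)
        mu = dict(zip(types, e / e.sum()))
        # Node-level attention alpha_{vv'} scaled by mu_tau (cf. Eq. 2).
        s = np.array([sig(nu @ np.concatenate([z[v], z[u]])) * mu[ntype[u]]
                      for u in nbrs])
        es = np.exp(s)
        alpha = es / es.sum()
        # Eq. 1: attention-weighted aggregation followed by a ReLU.
        out[v] = np.maximum(sum(a * z[u] for a, u in zip(alpha, nbrs)), 0.0)
    return out
```

Because `in_nbrs` only lists incoming edges, entity nodes receive messages from sentences while sentence nodes never receive entity messages, matching the one-way edges of the document graph.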
"In this subsection, we detail our entity comparison network, which compares the learned contextual entity embeddings $e_c$ to the corresponding KB-based entity embeddings $e_{KB}$.", "We believe entity comparison features could improve fake news detection, based on the assumption that the $e_c$ learned from a trusted news document can be well aligned with the corresponding $e_{KB}$, while the inverse holds for fake news.", "We first illustrate how to take full advantage of both the structured subject-predicate-object triplets and the unstructured textual entity descriptions in the KB (i.e., Wikipedia) to learn the KB-based entity representations $e_{KB}$.", "Structural Embedding.", "A wide range of knowledge graph embedding methods can be applied to obtain structured entity embeddings.", "Due to the simplicity of TransE (Bordes et al., 2013), we adopt TransE to learn entity representations $e_s \in \mathbb{R}^M$ from the triplets.", "Formally, given a triplet (h, r, t), TransE regards a relationship r as a translation vector r from the head entity h to the tail entity t, namely h + r = t.", "Textual Embedding.", "For each entity, we take the first paragraph of the corresponding Wikipedia page as its text description.", "Then we apply an LSTM (Hochreiter and Schmidhuber, 1997) to learn entity representations $e_d \in \mathbb{R}^M$ that encode the entity descriptions.", "Gating Integration.", "Since both the structural triplets and the textual description provide valuable information about an entity, we integrate this information into a joint representation.", "Particularly, as we have the structural embedding $e_s$ and the textual embedding $e_d$, we adopt a learnable gating function to integrate the entity embeddings from the two sources.", "Formally, $e_{KB} = g_e \odot e_s + (1 - g_e) \odot e_d$ (4), where $g_e \in \mathbb{R}^M$ is a gating vector (w.r.t. the entity e) to trade off information from the two sources, with elements in [0, 1].", "$\odot$ denotes element-wise multiplication.", "The gating vector $g_e$ means that the dimensions of $e_s$ and $e_d$ are summed with different weights.", "To constrain the value of each element to [0, 1], we compute the gate $g_e$ with the Sigmoid function: $g_e = \sigma(\hat{g}_e)$ (5), where $\hat{g}_e \in \mathbb{R}^M$ is a real-valued vector learned in the training process.", "After fusing the two types of embeddings with the gating function, we obtain the final KB-based entity embeddings $e_{KB} \in \mathbb{R}^M$, which encode both the structural information from the triplets and the textual information from the entity descriptions in the KB.", "We then perform entity-to-entity comparison between the news document and the KB, to capture the semantic consistency between the news content and the KB.", "We calculate a comparison vector $a_i$ between each contextual entity representation $e_c \in \mathbb{R}^N$ and its corresponding KB-based entity embedding $e_{KB} \in \mathbb{R}^M$: $a_i = f_{cmp}(e_c, W_e \cdot e_{KB})$ (6), where $f_{cmp}(\cdot)$ denotes the comparison function, and $W_e \in \mathbb{R}^{N \times M}$ is a transformation matrix.", "To measure the embedding closeness and relevance (Shen et al., 2018), we design our comparison function as: $f_{cmp}(x, y) = W_a [x - y; x \odot y]$ (7), where $W_a \in \mathbb{R}^{N \times 2N}$ is a transformation matrix and $\odot$ is the Hadamard (element-wise) product.", "The final output comparison feature vector $C \in \mathbb{R}^N$ is obtained by max pooling over the alignment vectors $A = [a_1, a_2, \ldots, a_n]$ of all the entities $E = \{e_1, e_2, \ldots, e_n\}$ in the news document.", "After obtaining the comparison vector $C \in \mathbb{R}^N$ and the final news document representation vector $H_d \in \mathbb{R}^N$, we concatenate them and feed the result into a Softmax layer for fake news classification.", "Formally, $Z = \mathrm{Softmax}(W_o [H_d; C] + b_o)$ (8), where $W_o$ and $b_o$ are the parameter matrix and bias vector of a linear transformation.",
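A compact sketch of the gating and comparison computations (Eqs. 4-7). The weight arguments stand in for parameters that are learned end-to-end, and the projection in Eq. 6 follows the reconstruction from the surrounding text.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def kb_entity_embedding(e_s, e_d, g_hat):
    # Eqs. 4-5: per-dimension gate between the structural (TransE) and
    # description (LSTM) embeddings; g_hat is the learned real-valued vector.
    g = sigmoid(g_hat)
    return g * e_s + (1.0 - g) * e_d

def entity_comparison(e_c, e_kb, W_e, W_a):
    # Eqs. 6-7: project e_kb into the contextual space, then compare with
    # f_cmp(x, y) = W_a [x - y ; x * y].
    y = W_e @ e_kb
    return W_a @ np.concatenate([e_c - y, e_c * y])

def comparison_feature(contextual, kb_embeddings, W_e, W_a):
    # Max-pool the per-entity comparison vectors into the feature C.
    A = [entity_comparison(c, k, W_e, W_a)
         for c, k in zip(contextual, kb_embeddings)]
    return np.max(np.stack(A), axis=0)
```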
"During model training, we exploit the cross-entropy loss over the training data with the L2-norm of the parameters: $\mathcal{L} = -\sum_{i \in D_{train}} \sum_{j} Y_{ij} \log Z_{ij} + \eta \lVert \Theta \rVert_2$ (9), where $D_{train}$ is the set of news documents used for training, Y is the corresponding label indicator matrix, $\Theta$ denotes the model parameters, and $\eta$ is the regularization factor.", "For model optimization, we adopt the gradient descent algorithm.",
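For completeness, a one-function sketch of the objective in Eq. 9 (the minus sign reflects that the cross-entropy is minimized); all names are illustrative.

```python
import numpy as np

def training_loss(Z, Y, theta, eta):
    """Eq. 9: cross-entropy over predicted label distributions Z and the
    one-hot label matrix Y, plus an L2 penalty on the flattened parameter
    vector theta with regularization factor eta."""
    ce = -np.sum(Y * np.log(Z + 1e-12))  # epsilon guards against log(0)
    return ce + eta * np.linalg.norm(theta)
```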
"We conduct extensive experiments across various settings and datasets.", "Following the previous work (Vaibhav et al., 2019), we use SLN: Satirical and Legitimate News Database (Rubin et al., 2016), and LUN: Labeled Unreliable News Dataset (Rashkin et al., 2017) for our experiments.", "Table 1 shows the statistics.",

| Dataset | Trusted (#Docs) | Satire (#Docs) | Hoax (#Docs) | Propaganda (#Docs) |
|---|---|---|---|---|
| LUN-train | GN except APW and WPB (9,995) | The Onion (14,047) | American News (6,942) | Activist Report (17,870) |
| LUN-test | GN only APW and WPB (750) | The Borowitz Report, Clickhole (750) | DC Gazette (750) | The Natural News (750) |
| SLN | The Toronto Star, The NY Times (180) | The Onion, The Beaverton (180) | - | - |

"Table 1: Statistics of the datasets.", "Our baseline models include deep neural models: LSTM (Hochreiter and Schmidhuber, 1997), CNN (Kim, 2014), BERT+LSTM (Vaibhav et al., 2019) (BERT as the sentence encoder and then LSTM as the document encoder) and BERT (Devlin et al., 2019) (directly as the document encoder).", "We also compare our model with graph neural models: GCN and GAT based on an undirected fully-connected sentence graph, which use attention pooling or max pooling for learning the news document representation.", "For fair comparison with the previous work (Vaibhav et al., 2019), we use an LSTM to encode sentences with randomly initialized word embeddings, which is the same as in all the graph neural baselines.", "We run our model 5 times and report the micro-averaged (Precision = Recall = F1) and macro-averaged scores (Precision, Recall, F1) in all the settings, including 2-way and 4-way classification.", "2-way classification: We use the satirical and trusted news articles from LUN-train for training and LUN-test for validation, and evaluate our model on the entire SLN dataset.", "This is done to emulate a real-world scenario where we want to see the performance of our model on an out-of-domain dataset.", "4-way classification: We use the LUN-test as our in-domain test set.", "Experimental Setting.", "In our experiments, we set the number of topics in LDA to K = 100.", "Each sentence is assigned to its top P = 2 topics with the largest probabilities.", "The layer number of our heterogeneous graph convolution is set to L = 1.", "These parameters are chosen according to the best experimental results on the validation set.", "The other hyper-parameters are set the same as in the baseline (Vaibhav et al., 2019) for fair comparison.", "Specifically, all the hidden dimensions used in our model are set to M = 100.", "The node embedding dimension is N = 32.", "For GCN, GAT and CompareNet, we set the activation function to LeakyReLU with slope 0.2.", "For model training, we train the models for a maximum of 15 epochs and use the Adam optimizer with learning rate 0.001.", "We set the L2 normalization factor to 1e-6.", "Table 2 shows the results for the two-way classification between satirical and trusted news articles.",

| Model | Micro F1 | Macro Prec | Macro Recall | Macro F1 |
|---|---|---|---|---|
| CNN | 67.50 | 67.79 | 67.50 | 67.37 |
| LSTM | 81.11 | 82.12 | 81.11 | 80.96 |
| BERT+LSTM | 75.83 | 76.62 | 75.83 | 75.65 |
| BERT | 84.16 | 84.73 | 84.16 | 84.10 |
| (Rubin et al., 2016) | - | 88.00 | 82.00 | - |
| GCN + Max | 85.83 | 86.16 | 85.83 | 85.80 |
| GCN + Attn | 85.27 | 85.59 | 85.27 | 85.24 |
| GAT + Max | 86.39 | 86.44 | 86.38 | 86.38 |
| GAT + Attn (2019) | 84.72 | 85.65 | 84.72 | 84.62 |
| CompareNet | 89.17 | 89.82 | 89.17 | 89.12 |

"Table 2: 2-way classification results on the SLN dataset.", "We report only micro F1 since micro Precision = Recall = F1.", "As we can see, our proposed model CompareNet significantly outperforms all the state-of-the-art baselines in terms of all the metrics.", "Compared to the best baseline model, CompareNet improves both micro F1 and macro F1 by nearly 3%.", "We can also find that the graph neural network based models GCN and GAT all perform better than the deep neural models, including CNN, LSTM and BERT.", "The reason is that the deep neural models fail to consider the interactions between sentences, which is important for fake news detection since different interaction patterns are observed in trusted and fake news documents (Vaibhav et al., 2019).", "Our model CompareNet further improves fake news detection by effectively exploiting the topics as well as the external KB.", "The topics enrich the news representation, and the external KB offers evidence for fake news detection.", "We also present the results of the four-way classification in Table 3.", "Consistently, all graph neural models capturing sentence interactions outperform the deep neural models.", "Our model CompareNet achieves the best performance in terms of all metrics.", "We believe that our model CompareNet benefits from the topics and the external knowledge.", "In this subsection, we conduct experiments to study the effectiveness of each module in CompareNet and the way we incorporate external knowledge.", "We study the average performance of 5 runs on the LUN-test set.", "As shown in Table 4, we test the performance of CompareNet removing structured triplets, removing the entire external knowledge, removing topics, and removing both topics and external knowledge.", "In the last two rows, we further examine the constructed directed heterogeneous document graph and the designed entity comparison function.", "The variant CompareNet (undirected) does not consider the edge directions of the directed heterogeneous document graph.", "The variant model CompareNet (concatenation) replaces the entity comparison function with the simple concatenation operation.",

| Variants | Micro F1 | Macro Prec | Macro Recall | Macro F1 |
|---|---|---|---|---|
| CompareNet | 69.05 | 72.94 | 69.04 | 68.26 |
| w/o Structured Triplets | 68.74 | 69.34 | 68.79 | 68.17 |
| w/o Entity Cmp | 67.46 | 70.38 | 67.43 | 66.35 |
| w/o Topics | 67.40 | 69.75 | 67.41 | 66.73 |
| w/o Both | 65.00 | 66.75 | 64.84 | 63.79 |
| CompareNet (undirected) | 66.35 | 68.11 | 66.36 | 65.74 |
| CompareNet (concatenation) | 67.40 | 70.05 | 67.39 | 66.25 |

"Table 4: Ablation study of modules.", "As we can see from Table 4, removing the structural entity knowledge (i.e., w/o Structured Triplets) leads to a slight performance drop.", "If we remove the entire external knowledge (i.e., w/o Entity Cmp), the performance decreases by around 1.3% and 1.8% on micro F1 and macro F1, respectively.", "Removing topics (i.e., w/o Topics) comparably impairs the performance, which shows that the topic information is as important as the external knowledge.", "Removing both topics and external knowledge (i.e., w/o Both) leads to a substantial performance drop (4.0-5.0%).", "This demonstrates the importance of both topics and external knowledge.", "The variant model CompareNet (undirected), although incorporating both topics and external knowledge, achieves lower performance than CompareNet w/o Entity Cmp and CompareNet w/o Topics.", "The reason could be that CompareNet (undirected) directly aggregates the true entity knowledge into the news representation in graph convolution without considering the directed edges, which misleads the classifier when differentiating fake news.",
"This verifies the appropriateness of our constructed directed heterogeneous document graph.", "The last variant, CompareNet (concatenation), also performs worse than CompareNet w/o Entity Cmp, further indicating that directly concatenating the true entity knowledge is not a good way of incorporating entity knowledge.", "Its performance drops by around 2.0% compared to CompareNet.", "These results demonstrate the effectiveness of the carefully designed entity comparison network in CompareNet.", "Figure 3 shows the performance (micro and macro F1) of our model CompareNet on the LUN validation set with different numbers of top assigned topics P per sentence.", "As we can see clearly, micro F1 and macro F1 first consistently rise with the increase of P and then drop when P is larger than 2.", "Figure 4: Paired examples of news content and Wikipedia entity descriptions, e.g., a news claim that a rule “may easily be misused by the FDA to target and threaten the natural health community” and that “the FDA could have illegitimately used it to target practically any company it wanted to”.", "This may be because connecting too many low-probability topics introduces some noise.", "Thus, in our experiments, we set P = 2.", "To further illustrate why our model outperforms the state-of-the-art baseline GAT+Attn (Vaibhav et al., 2019), we present two real news examples from the LUN-test set.", "The baseline model GAT+Attn and the variant model CompareNet w/o Entity Cmp mistakenly predict these two examples as trusted news, while our model CompareNet successfully predicts both of them.", "As we can see from Figure 4, the content of the news document is in conflict with the entity description from Wikipedia.", "Specifically, the news about the FDA targeting and threatening the natural health community delivers a meaning contrary to the entity description that the FDA is responsible for protecting and promoting public health (https://en.wikipedia.org/wiki/Food_and_Drug_Administration).", "Similarly, the news document claiming that “mammograms are not effective at detecting breast tumors” conveys a different meaning from the entity description of mammograms.", "We believe that our model CompareNet benefits from the comparison to Wikipedia knowledge through the entity comparison network.", "We also find unsuccessful cases, since an entity could be mistakenly linked to a wrong entity in Wikipedia.", "In this paper, we propose a novel end-to-end graph neural model CompareNet which compares the news to the external knowledge for fake news detection.", "Considering that the detection of fake news is correlated with topics, in our model we also use topics to enrich the news document representation for improving fake news detection.", "Particularly, we first construct a directed heterogeneous document graph for each news document, capturing the interactions among sentences, topics and entities.", "Based on the graph, we develop a heterogeneous graph attention network for learning the topic-enriched news representation as well as the contextual entity representations that encode the semantics of the content of the news document.", "To capture the semantic consistency of the news content and the KB, the learned contextual entity representations are then compared to the KB-based entity representations with a carefully designed entity comparison network.", "Finally, the obtained entity comparison features are combined with the news representation for an improved fake news classifier.",
"Experiments on two benchmark datasets have demonstrated the effectiveness of the way we incorporate the external knowledge and topics.", "In future work, we will explore a better way to combine multi-modal data (e.g., images) and external knowledge for fake news detection.", "Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", "Linmei Hu, Tianchi Yang, Chuan Shi, Houye Ji, and Xiaoli Li. 2019. Heterogeneous graph attention networks for semi-supervised short text classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 4821-4830.", "Zhiwei Jin, Juan Cao, Yongdong Zhang, and Jiebo Luo. 2016. News verification by exploiting conflicting social viewpoints in microblogs. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 2972-2978.", "Dhruv Khattar, Jaipal Singh Goud, Manish Gupta, and Vasudeva Varma. 2019. MVAE: Multimodal variational autoencoder for fake news detection. In The World Wide Web Conference, pages 2915-2921.", "Urja Khurana and Bachelor Opleiding Kunstmatige Intelligentie. 2017. The linguistic features of fake news headlines and statements. Master's thesis, University of Amsterdam.", "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1746-1751.", "Ray Oshikawa, Jing Qian, and William Yang Wang. 2020. A survey on natural language processing for fake news detection. ArXiv, abs/1811.00770.", "Jeff Z. Pan, Siyana Pavlova, Chenxi Li, Ningxi Li, Yangmei Li, and Jinshuo Liu. 2018. Content based fake news detection using knowledge graphs. In The Semantic Web, ISWC 2018, 17th International Semantic Web Conference, volume 11136, pages 669-683.", "Martin Potthast, Johannes Kiesel, Kevin Reinartz, Janek Bevendorff, and Benno Stein. 2018. A stylometric inquiry into hyperpartisan and fake news. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 231-240.", "Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin Choi. 2017. Truth of varying shades: Analyzing language in fake news and political fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2931-2937.", "Álvaro Ibrain Rodríguez and Lara Lloret Iglesias. 2019. Fake news detection using deep learning. CoRR, abs/1910.03496.", "Victoria Rubin, Niall Conroy, Yimin Chen, and Sarah Cornwell. 2016. Fake news or truth? Using satirical cues to detect potentially misleading news. In Proceedings of the Second Workshop on Computational Approaches to Deception Detection, pages 7-17.", "The work is supported by the National Natural Science Foundation of China (No. 61806020, U1936220, 61972047, 62076245) and the Microsoft Research Asia's Star Track project." ]
[ "abstain", "abstain", "objective", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "objective", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "objective", "objective", "method", "method", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "objective", "result", "objective", "other", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "Latent alignment objectives such as CTC and AXE significantly improve non-autoregressive machine translation models.", "Can they improve autoregressive models as well?", "We explore the possibility of training autoregressive machine translation models with latent alignment objectives, and observe that, in practice, this approach results in degenerate models.", "We provide a theoretical explanation for these empirical results, and prove that latent alignment objectives are incompatible with teacher forcing.", "Latent alignment objectives, such as CTC (Graves et al., 2006) and AXE (Ghazvininejad et al., 2020a), have been recently proposed for training non-autoregressive models for machine translation (Libovický and Helcl, 2018; Saharia et al., 2020).", "These objectives use a dynamic program to comb the space of monotonic alignments between the gold target sequence and the token probabilities the model predicts, thus reducing the loss from positional misalignments and focusing on the original prediction error instead.", "For example, consider the target sequence “there is a tiny difference between pink and magenta”; if the model's distribution favors the paraphrase “there is a very small difference between pink and magenta”, substituting one token (tiny) with two (very small) will cause a misalignment, and result in a disproportionately large cross entropy loss.", "A latent alignment loss would match the predictions of both very and small with the target tiny, while aligning the rest of the sentence properly and computing a much lower loss that focuses on this particular discrepancy.", "Could latent alignments also benefit autoregressive models?", "When trained with teacher forcing (Williams and Zipser, 1989), CTC reduces to the vanilla cross entropy loss, because CTC assumes that the prediction sequence is longer than the target, and has only one valid alignment when they are equal.", "We further examine AXE, which does not share this assumption, and find that it yields a degenerate model that almost perfectly fits the training set but completely fails at inference time.", "Our analysis reveals that latent alignments and teacher forcing are fundamentally incompatible.", "We observe that there exists a valid alignment in which the prediction $p_i$ is aligned with the target $y_{i-1}$ for almost every token.", "Simultaneously, teacher forcing feeds the model with $y_{i-1}$ when computing the prediction $p_i$, encouraging the model to simply predict its input under this alignment.", "While AXE allows this alignment for equal-length prediction and target sequences, the phenomenon also occurs (theoretically) in CTC if the predictions are longer, and in fact occurs in any latent alignment objective that can align a prediction $p_j$ with a target $y_i$ where $i < j$.", "A latent alignment objective measures the compatibility between the target sequence Y and the sequence of predicted token probabilities P by considering a subspace of possible mappings between Y and P.", "Latent alignments are typically used in non-autoregressive models for automatic speech recognition and optical character recognition (Graves et al., 2006), and have recently been introduced to the task of machine translation (Libovický and Helcl, 2018; Ghazvininejad et al., 2020a; Saharia et al., 2020).", "We describe two such objectives, beginning with an overview of the common notation and framework.", "Notation: let $Y = y_1, \ldots, y_n$ be the target sequence and $P = p_1, \ldots, p_m$ be the model prediction, a sequence of m token probability distributions.", "A monotonic alignment is a function $\alpha$ that maps every target position $i \in \{1, \ldots, n\}$ to a set of one or more consecutive prediction positions $\alpha(i) \subseteq \{1, \ldots, m\}$, such that $i \leq j \Rightarrow \max \alpha(i) \leq \min \alpha(j)$.", "Objective: given an alignment $\alpha$, the objective is defined as follows: $L_{\alpha}(Y, P) = \prod_{i=1}^{n} \prod_{j \in \alpha(i)} p_j(y_i)$ (1).", "Since $\alpha$ is not provided a priori, it is necessary to aggregate over all the possible alignments (hence latent alignments), by either summation (Equation 2) or maximization (Equation 3): $L_{\Sigma}(Y, P) = \sum_{\alpha} L_{\alpha}(Y, P)$ (2); $L_{\max}(Y, P) = \max_{\alpha} L_{\alpha}(Y, P)$ (3).", "In practice, the negative log loss is minimized during training: $\ell(Y, P) = -\log L(Y, P)$ (4).", "Dynamic Programming: aggregation can be done efficiently with dynamic programming, using derivations of the forward-backward algorithm (for summation, as in CTC) or the Viterbi algorithm (for maximization, as in AXE).", "These algorithms create an aggregation matrix $A \in \mathbb{R}^{n \times m}$, where each cell represents the desired aggregation score f (sum or max) over prefixes of the target and prediction probability sequences: $A_{i,j} = L_f(Y_{\leq i}, P_{\leq j})$.", "The dynamic program combs through the space of alignments by implicitly constructing every possibility using the set of local operators defined in Table 1.", "The subspace of alignment functions that the program explores is determined by the subspace of operators it employs.", "Connectionist Temporal Classification (CTC).", "The CTC objective (Graves et al., 2006) was originally introduced for speech and handwriting recognition, where the prediction sequence P is typically much longer than the target sequence Y (m >> n).", "While computing the summation objective (Equation 2), CTC uses only the align, clone target, and delimiter operators.", "This means that CTC restricts $\alpha$ to the space of alignments where every item in P is aligned with at most one item in Y, i.e., $\alpha(i) \cap \alpha(j) = \emptyset$ for $i \neq j$.", "CTC was used in non-autoregressive machine translation by Libovický and Helcl (2018) and more recently by Saharia et al. (2020).", "In both cases, the prediction sequence was artificially inflated to be double (or more) the length of the source-language input sequence in order to simulate the m >> n condition of speech recognition.", "Aligned Cross Entropy (AXE).", "The AXE objective (Ghazvininejad et al., 2020a) is specifically designed for non-autoregressive machine translation.", "AXE finds the monotonic alignment that minimizes the cross entropy loss (i.e., maximizes the likelihood, Equation 3) in order to focus the penalty on the root errors instead of the positional shifts that result from them.", "AXE uses only the align, clone prediction, and delimiter operators.", "This combination of operators allows AXE to align prediction and target sequences of any lengths, because clone prediction inflates the prediction sequence while delimiter adds new target tokens.", "However, since AXE cannot clone target tokens, every target position i is always aligned to a single prediction position, i.e., $|\alpha(i)| = 1$.", "Figure 1 illustrates how AXE aligns the model's predictions with the target sequence.", "In an autoregressive setting, it is standard practice to use teacher forcing (Williams and Zipser, 1989); i.e., when predicting the i-th token, the model takes the prefix of the (gold) target sequence $Y_{<i}$ as input.", "This dictates that the number of predictions is identical to the number of target tokens (m = |P| = |Y| = n).", "However, CTC assumes that the prediction sequence P is typically much longer than the target sequence Y (m >> n), and can only inflate Y via clone target and delimiter (see Section 2).", "This leaves only one valid alignment when m = n: the trivial alignment $\alpha(i) = \{i\}$.", "CTC will thus default to the same objective as the standard cross entropy loss.", "Unlike CTC, the AXE objective aggregates over multiple alignments even when m = n, because it uses both the delimiter operator (which inflates Y) as well as the clone prediction operator (which inflates P).", "To apply AXE to autoregressive machine translation, we use a standard sequence-to-sequence transformer model (Vaswani et al., 2017) trained with teacher forcing, replace the simple cross entropy loss function with AXE, and add the empty token ε to the vocabulary.", "We remove the ε tokens after decoding.",
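To make the AXE recurrence concrete, here is a hedged Viterbi-style sketch built from the three operators described above (align, clone prediction, delimiter). `log_probs[j]` maps a token to its log-probability under prediction j, and `blank` is the ε token; this follows the operator definitions implied by Table 1, not the authors' released implementation.

```python
def axe_loss(target, log_probs, blank="<eps>"):
    """Negative log-likelihood of the best monotonic alignment between
    `target` (length n) and `log_probs` (length m)."""
    n, m = len(target), len(log_probs)
    NEG = float("-inf")
    # A[i][j]: best log-likelihood aligning target[:i] with predictions[:j].
    A = [[NEG] * (m + 1) for _ in range(n + 1)]
    A[0][0] = 0.0
    for j in range(1, m + 1):  # only delimiters before any target token
        A[0][j] = A[0][j - 1] + log_probs[j - 1][blank]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            y = target[i - 1]
            A[i][j] = max(
                A[i - 1][j - 1] + log_probs[j - 1][y],  # align
                A[i - 1][j] + log_probs[j - 1][y],      # clone prediction
                A[i][j - 1] + log_probs[j - 1][blank],  # delimiter
            )
    return -A[n][m]
```

Note that even with m = n, the delimiter and clone-prediction transitions keep multiple alignments reachable, which is exactly what the analysis below exploits.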
"Figure 1 illustrates how AXE aligns the model's predictions with the target sequence.", "In an autoregressive setting, it is standard practice to use teacher forcing (Williams and Zipser, 1989); i.e., when predicting the $i$-th token, the model takes the prefix of the (gold) target sequence $Y_{<i}$ as input.", "This dictates that the number of predictions is identical to the number of target tokens ($m = |P| = |Y| = n$).", "However, CTC assumes that the prediction sequence $P$ is typically much longer than the target sequence $Y$ ($m \gg n$), and can only inflate $Y$ via clone target and delimiter (see Section 2).", "This leaves only one valid alignment when $m = n$: the trivial alignment $\alpha(i) = \{i\}$.", "CTC will thus default to the same objective as the standard cross entropy loss.", "Unlike CTC, the AXE objective aggregates over multiple alignments even when $m = n$, because it uses both the delimiter operator (which inflates $Y$) and the clone prediction operator (which inflates $P$).", "To apply AXE to autoregressive machine translation, we use a standard sequence-to-sequence transformer model (Vaswani et al., 2017) trained with teacher forcing, replace the simple cross entropy loss function with AXE, and add the blank token $\epsilon$ to the vocabulary.", "We remove the $\epsilon$ tokens after decoding.", "Experiment Setup: We use fairseq (Ott et al., 2019) to train a transformer encoder-decoder (Vaswani et al., 2017) on the IWSLT'14 DE-EN dataset (Cettolo et al., 2015).", "The dataset is preprocessed and tokenized into subwords with BPE (Sennrich et al., 2016) using the scripts provided by fairseq.", "[Figure 2: Training and validation loss when using the AXE objective on IWSLT'14 DE-EN with an autoregressive model.]", "We also use the implementation's default hyperparameters: 6 layers of encoder/decoder, 512 model dimensions, 1024 hidden dimensions, 4 attention heads.", "We optimize with Adam (Kingma and Ba, 2015) for 50k steps with early stopping, using 4096 tokens per batch.", "We decode with beam search ($b = 5$) and evaluate performance with BLEU (Papineni et al., 2002).", "Results: We observe two seemingly contradictory behaviors.", "On the one hand, the model approaches a near-zero training loss within a single epoch, and we observe similar results when computing the AXE loss on unseen examples in the validation set (Figure 2).", "Meanwhile, at inference time, the model consistently produces the empty sequence (after removing all instances of $\epsilon$), scoring 0 BLEU on the test set.", "This indicates that the model has learned to game the AXE objective without actually learning anything useful about machine translation.", "What shortcut did the model learn?", "To understand how the model learns to game the AXE objective, we analyze the optimal alignments chosen by the objective, and find that they allow the model to condition on the target token when trying to predict it.", "We prove that this is the optimal solution when combining teacher forcing and AXE, and that it holds for any latent alignment objective that allows the model to align future target tokens with the current prediction.", "AXE finds a constant alignment: We examine the alignments chosen by AXE's dynamic program for a sample of training examples, and observe that they all belong to a consistent pattern: delimiter, align, align, ..., clone prediction.", "In other words, the chosen path skips the first prediction by emitting the blank token and then aligns each prediction $p_i$ with the previous target token $y_{i-1}$.", "The alignment synchronizes the positions at the end of the sequence by cloning the last prediction to compensate for the offset produced by the initial delimiter operator.",
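This learned shortcut can be mimicked with a toy stand-in for the model. Everything below is a hypothetical illustration (the real system is a trained transformer): it shows why teacher-forced scoring looks perfect under the shifted alignment while free-running decoding collapses to blanks.

```python
# Toy "model": emit the blank token first, then copy whatever token was fed
# into the decoder at the previous step (the identity shortcut).
BLANK = "<eps>"

def degenerate_step(prev_input_token: str, step: int) -> str:
    return BLANK if step == 0 else prev_input_token

def teacher_forced_outputs(target: list) -> list:
    # Under teacher forcing, decoder inputs are the gold prefix tokens.
    inputs = ["<bos>"] + target[:-1]
    return [degenerate_step(tok, i) for i, tok in enumerate(inputs)]

def free_running_outputs(length: int) -> list:
    out, prev = [], "<bos>"
    for i in range(length):
        tok = degenerate_step(prev, i)
        out.append(tok)
        prev = tok  # the model's own prediction is fed back in
    return out

target = ["there", "is", "a", "tiny", "difference"]
# [<eps>, there, is, a, tiny]: matches the target under the shifted
# delimiter/align/clone-prediction alignment, so the AXE loss is near zero.
print(teacher_forced_outputs(target))
# [<eps>, <eps>, <eps>, <eps>, <eps>]: empty after removing blanks.
print(free_running_outputs(5))
```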
"Each prediction conditions on its target: The teacher forcing algorithm conditions the prediction $p_i$ on the ground truth of the previous tokens $y_1, \ldots, y_{i-1}$ to predict the target token $y_i$.", "However, if the prediction $p_i$ is aligned with the target $y_{i-1}$, then it is effectively observing its target through the input, and only needs to learn the identity function.", "Formally, we see that for every $1 < i < n$ the prediction is trivial: $p_i(y_{i-1}) = \Pr(y_{i-1} \mid X, Y_{<i}) = \Pr(y_{i-1} \mid y_{i-1}) = 1$.", "Figure 3 demonstrates this phenomenon on an actual example using the model's predictions.", "The cost of sharing the last prediction: It is now clear that the loss should indeed be close to zero.", "Having said that, it is not infinitesimal; the last two tokens (typically . and EOS) need to be predicted from the same distribution.", "At best, this yields a loss of $-2\log(0.5)/n$, which is just below the loss observed in Figure 2 when considering that the average target sequence length in IWSLT'14 DE-EN is around $n \approx 30$.", "Inference produces empty sequences: The model essentially learns to produce the blank token in the first step, and then copy the latest token that is fed into the decoder as input.", "During training, that input is indeed the target token.", "At inference, however, it is the model's prediction from the previous timestep.", "Since the first prediction is $\epsilon$, the model will continue to predict the blank token until the end of the sequence.", "This exploit is not unique to AXE: AXE is not the only latent alignment objective that the model can game when coupled with teacher forcing.", "We would see a similar phenomenon if we were to use CTC with a longer prediction sequence; for example, if we doubled the prediction length (Libovický and Helcl, 2018) and applied a version of teacher forcing that feeds each target token twice in a row.", "In fact, every latent alignment objective that can align a prediction $p_j$ with a target $y_i$ where $i < j$ will be subject to this exploit, and will allow a model trained with teacher forcing to glimpse into the future.", "Restricting AXE to causal alignments leads to the trivial alignment: We further limit AXE to allow only causal alignments, where a prediction $p_j$ may only align with a target $y_i$ if $i \geq j$.", "After training with the restricted objective, we observe that AXE selects the trivial alignment ($i = j$) in 98% of the validation set sentences, whereas the remaining 2% contain only minor deviations from the trivial alignment, typically one delimiter quickly followed by one clone prediction.", "This work explains why latent alignment objectives are incompatible with autoregressive models trained with teacher forcing.", "That said, teacher forcing might not be the best way to train a machine translation model (Bengio et al., 2015; Lamb et al., 2016; Ghazvininejad et al., 2020b), and perhaps a future alternative could reopen the discussion on applying latent alignment objectives to autoregressive models.", "This work was supported in part by Len Blavatnik and the Blavatnik Family foundation, the Alon Scholarship, and the Tel Aviv University Data Science Center." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "other", "other", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "other" ]
[ "We consider the problem of using observational data to estimate the causal effects of linguistic properties.", "For example, does writing a complaint politely lead to a faster response time?", "How much will a positive product review increase sales?", "This paper addresses two technical challenges related to the problem before developing a practical method.", "First, we formalize the causal quantity of interest as the effect of a writer's intent , and establish the assumptions necessary to identify this from observational data.", "Second, in practice, we only have access to noisy proxies for the linguistic properties of intereste.g., predictions from classifiers and lexicons.", "We propose an estimator for this setting and prove that its bias is bounded when we perform an adjustment for the text.", "Based on these results, we introduce TEXTCAUSE , an algorithm for estimating causal effects of linguistic properties.", "The method leverages (1) distant supervision to improve the quality of noisy proxies, and (2) a pre-trained language model (BERT) to adjust for the text.", "We show that the proposed method outperforms related approaches when estimating the effect of Amazon review sentiment on semi-simulated sales figures.", "Finally, we present an applied case study investigating the effects of complaint politeness on bureaucratic response times.", "Social scientists have long been interested in the causal effects of language, studying questions like:", "How should political candidates describe their personal history to appeal to voters (Fong and Grimmer, 2016)?", "How can business owners write product descriptions to increase sales on e-commerce platforms (Pryzant et al., 2017, 2018a)?", "How can consumers word their complaints to receive faster responses (Egami et al., 2018)?", "What conversational strategies can mental health counselors use to have more successful counseling sessions (Zhang et al., 2020)?", "To study the causal effects of linguistic properties, we must reason about interventions: what would the response time for a complaint be if we could make that complaint polite while keeping all other properties (topic, pragmatics, etc.) 
"Although it is sometimes feasible to run such experiments where text is manipulated and outcomes are recorded (Grimmer and Fong, 2020), analysts typically have observational data consisting of texts and outcomes obtained without intervention.", "This paper formalizes the estimation of causal effects of linguistic properties in observational settings.", "Estimating causal effects from observational data requires addressing two challenges.", "First, we need to formalize the causal effect of interest by specifying the hypothetical intervention to which it corresponds.", "The first contribution of this paper is articulating the causal effects of linguistic properties; we imagine intervening on the writer of a text document and telling them to use different linguistic properties.", "The second challenge of causal inference is identification: we need to express causal quantities in terms of variables we can observe.", "Often, instead of the true linguistic property of interest, we have access to a noisy measurement called the proxy label.", "Analysts typically infer these values from text with classifiers, lexicons, or topic models (Grimmer and Stewart, 2013; Lucas et al., 2015; Prabhakaran et al., 2016; Voigt et al., 2017; Luo et al., 2019; Lucy et al., 2020).", "The second contribution of this paper is establishing the assumptions we need to recover the true effects of a latent linguistic property from these noisy proxy labels.", "In particular, we propose an adjustment for the confounding information in a text document and prove that this bounds the bias of the resulting estimates.", "The third contribution of this paper is TEXTCAUSE, an algorithm for estimating the causal effects of linguistic properties.", "The algorithm uses distantly supervised label propagation to improve the proxy label (Zhu and Ghahramani, 2002; Mintz et al., 2009; Hamilton et al., 2016), then BERT to adjust for the bias due to text (Devlin et al., 2018; Veitch et al., 2020).", "We demonstrate the method's accuracy with partially-simulated Amazon reviews and sales data, perform a sensitivity analysis in situations where assumptions are violated, and show an application to consumer finance complaints.", "Data and a package for performing text-based causal inferences are available at https://github.com/rpryzant/causal-text.", "Causal inference from observational data is well studied (Pearl, 2009; Rosenbaum and Rubin, 1983, 1984; Shalizi, 2013).", "In this setting, analysts are interested in the effect of a treatment T (e.g., a drug) on an outcome Y (e.g., disease progression).", "For ease, we consider binary treatments.", "The average treatment effect (ATE) on the outcome Y is $\psi = \mathbb{E}[Y; \mathrm{do}(T = 1)] - \mathbb{E}[Y; \mathrm{do}(T = 0)]$ (1), where the operation $\mathrm{do}(T = t)$ means that we hypothetically intervene and set the treatment T to some value (Pearl, 2009).", "Typically, the ATE is not the simple difference in average conditional outcomes, $\mathbb{E}[Y \mid T = 1] - \mathbb{E}[Y \mid T = 0]$.", "This is because confounding variables C are associated with both the treatment and outcome, inducing non-causal associations between them, referred to as open backdoor paths (Pearl, 2009).", "When all the confounding variables are observed, we can write the ATE in terms of observed variables using the backdoor-adjustment formula (Pearl, 2009): $\psi = \mathbb{E}_C\big[\mathbb{E}[Y \mid T = 1, C] - \mathbb{E}[Y \mid T = 0, C]\big]$ (2).", "For example, if the confounding variable C is discrete, we group the data into values of C, calculate the average difference in outcomes between the treated and untreated samples of each group, and take the average over groups.",
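To make the discrete-confounder case concrete, here is a minimal sketch of the backdoor adjustment in Equation (2); the pandas column names are our own assumptions, not part of the paper's released package.

```python
import pandas as pd

def backdoor_ate(df: pd.DataFrame) -> float:
    # psi = E_C[ E[Y | T=1, C] - E[Y | T=0, C] ]: stratify on the discrete
    # confounder C, difference the mean outcomes within each stratum, then
    # average the strata weighted by P(C = c). Assumes every stratum contains
    # both treated and untreated units (overlap).
    ate = 0.0
    for _, g in df.groupby("C"):
        diff = g.loc[g["T"] == 1, "Y"].mean() - g.loc[g["T"] == 0, "Y"].mean()
        ate += (len(g) / len(df)) * diff
    return ate
```

Averaging the strata uniformly, instead of by P(C = c), yields the naive covariate-adjusted baseline used later in the paper's experiments.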
"We are interested in the causal effects of linguistic properties.", "To formalize this as a treatment, we imagine intervening on the writer of a text, e.g., telling people to write with a property (or not).", "[Figure 1: The proposed causal model of text and outcomes.]", "We show that to estimate the effect of using a linguistic property, we must consider how a reader of the text perceives the property.", "These dual perspectives of the reader and writer are well studied in linguistics and NLP; we adapt the idea for causal inference.", "Figure 1 illustrates a causal model of the setting.", "Let W be a text document and let T (binary) be whether or not a writer uses a particular linguistic property of interest.", "For example, in consumer complaints, the variable T can indicate whether the writer intends to be polite or not.", "The outcome is a variable Y, e.g., how long it took for this complaint to be serviced.", "Let Z be other linguistic properties that the writer communicated (consciously or unconsciously) via the text W, e.g., topic, brevity, or eloquence.", "The linguistic properties T and Z are typically correlated, and both variables affect the outcome Y.", "(Literary theory argues that language is subject to two perspectives: the artistic pole, the text as intended by the author, and the aesthetic pole, the text as interpreted by the reader (Iser, 1974, 1979).", "The noisy channel model (Yuret and Yatbaz, 2010; Gibson et al., 2013) connects these poles by supposing that the reader perceives a noisy version of the author's intent.", "This duality has also been modeled in linguistic pragmatics as the difference between speaker meaning and literal or utterance meaning (Potts, 2009; Levinson, 1995, 2000).", "Gricean pragmatic models like RSA (Goodman and Frank, 2016) similarly formalize this as the reader using the literal meaning to help make inferences about the speaker's intent.)", "Consider the average treatment effect of the writer's intent, $\psi_{wri.} = \mathbb{E}[Y; \mathrm{do}(T = 1)] - \mathbb{E}[Y; \mathrm{do}(T = 0)]$, where we imagine intervening on writers and telling them to use the linguistic property of interest (setting T = 1, write politely) or not (T = 0).", "This causal effect is appealing because the hypothetical intervention is well-defined: it corresponds to an intervention we could perform in theory.", "However, without further assumptions, $\psi_{wri.}$ is not identified from the observational data.", "The reason is that we would need to adjust for the unobserved linguistic properties Z, which create open backdoor paths because they are correlated with both the treatment T and the outcome Y (Figure 1).", "To solve this problem, we observe that the reader is the one who produces outcomes.", "Readers use the text W to perceive a value for the property of interest (captured by the variable $\tilde{T}$) as well as other properties (captured by Z), then produce the outcome Y based on these perceived values.", "For example, a customer service representative reads a consumer complaint, judges whether (among other things) the complaint is polite or not, and chooses how quickly to respond based on this.", "Consider the average treatment effect on the reader's side, $\psi_{rea.} = \mathbb{E}[Y; \mathrm{do}(\tilde{T} = 1)] - \mathbb{E}[Y; \mathrm{do}(\tilde{T} = 0)]$, where we imagine intervening on the reader's perception of a linguistic property, $\tilde{T}$.", "The following result shows that we can identify the causal effect of interest, $\psi_{wri.}$, by exploiting this ATE $\psi_{rea.}$.", "Theorem 1. Let $Z = f(W)$ be a function of the words W such that $\mathbb{E}[Y \mid W] = \mathbb{E}[Y \mid \tilde{T}, Z]$.", "Suppose that the following assumptions hold:",
"1. (no unobserved confounding) W blocks backdoor paths between $\tilde{T}$ and Y; 2. (agreement of intent and perception) $\tilde{T} = T$; 3. (overlap) for some constant $\epsilon > 0$, $\epsilon < P(\tilde{T} = 1 \mid Z) < 1 - \epsilon$ with probability 1.", "(Informally, the overlap assumption says that it must be possible to perceive a property ($\tilde{T} = 1$) for all settings of Z, and that Z cannot perfectly predict $\tilde{T}$.)", "Then the ATE $\psi_{rea.}$ is identified as $\psi_{rea.} = \mathbb{E}_W\big[\mathbb{E}[Y \mid \tilde{T} = 1, Z = f(W)] - \mathbb{E}[Y \mid \tilde{T} = 0, Z = f(W)]\big]$ (Equations 5-6).", "Moreover, the ATE $\psi_{rea.}$ is equal to $\psi_{wri.}$.", "The proof is in Appendix A.", "Intuitively, the result says that the information in the text W that the reader uses to determine the outcome Y splits into two parts: the information the reader uses to perceive the linguistic property of interest ($\tilde{T}$), and the information used to perceive other properties ($Z = f(W)$).", "The information captured by the variable Z is confounding; it affects the outcome and is also correlated with the treatment $\tilde{T}$.", "Under these assumptions, adjusting for the function of text Z that captures confounding suffices to identify $\psi_{rea.}$; in Figure 1, the backdoor path $\tilde{T} \leftarrow W \rightarrow Z \rightarrow Y$ is blocked.", "(Grimmer and Fong (2020) studied a closely related setting where text documents are randomly assigned to readers who produce outcomes.", "From this experiment, they discover text properties that cause the outcome.", "Their causal identification result requires an exclusion restriction assumption, which is related to the no unobserved confounding assumption that we make.)", "Moreover, if we assume that readers correctly perceive the writer's intent, the effect $\psi_{rea.}$, which can be expressed in terms of observed variables, is equivalent to the effect that we want, $\psi_{wri.}$.", "If we observed $\tilde{T}$, the reader's perception of the linguistic property of interest, then we could proceed by estimating the effect $\psi_{rea.}$ (equivalently, $\psi_{wri.}$).", "However, in most settings, one does not observe the linguistic properties that a writer intends to use (T and Z) or that a reader perceives ($\tilde{T}$ and the information in Z).", "Instead, one uses a classifier or lexicon to predict values for this property from the text, producing a proxy label $\hat{T}$ (e.g., predicted politeness).", "For this setting, where we only have access to proxy labels, we introduce the estimand $\psi_{proxy}$, which substitutes the proxy $\hat{T}$ for the unobserved treatment $\tilde{T}$ in the effect $\psi_{rea.}$: $\psi_{proxy} = \mathbb{E}_W\big[\mathbb{E}[Y \mid \hat{T} = 1, Z = f(W)] - \mathbb{E}[Y \mid \hat{T} = 0, Z = f(W)]\big]$ (Equation 7).", "This estimand only requires an adjustment for the confounding information Z.", "We show how to extract this information using pretrained language models in Section 5.", "Prior work on causal inference with proxy treatments (Wood-Doughty et al., 2018) requires an adjustment using the measurement model $P(\hat{T} \mid \tilde{T})$, i.e., the true relationship between the proxy label $\hat{T}$ and its target $\tilde{T}$, which is typically unobserved.", "In contrast, the estimand $\psi_{proxy}$ does not require the measurement model.", "The following result shows that the estimand $\psi_{proxy}$ only attenuates the ATE that we want, $\psi_{rea.}$.", "That is, the bias due to proxy treatments is benign; it can only decrease the magnitude of the effect, but it does not change the sign.", "Theorem 2. Let $\epsilon_0 = \Pr(\hat{T} = 0 \mid \tilde{T} = 1, Z)$ and let $\epsilon_1 = \Pr(\hat{T} = 1 \mid \tilde{T} = 0, Z)$.", "Then $\psi_{proxy} = \psi_{rea.}$ minus a bias term determined by the error rates $\epsilon_0$ and $\epsilon_1$.", "The proof is in Appendix E.",
"This result shows that the proposed estimand $\psi_{proxy}$, which we can estimate, is equal to the ATE $\psi_{rea.}$ that we want, minus a bias term related to measurement error.", "In particular, if the classifier is better than chance, and the treatment effect sign is homogeneous across possible texts (i.e., it always helps or always hurts, an assumption the analyst must carefully assess), then the bias term is positive, with the degree of attenuation dependent on the error rate of the proxy label $\hat{T}$.", "The result tells us to construct the most accurate proxy treatment $\hat{T}$ possible, so long as we adjust for the confounding part of the text.", "(We prove in Appendix F that, without the adjustment for the confounding information Z, estimates of the ATE $\psi_{rea.}$ do not enjoy this guarantee.)", "This is a novel result for causal inference with proxy treatments and sidesteps the need for the measurement model.", "We introduce a practical algorithm for estimating the causal effects of linguistic properties.", "Motivated by Theorem 2, we first describe an approach for improving the accuracy of proxy labels.", "We then use the improved proxy labels, text, and outcomes to fit a model that extracts and adjusts for the confounding information in the text (Z).", "In practice, one may observe additional covariates C that capture confounding properties, e.g., the product that a review is about or the complaint type.", "We will include these covariates in the estimation algorithm.", "The first stage of TEXTCAUSE is motivated by Theorem 2, which said that a more accurate proxy can yield lower estimation bias.", "Accordingly, this stage uses distant supervision to improve the fidelity of lexicon-based proxy labels $\hat{T}$.", "In particular, we exploit an inductive bias of frequently used lexicon-based proxy treatments: the words in a lexicon correctly capture the linguistic property of interest (i.e., high precision; Tausczik and Pennebaker, 2010), but can omit words and discourse-level elements that also map to the desired property (i.e., low recall; Kim and Hovy, 2006; Rao and Ravichandran, 2009).", "Motivated by work on lexicon induction and label propagation (Hamilton et al., 2016; An et al., 2018), we improve the recall of proxy labels by training a classifier to predict the proxy label $\hat{T}$, then using that classifier to relabel examples which were labeled $\hat{T} = 0$ but look like $\hat{T} = 1$.", "Formally, given a dataset of tuples $\{(Y_i, W_i, C_i, \hat{T}_i)\}_{i=1}^{n}$, the algorithm is:", "1. Train a classifier to predict $P(\hat{T} \mid W)$, e.g., logistic regression trained with bag-of-words features and $\hat{T}$ labels.", "2. Relabel some $\hat{T} = 0$ examples (we experiment with ablating this in Appendix C): $\hat{T}^*_i = 1$ if $\hat{T}_i = 1$, and $\hat{T}^*_i = \mathbb{1}[P(\hat{T}_i = 1 \mid W_i) > 0.5]$ otherwise.",
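A minimal sketch of this two-step relabeling stage, under stated assumptions: `texts` and the 0/1 lexicon proxies `t_hat` are hypothetical inputs, and sklearn's defaults stand in for the paper's exact regularization settings (the paper reports its L2 strength in different units).

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def t_boost(texts, t_hat, threshold=0.5):
    # Step 1: fit a bag-of-words classifier on the noisy lexicon labels.
    X = CountVectorizer(max_features=2000).fit_transform(texts)
    clf = LogisticRegression(max_iter=1000).fit(X, t_hat)
    # Step 2: keep all labels that are already 1; flip a 0 example to 1 when
    # the classifier's predicted probability exceeds the threshold, boosting
    # the recall of a high-precision lexicon.
    p1 = clf.predict_proba(X)[:, 1]
    return [1 if (t == 1 or p > threshold) else 0 for t, p in zip(t_hat, p1)]
```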
"The second stage of TEXTCAUSE estimates the effect $\psi_{proxy}$ using the text W, the improved proxy labels $\hat{T}^*$, and the outcomes Y.", "This stage is motivated by Theorem 1, which described how to adjust for the confounding parts of the text.", "We approximate this confounding information in the text, $Z = f(W)$, with a learned representation $b(W)$ that predicts the expected outcomes $\mathbb{E}[Y \mid \hat{T} = t, b(W), C]$ for $t = 0, 1$ (Eq. 7).", "We use DistilBERT (Sanh et al., 2019) to produce a representation of the text $b(W)$ by embedding the text, then selecting the vector corresponding to a prepended [CLS] token.", "[Figure 2: The second stage of TEXTCAUSE adapts word embeddings to predict both of Y's potential outcomes.]", "We proceed to optimize the model so that the representation $b(W)$ directly approximates the confounding information $Z = f(W)$.", "In particular, we train an estimator for the expected conditional outcome $Q(t, b(W), C) = \mathbb{E}[Y \mid \hat{T} = t, b(W), C]$: $Q(t, b(W), C) = \sigma(M^b_t \cdot b(W) + M^c_t \cdot c + b)$, where the vector c is a one-hot encoding of the covariates C, the vectors $M^b_t \in \mathbb{R}^{768}$ and $M^c_t \in \mathbb{R}^{|C|}$ are learned, one pair for each value t of the treatment, and the scalar b is a bias term.", "Letting $\theta$ be all parameters of the model, our training objective is to minimize $\sum_{i=1}^{n} \mathcal{L}(Y_i, Q(\hat{T}^*_i, b(W_i), C_i)) + \alpha R(W_i)$ over $\theta$, where $\mathcal{L}(\cdot)$ is the cross-entropy loss and $R(\cdot)$ is the original BERT masked language modeling objective, which we include following Veitch et al. (2020).", "The hyperparameter $\alpha$ is a penalty for the masked language modeling objective.", "The parameters $M_t$ are updated on examples where $\hat{T}^*_i = t$.", "Once $Q(\cdot)$ is fitted, an estimator $\hat{\psi}_{proxy}$ for the effect $\psi_{proxy}$ (Eq. 7) is $\hat{\psi}_{proxy} = \frac{1}{n} \sum_i \big[Q(1, b(W_i), C_i) - Q(0, b(W_i), C_i)\big]$ (Equation 9), where we approximate the outer expectation over the text W with a sample average.", "Intuitively, this procedure works because the representation $b(W)$ extracts the confounding information $Z = f(W)$; it explains the outcome Y as well as possible given the proxy label $\hat{T}^*$.",
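As a schematic illustration of the Q head and the plug-in estimator of Equation 9, consider the PyTorch sketch below. It is a simplified reconstruction under stated assumptions: b(W) is treated as a precomputed DistilBERT [CLS] vector, the masked language modeling term R is omitted, and all names are ours rather than the released implementation's.

```python
import torch
import torch.nn as nn

class QHead(nn.Module):
    def __init__(self, dim_b: int = 768, dim_c: int = 2):
        super().__init__()
        # One (M^b_t, M^c_t) pair per treatment arm t in {0, 1}, plus a bias.
        self.m_b = nn.Parameter(torch.zeros(2, dim_b))
        self.m_c = nn.Parameter(torch.zeros(2, dim_c))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, t, b_w, c):
        # Q(t, b(W), C) = sigmoid(M^b_t . b(W) + M^c_t . c + b); only the arm
        # matching each example's t is used, so only that arm is updated.
        logits = (self.m_b[t] * b_w).sum(-1) + (self.m_c[t] * c).sum(-1) + self.bias
        return torch.sigmoid(logits)

def estimate_ate(q_head: QHead, b_w, c) -> float:
    # Equation 9: average Q(1, .) - Q(0, .) over the corpus.
    t1 = torch.ones(b_w.size(0), dtype=torch.long)
    t0 = torch.zeros_like(t1)
    return (q_head(t1, b_w, c) - q_head(t0, b_w, c)).mean().item()
```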
"We evaluate the proposed algorithm's ability to recover causal effects of linguistic properties.", "Since ground-truth causal effects are unavailable without randomized controlled trials, we produce a semi-synthetic dataset based on Amazon reviews where only the outcomes are simulated.", "We also conduct an applied study using real-world complaints and bureaucratic response times.", "Our key findings are: (1) more accurate proxies combined with text adjustment lead to more accurate ATE estimates; (2) naive proxy-based procedures significantly underestimate true causal effects; and (3) ATE estimates can lose fidelity when the proxy is less than 80% accurate.", "Dataset: Here we use real-world and publicly available Amazon review data to answer the question: how much does a positive product review affect sales?", "We create a scenario where positive reviews increase sales, but this effect is confounded by the type of product.", "Specifically, the text W is a publicly available corpus of Amazon reviews for digital music products (Ni et al., 2019).", "For simplicity, we only include reviews for mp3, CD, or vinyl products.", "We also exclude reviews for products worth more than $100 and reviews with fewer than 5 words.", "The observed covariate C is a binary indicator for whether the associated product is a CD or not, and we use this to simulate a confounded outcome.", "The treatment $T = \tilde{T}$ is whether that review is positive (5 stars) or not (1 or 2 stars).", "Hence, we omit reviews with 3 or 4 stars.", "Note that here it is reasonable to assume the writer's intention (T) equals the reader's perception ($\tilde{T}$), as the author is deliberately communicating their sentiment (or a very close proxy to it) with the stars.", "We use this variable to (1) simulate outcomes and (2) calculate ground-truth causal effects for evaluation.", "The proxy treatment $\hat{T}$ is computed via two strategies: (1) a randomly noised version of $\tilde{T}$ fixed to 93% accuracy (to resemble a reasonable classifier's output; later called proxy-noised), and (2) a binary indicator for whether any words in W overlap with a positive sentiment lexicon (Liu et al., 2010).", "The outcome $Y \sim \mathrm{Bernoulli}\big(\sigma(\beta_c(\pi(C) - \beta_o) + \beta_t \tilde{T} + N(0, \gamma))\big)$ represents whether a product received a click or not.", "The parameter $\beta_c$ controls confound strength, $\beta_t$ controls treatment strength, $\beta_o$ is an offset, and the propensity $\pi(C) = P(\tilde{T} = 1 \mid C)$ is estimated from data.", "The final dataset consists of 17,000 examples.",
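Before turning to the protocol and results, here is a sketch of the outcome simulation just described. It is illustrative only: the beta and gamma values are placeholders, N(0, gamma) is read as a Gaussian with standard deviation gamma (an assumption; the paper could intend the variance), and C is assumed binary, matching the CD indicator.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def simulate_outcomes(T, C, b_c=1.0, b_t=1.0, b_o=0.5, gamma=0.1, seed=0):
    # T: true treatment (0/1 array); C: binary covariate (0/1 array).
    rng = np.random.default_rng(seed)
    # Propensity pi(C) = P(T = 1 | C), estimated from the data itself.
    pi = np.array([T[C == c].mean() for c in (0, 1)])[C]
    p = sigmoid(b_c * (pi - b_o) + b_t * T + rng.normal(0.0, gamma, size=T.shape))
    return rng.binomial(1, p)  # simulated click outcomes Y
```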
produced higher quality ATE estimates) than the baselines.", "This result is robust to varying levels of noise and treat-ment/confound strength.", "Indeed TEXTCAUSE 's estimates were on average within 2% of the semi-oracle.", "Furthermore, these results support Theorem 2: methods which adjusted for the text always attenuated the true ATE.", "Adjusting for the confounding parts of text is crucial: the results show that estimators that adjust for the covariates C but not the text perform poorly, sometimes even worse than the unadjusted estimator naive .", "Does it always help to adjust for the text?", "We consider the case where confounding information in the text causes a naive estimator which does not adjust for this information ( naive ) to have the opposite sign of the true effect .", "Does our proposed text adjustment help in this situation?", "Theorem 2 says it should, because proxy estimates are bounded in [0, ].", "This ensures that the most important of bits, the bit of directional information, is preserved.", "Table 2 shows results from such a scenario.", "We see that the true ATE of T , , has a strong negative effect, while the naive estimator naive+C produces a positive effect.", "Adding an adjustment for the confounding parts of the text with TEXTCAUSE Noise: Low High Treatment: Low High Low High Mean delta Confounding: Low High Low High Low High Low High from oracle oracle ( ) 9.92 10.03 18.98 19.30 8.28 8.28 16.04 16.19 0.0 semi-oracle ( matrix ) 9.73 9.82 18.77 19.08 8.25 8.28 16.02 16.21 0.13 unadjusted ( naive ) 6.84 7.66 13.53 14.50 5.79 6.42 11.51 12.26 3.58 proxy-lex ( naive+C ) 6.67 6.73 12.88 13.09 5.65 5.67 10.98 11.12 4.43 proxy-noised ( naive+C ) 8.25 8.27 15.90 16.12 6.69 6.72 13.22 13.33 2.35 + T-boost ( naive+C ) 8.11 8.16 15.53 15.73 6.78 6.80 13.19 13.32 2.51 + W-Adjust ( proxy ) 7.82 8.57 14.96 16.13 6.62 7.22 12.95 13.76 2.39 + T-boost + W-Adjust 9.42 10.27 18.20 19.32 7.85 8.53 15.45 16.30 0.11 (TEXTCAUSE , proxy ) Table 1: ATE estimates: expected change in click probabilities if one were to manipulate the sentiment of a review from negative to positive.", "successfully brings the proxy-based estimate to 0, which is indicative of the bounded behavior that Theorem 2 suggests.", "Sensitivity analysis.", "In Figure 3 we synthetically vary the accuracy of a proxy T by dropping random subsets of the data.", "This is to evaluate the robustness of various estimation procedures.", "We would expect (1) methods that do not adjust for the text to behave unpredictably, and (2) methods that do adjust for the text to be more robust.", "These results support our first hypothesis: boosting treatment labels without text adjustment can behave unpredictably, as proxy-lex and T-boost both overestimate the true ATE.", "In other words, the predictions of both estimators grow further from the oracle as T 's accuracy increases.", "the text ( W-Adjust and TEXTCAUSE ) consistently attenuate the true ATE, which is in line with Theorem", "2. 
"However, we find that TEXTCAUSE, which makes use of T-boost and W-Adjust, may not always provide the highest-quality ATE estimates in finite data regimes.", "Notably, when $\hat{T}$ is less than 90% accurate, both proxy-lex and T-boost can produce higher-quality estimates than the proposed TEXTCAUSE algorithm.", "Note that all estimates quickly lose fidelity as the proxy $\hat{T}$ becomes noisier.", "It rapidly becomes difficult for any method to recover the true ATE when the proxy $\hat{T}$ is less than 80% accurate.", "We proceed to offer an applied pilot study which seeks to answer: how does the perceived politeness of a complaint affect the time it takes for that complaint to be addressed?", "We consider complaints filed with the Consumer Financial Protection Bureau (CFPB), a government agency which solicits and handles complaints about financial products (https://www.consumer-action.org/downloads/english/cfpb_full_dbase_report.pdf).", "When they receive a complaint, it is forwarded to the relevant company.", "The time it takes for that company to process the complaint is recorded.", "Some submissions are handled quickly (< 15 days) while others languish.", "This 15-day threshold is our outcome Y.", "We additionally adjust for an observed covariate C that captures what product and company the complaint is about (mortgage or bank account).", "To reduce other potentially confounding effects, we pair each Y = 1 complaint with the most similar Y = 0 complaint according to cosine similarity of TF-IDF vectors (Mozer et al., 2020).", "From this we select the 4,000 most similar pairs, for a total of 8,000 complaints.", "For our treatment (politeness), we use a state-of-the-art politeness detection package geared towards social scientists (Yeomans et al., 2018).", "This package reports a score from a classifier trained using expert features of politeness and a hand-labeled dataset.", "We take examples in the top and bottom 25% of the scoring distribution to be our $\hat{T} = 1$ and $\hat{T} = 0$ examples and throw out all others.", "The final dataset consists of 4,000 complaints, topics, and outcomes.", "We use the same training procedure and hyperparameters as Section 6.1, except now W-Adjust is trained for 9 epochs and each cross-validation fold is of size 2,000.", "Results are given in Table 3 and suggest that perceived politeness may have an effect on reducing response time.",
Table 3 (effect size can vary across estimation methods, with methods that adjust for more information producing larger ATEs):
Estimator                                     ATE     SE
unadjusted (psi_naive)                        3.01    0.3
proxy-lex (psi_naive+C)                       4.03    0.4
+ T-boost (psi_naive+C)                       9.64    0.5
+ W-Adjust (psi_proxy)                        6.30    1.6
TEXTCAUSE (+ T-boost + W-Adjust, psi_proxy)  10.30    2.1
"We find that the effect size increases as we adjust for increasing amounts of information.", "The unadjusted approach, which does not perform any adjustment, produces the smallest ATE.", "proxy-lex, which only adjusts for covariates, indicated the second-smallest ATE.", "The W-Adjust and TEXTCAUSE methods, which adjust for covariates and text, produced the largest ATE estimates.", "This suggests that there is a significant amount of confounding in real-world studies, and the choice of estimator can yield highly varying conclusions.", "Our focus fits into a body of work on text-based causal inference that includes text as treatments (Egami et al., 2018; Fong and Grimmer, 2016; Grimmer and Fong, 2020; Wood-Doughty et al., 2018), text as outcomes (Egami et al., 2018), and text as confounders (Roberts et al.
(2020); Veitch et al. (2020); see Keith et al. (2020) for a review of that space).", "We build on Veitch et al. (2020), which proposed a BERT-based text adjustment method similar to our W-Adjust algorithm.", "This paper is related to work by Grimmer and Fong (2020), which discusses assumptions needed to estimate causal effects of text-based treatments in randomized controlled trials.", "There is also work on discovering causal structure in text: as topics with latent variable models (Fong and Grimmer, 2016), and as words and n-grams with adversarial learning (Pryzant et al., 2018b) and residualization (Pryzant et al., 2018a).", "There is also a growing body of applications in the social sciences (Hall, 2017; Olteanu et al., 2017; Saha et al., 2019; Mozer et al., 2020; Karell and Freedman, 2019; Sobolev, 2019; Zhang et al., 2020).", "This paper also fits into a long-standing body of work on measurement error and causal inference (Pearl, 2012; Kuroki and Pearl, 2014; Buonaccorsi, 2010; Carroll et al., 2006; Shu and Yi, 2019; Oktay et al., 2019; Wood-Doughty et al., 2018).", "Most of this work deals with proxies for confounding variables.", "The present paper is most closely related to Wood-Doughty et al. (2018), which also deals with proxy treatments, but instead proposes an adjustment using the measurement model.", "This paper addressed a setting of interest to NLP and social science researchers: estimating the causal effects of latent linguistic properties from observational data.", "We clarified critical ambiguities in the problem, showed how causal effects can be interpreted, presented a method, and demonstrated how it offers practical and theoretical advantages over existing practice.", "We also release a package for performing text-based causal inferences.", "This work opens new avenues for further conceptual, methodological, and theoretical refinement.", "This includes improving non-lexicon-based treatments, heterogeneous effects, overlap violations, counterfactual inference, ethical considerations, extensions to higher-dimensional outcomes and covariates, and benchmark datasets based on paired randomized controlled trials and observational studies.", "This project received partial funding from the Stanford Data Science Institute, NSF Award IIS-1514268, and a Google Faculty Research Award.", "We thank Justin Grimmer, Stefan Wager, Percy Liang, Tatsunori Hashimoto, Zach Wood-Doughty, Katherine Keith, the Stanford NLP Group, and our anonymous reviewers for their thoughtful comments and suggestions." ]
[ "method", "abstain", "abstain", "objective", "objective", "abstain", "objective", "result", "abstain", "objective", "objective", "abstain", "objective", "objective", "objective", "objective", "method", "abstain", "objective", "abstain", "objective", "objective", "result", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "other", "other", "other", "method", "abstain", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "method", "other", "other", "abstain", "abstain", "other", "abstain", "objective", "method", "objective", "abstain", "other", "other" ]
[ "Research has shown that neural models implicitly encode linguistic features, but there has been no research showing how these encodings arise as the models are trained.", "We present the first study on the learning dynamics of neural language models, using a simple and flexible analysis method called Singular Vector Canonical Correlation Analysis (SVCCA), which enables us to compare learned representations across time and across models, without the need to evaluate directly on annotated data.", "We probe the evolution of syntactic, semantic, and topic representations and find that part-of-speech is learned earlier than topic; that recurrent layers become more similar to those of a tagger during training; and embedding layers less similar.", "Our results and methods could inform better learning algorithms for NLP models, possibly to incorporate linguistic information more effectively.", "Large neural networks have a notorious capacity to memorize training data (Zhang et al., 2016), but their high accuracy on many NLP tasks shows that they nonetheless generalize.", "One apparent explanation for their performance is that they learn linguistic generalizations even without explicit supervision for those generalizationsfor example, that subject and verb number agree in English (Linzen et al., 2016); that derivational suffixes attach to only specific parts of speech (Kementched-jhieva and Lopez, 2018); and that short segments of speech form natural clusters corresponding to phonemes (Alishahi et al., 2017).", "These studies tell us that neural models learn to implicitly represent linguistic categories and their interactions.", "But how do they learn these representations?", "One clue comes from the inspection of multilayer models, which seem to encode lexical categories in lower layers, and more contextual categories in higher layers.", "For example, Blevins et al. (2018) found that a word's part of speech (POS) is encoded by lower layers, and the POS of its syntactic parent is encoded by higher layers; while Belinkov et al. (2018) found that POS is encoded by lower layers and semantic category is encoded by higher layers.", "More generally, the most useful layer for an arbitrary NLP task seems to depend on how high-level the task is (Peters et al., 2018).", "Since we know that lower layers in a multi-layer model converge to their final representations more quickly than higher layers (Raghu et al., 2017), it is likely that models learn local lexical categories like POS earlier than they learn higher-level linguistic categories like semantic class.", "How and when do neural representations come to encode specific linguistic categories?", "Answers could explain why neural models work and help us improve learning algorithms.", "We investigate how representations of linguistic structure are learned over time in neural language models (LMs), which are central to NLP: on their own, they are used to produce contextual representations of words for many tasks (e.g. 
"We use a simple and flexible method, Singular Vector Canonical Correlation Analysis (SVCCA; Raghu et al., 2017), which allows us to compare representations from our LM at each epoch of training with representations of other models trained to predict specific linguistic categories.", "We discover that lower layers initially discover features shared by all predictive models, but lose these features as the LM explores more specific clusters.", "We demonstrate that different aspects of linguistic structure are learned at different rates within a single recurrent layer, acquiring POS tags early but continuing to learn global topic information later in training.", "We model the probability distribution over a sequence of tokens $x_1 \ldots x_{|x|}$ with a conventional two-layer LSTM LM.", "The pipeline from input $x_t$ at time step $t$ to a distribution over $x_{t+1}$ is described in Formulae (1)-(4).", "At time step $t$, input word $x_t$ is embedded as (1) $h^e_t$, which is input to a two-layer LSTM, producing outputs (2) $h^1_t$ and (3) $h^2_t$ at these layers, along with cell states $c^1_t$ and $c^2_t$.", "A softmax layer converts $h^2_t$ to a distribution from which (4) $x_{t+1}$ is sampled: $h^e_t = \mathrm{embedding}(x_t)$ (1); $h^1_t, c^1_t = \mathrm{LSTM}_1(h^e_t, h^1_{t-1}, c^1_{t-1})$ (2); $h^2_t, c^2_t = \mathrm{LSTM}_2(h^1_t, h^2_{t-1}, c^2_{t-1})$ (3); $x_{t+1} \sim \mathrm{softmax}(h^2_t)$ (4).", "Each function can be thought of as a representation or embedding of its discrete input; hence $h^e_t$ is a representation of $x_t$, and, due to the recursion in (2), $h^1_t$ is a representation of $x_1 \ldots x_t$.", "To inspect our language model for learned linguistic categories, we will use a collection of tagging models, designed to mimic the behavior of our language model but predicting the next tag rather than the next word.", "Given $x_1 \ldots x_{|x|}$, we model a corresponding sequence of tags $y_1 \ldots y_{|x|}$ using a one-layer LSTM.", "(Our limited labeled data made this more accurate on topic tagging than another two-layer LSTM, so this architecture does not directly parallel the LM.)", "$h^{e\prime}_t = \mathrm{embedding}^\prime(x_t)$ (5); $h^{1\prime}_t, c^{1\prime}_t = \mathrm{LSTM}^\prime(h^{e\prime}_t, h^{1\prime}_{t-1}, c^{1\prime}_{t-1})$ (6); $y_{t+1} \sim \mathrm{softmax}^\prime(h^{1\prime}_t)$ (7).", "We will also discuss input taggers, which share this architecture but instead sample $y_t$, the tag of the most recently observed word.",
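A schematic PyTorch rendering of the two architectures in Formulae (1)-(7); the hidden size and names are illustrative assumptions rather than the paper's stated configuration.

```python
import torch.nn as nn

class LstmLM(nn.Module):
    def __init__(self, vocab: int, dim: int = 650):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)              # h^e_t,  Eq. (1)
        self.lstm = nn.LSTM(dim, dim, num_layers=2,      # h^1_t, h^2_t,
                            batch_first=True)            # Eqs. (2)-(3)
        self.out = nn.Linear(dim, vocab)                 # softmax head, Eq. (4)

    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h)                               # logits over x_{t+1}

class LstmTagger(nn.Module):
    def __init__(self, vocab: int, n_tags: int, dim: int = 650):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)              # Eq. (5)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)  # Eq. (6)
        self.out = nn.Linear(dim, n_tags)                # Eq. (7)

    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h)                               # logits over y_{t+1}
```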
"SVCCA is a general method to compare the correlation of two vector representations.", "Let $d_A$ and $d_B$ be their dimensions.", "For $N$ data points we have two distinct views, given by matrices $A \in \mathbb{R}^{N \times d_A}$ and $B \in \mathbb{R}^{N \times d_B}$.", "We project these views onto a shared subspace in two steps:", "1. Use Singular Value Decomposition (SVD) to reduce matrices $A$ and $B$ to lower-dimensional matrices $A'$ and $B'$, respectively.", "This is necessary because many dimensions in the representations are noisy, and in fact cancel each other out (Frankle and Carbin, 2018).", "SVD removes dimensions that were likely to be less important in the original representations from $A$ and $B$; in keeping with Raghu et al. (2017), we retain enough dimensions to keep 99% of the variance in the data.", "2. Use Canonical Correlation Analysis (CCA) to project $A'$ and $B'$ onto a shared subspace, maximizing the correlation of the projections.", "Formally, CCA identifies vectors $w, v$ to maximize $\rho = \frac{\langle w^\top A', v^\top B' \rangle}{\lVert w^\top A' \rVert \, \lVert v^\top B' \rVert}$.", "We treat these $w, v$ as new basis vectors, computing the top $d_C$ (a hyperparameter) such basis vectors to form projection matrices $W \in \mathbb{R}^{d_C \times d_{A'}}$ and $V \in \mathbb{R}^{d_C \times d_{B'}}$.", "The resulting projections $WA'$ and $VB'$ map onto a shared subspace where the representations of each datapoint from $A'$ and $B'$ are maximally correlated.", "Intuitively, the correlation will be high if both representations encode the same information, and low if they encode unrelated information.",
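The two steps can be sketched compactly with numpy and sklearn. This is an illustrative reconstruction: the 99% variance threshold follows the text, but the mean-centering, the cap on d_C, and the use of sklearn's iterative CCA are our assumptions rather than details given in the paper.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def svcca(A: np.ndarray, B: np.ndarray, var_kept: float = 0.99, d_c: int = 20) -> float:
    def svd_reduce(M):
        # Step 1: keep the top singular directions covering `var_kept` variance.
        M = M - M.mean(axis=0)
        U, S, _ = np.linalg.svd(M, full_matrices=False)
        k = int(np.searchsorted(np.cumsum(S ** 2) / np.sum(S ** 2), var_kept)) + 1
        return U[:, :k] * S[:k]

    A_r, B_r = svd_reduce(A), svd_reduce(B)
    # Step 2: CCA onto a shared subspace of up to d_C directions.
    d = min(d_c, A_r.shape[1], B_r.shape[1])
    A_c, B_c = CCA(n_components=d, max_iter=2000).fit(A_r, B_r).transform(A_r, B_r)
    corrs = [np.corrcoef(A_c[:, i], B_c[:, i])[0, 1] for i in range(d)]
    return float(np.mean(corrs))  # summary SVCCA similarity score
```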
"Figure 1 illustrates how we use SVCCA to compare representation $h^2_t$ of our language model with the recurrent representation of a tagger, $h^{1\prime}_t$.", "[Figure 1: SVCCA used to compare layer $h^2$ of a language model and layer $h^{1\prime}$ of a tagger.]", "In practice, we run over all time steps in a test corpus, rather than a single time step as illustrated.", "We trained our LM on a corpus of tokenized, lowercased English Wikipedia (70/10/20 train/dev/test split).", "To reduce the number of unique words in the corpus, we excluded any sentence with a word type appearing fewer than 100 times.", "Words appearing fewer than 100 times in the resulting training set are replaced with an unknown token.", "The resulting training set has over 227 million tokens of 20K types.", "We train for 50 epochs to maximize cross-entropy, using a batch size of 40, a dropout ratio of 0.2, and a sequence length of 35.", "The optimizer is standard SGD with gradients clipped at 0.25, with the learning rate quartered when validation loss increases.", "The result of training is shown in Figure 2, which illustrates the dips in loss when the learning rate changes.", "All experiments on the LM throughout training are conducted by running the model at the end of each epoch in inference mode over the test corpus.", "To understand the representations learned by our LM, we compare them with the internal representations of tagging models, using SVCCA.", "Where possible, we use coarse-grained and fine-grained tagsets to account for effects from the size of the tagset.", "Table 1 illustrates our tagsets.", "POS tagging: For syntactic categories, we use POS tags, as in Belinkov et al. (2017).", "As a coarse-grained tagset, we use silver Universal Dependency Parse (UDP) POS tags automatically added to our Wikipedia corpus with spacy (https://spacy.io/).", "We also use a corpus of fine-grained, human-annotated Penn Treebank POS tags from the Groningen Meaning Bank (GMB; Bos et al., 2017).", "Semantic tagging: We follow Belinkov et al. (2018) in representing word-level semantic information with silver SEM tags (Bjerva et al., 2016).", "SEM tags disambiguate POS tags in ways that are relevant to multilingual settings.", "For example, the comma is not assigned a single tag as punctuation, but has distinct tags according to its function: conjunction, disjunction, or apposition.", "The 66 fine-grained SEM tag classes fall under 13 coarse-grained tags, and an 'unknown' tag.", "Global topic: For topic, we classify each word of the sequence by its source Wikipedia article; for example, every word in the Wikipedia article on Trains is labeled Trains.", "This task assesses whether the network encodes the global topic of the sentence.", "[Figure 3: SVCCA score between representations at each epoch and from the final trained LM.]", "UDP silver POS and topic information use the same corpus, taken from the 100 longest articles in Wikipedia, randomly partitioned in a 70/10/20 train/dev/test split.", "Each token is tagged with POS and with the ID of the source article.", "The corpus is taken from the LM training data, which may increase the similarity between the tag model and the LM.", "Because both tag predictors are trained and tested on the same domain as the LM, they can be easily compared in terms of their similarity to the LM representation.", "Though the SEM corpus and the PTB corpus are from different domains than the Wikipedia training data, we compare their activations on the same 191K-token, 100-article test corpus.", "Table 2 describes the training and validation corpus statistics for each tagging task.", "Note that topic and UDP POS both apply to the same en-Wikipedia corpus, but PTB POS and SEM use two different unaligned sets from the GMB corpus.", "A benefit of SVCCA is its flexibility: it can compute the correlation of a hidden representation to any other vector.", "Raghu et al. (2017) used it to understand learning dynamics by comparing a learned representation to snapshots of the same representation at different epochs during training.", "We use a similar experiment to establish the basic learning dynamics of our model.", "In our shallow 2-level model, activations at $h^1$ converge slightly after $h^2$ (Figure 3).", "This differs from the results of Raghu et al. (2017), who found that a 5-layer stacked LSTM LM exhibits faster convergence at lower layers, but this difference may be attributed to our much larger training data, which gives our model sufficient training data at early epochs.",
"Empirical upper bounds: Our main experiments will test the rate at which different linguistic categories are learned by different layers, but to interpret the results, we need to understand the behaviour of SVCCA for these models.", "In theory, SVCCA scores can vary from 0 for no correlation to 1 for perfect correlation.", "But in practice, these extreme cases will not occur.", "To establish an empirical upper bound on correlation, we compared the similarity at each epoch of training to the frozen final state of an LM with identical architecture but different initialization, trained on the same data (Figure 4; this experiment is similar to the comparisons of randomly initialized models by Morcos et al. (2018)).", "The correlations increase over time as expected, but to a maximum near 0.64; we don't expect correlations between our LM and other models to exceed this value.", "We explore corresponding lower bounds in our main experiments below.", "Correlations between different layers: Next we examine the correlation between different layers of the same model over time (Figure 5).", "We observe that, while correlation increases over time, in general closer layers are more similar, and they are less correlated than they are with the same layer of a differently initialized model.", "This supports the idea that we should compare recurrent layers with recurrent layers, because their representations play similar roles within their respective architectures.", "SVCCA vs. diagnostic classifiers: A popular method to analyze learned representations is to use a diagnostic classifier (Belinkov et al., 2017; Hupkes et al., 2018) or probe (Conneau et al., 2018), a separate model that is trained to predict a linguistic category of interest, $y_t$, from an arbitrary hidden layer $h_t$.", "Diagnostic classifiers are widely used (Belinkov et al., 2018; Giulianelli et al., 2018).",
"But if a diagnostic classifier is trained on enough examples, then random embeddings as input representations often outperform any pretrained intermediate representation (Wieting and Kiela, 2019; Zhang and Bowman, 2018).", "This suggests that diagnostic classifiers may work simply by memorizing the association between an embedding and the most frequent output category associated with that embedding; since for many words their category is (empirically) unambiguous, this may give an inflated view of just how much a model understands about that category.", "Our use of SVCCA below will differ from the use of diagnostic classifiers in an important way.", "Diagnostic classifiers use the intermediate representations of the LM as inputs to a tagger.", "A representation is claimed to encode, for example, POS if the classifier accurately predicts it; in other words, the question is whether the classifier can decode the property from the representation.", "We will instead evaluate the similarity between the representations in an LM and in an independently trained tagger.", "The intuition behind this is that, if the representation of our LM encodes a particular category, then it must be similar to the representation of a model that is specifically trained to predict that category.", "A benefit of the approach is that similarity can be evaluated on any dataset, not only one that has been labeled with the linguistic categories of interest.", "[Figure 6: Learning dynamics interpreted with diagnostic classifiers labeling input word tag $y_t$.]", "Another distinction from the typical use of diagnostic classifiers is that probes are usually used to decode tag information about the context or the most recent input from the hidden state at the current step.", "Because the hidden representation at time $t$ is meant to encode predictive information about the target word at time $t+1$, we treat it as encoding a prediction about the tag of the target word.", "To understand the empirical strengths and weaknesses of these approaches, we compare the use of SVCCA and diagnostic classifiers in understanding learning dynamics.", "In other words, we ask: is our first conceptual shift (to SVCCA) necessary?", "To test this, we use the same model as Belinkov et al. (2017), which classifies an arbitrary representation using a ReLU followed by a softmax layer.", "To be consistent with Belinkov et al. (2017), we use $y_t$ as their target label.", "We repeat their method in this manner (Figure 6), as well as applying our second modification, in which we instead target the label $y_{t+1}$ (Figure 7).",
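For concreteness, here is a sketch of such a diagnostic classifier; the layer sizes and tag count are placeholder assumptions. The probe is trained on frozen LM activations, with either $y_t$ or $y_{t+1}$ as the label depending on the variant.

```python
import torch.nn as nn

class DiagnosticClassifier(nn.Module):
    def __init__(self, rep_dim: int = 650, hidden: int = 512, n_tags: int = 17):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(rep_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_tags),  # CrossEntropyLoss supplies the softmax
        )

    def forward(self, h):  # h: frozen LM activations of shape (batch, rep_dim)
        return self.net(h)
```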
(2016).", "These latter experiments, denoted by the dotted lines of Figure 8, show how much of the similarity between models is caused by their ability to memorize arbitrary associations.", "Note that the resulting scores are nonzero, likely because the linguistic structure of the input shapes representations even when the output is random, due to the memorization phase of training (Shwartz-Ziv and Tishby, 2017).", "The strongest similarity at recurrent layers belongs to the most local property, the UDP POS tag.", "Both coarseand fine-grained semantic tags, which rely on longer range dependencies, fall below UDP POS consistently.", "Topic, which is global to an entire document, is the least captured and the slowest to stabilize.", "Indeed, correlation with true Figure 8: SVCCA correlation scores between the LM predicting x t +1 and the tag model predicting y t +1 .", "topic falls consistently below the score for a model trained on randomized topic tags, implying that early in training the model has removed the context necessary to identify topic (below even the inadequate contextual information memorized by a model with random labels), which depends on the general vocabulary in a sentence rather than a local sequence.", "Over time correlation rises, possibly because the model permits more long-distance context to be encoded.", "Khandelwal et al. (2018) found that LSTMs remember content words like nouns for more time steps than they remember function words like prepositions and articles.", "We hypothesize that the LM's slower stabilization on topic is related to this phenomenon, since it must depend on content words, and its ability to remember them increases throughout training.", "The encoder layer exhibits very different patterns.", "Because the representation produced by the encoder layer is local to the word, the nuances that determine how a word is tagged in context cannot be learned.", "The encoder layers are all highly similar to each other, which suggests that the unigram representations produced by the encoder are less dependent on the particular end task of the neural network.", "Similarity between the encoders declines over time as they become more specialized towards the language modeling task.", "This decline points to some simple patterns which are learned for all language tasks, but which are gradually replaced by representations more useful for language modeling.", "This process may even be considered a naturally occurring analog to the common practice of initializing the encoder layer as word embeddings pretrained an unrelated task such as skipgram or CBOW (Mikolov et al., 2013).", "It seems that the easy' word properties, which immediately improve performance, are similar regardless of the particular language task.", "At h 1 , the correlation shows a clear initial decline in similarity for all tasks.", "This seems to point to an initial representation that relies on simple shared properties, which in the first stage of training is gradually dissolved before the layer begins to converge on a structure shared with each tag predictor.", "It may also be linked to the information bottleneck learning phases explored by Shwartz-Ziv and Tishby (2017).", "They suggest that neural networks learn by first maximizing the mutual information between the input and internal representation, then minimizing the mutual information between the internal representation and output.", "The network thus initially learns to effectively represent the input, then compresses this representation, 
"If the LM begins by maximizing mutual information with the input, then because the input is identical for the LM and the tag models, this may lead to these similar initial representations, followed by a decline in similarity as the compression narrows to properties specific to each task.", "Our second conceptual shift is to focus on output tag prediction: asking what a representation encodes about the next output word, rather than what it has encoded about words it has already observed in the input.", "What effect does this have?", "Since we already studied output tags in the previous set of experiments, here we consider input tags, in the style of most diagnostic classifier analysis (Figure 9).", "The learning dynamics are similar to those for tag prediction, but the UDP POS tagger decreases dramatically in all correlations while the GMB-trained taggers often increase slightly.", "While the shapes of the lines are similar, UDP POS no longer consistently dominates the other tasks in recurrent layer correlation.", "Instead, we find that the more granular PTB POS tags lead to the most similar representations.", "We find clear patterns in the encoding of linguistic structure with SVCCA, in contrast to the weaker results from a less responsive diagnostic classifier.", "Because SVCCA proves so much more sensitive than the diagnostic classifiers currently in use, we believe that future work on measuring the encoding of linguistic structure should use the similarity of individual modules from independently trained tag predictors rather than the performance of tag predictors trained on a particular representation.", "This system should also be of interest because it is efficient.", "To train a diagnostic classifier, we must run a forward pass of the LM for each forward pass of the auxiliary model, while SVCCA only requires the LM to run on the test set.", "This memorization-compression learning pattern parallels the memorization-generalization of the first half of the U-shaped curve exhibited by human children learning irregular word forms.", "Kirov and Cotterell (2018) observe similar patterns when artificially modeling inflection.", "The GMB-trained taggers are PTB POS, SEM (fine), and SEM (coarse); Figure 9 shows SVCCA correlation scores between LM activations when predicting x_{t+1} and tagger activations when labeling y_t.", "A further efficiency gain is particular to studying learning dynamics: we train only one tagger and compare it to different versions of the LM over training, but for standard probing, we must train a new version of each layer's tagger at each epoch.", "Our SVCCA experiments in Figure 8 ran in hours, while the diagnostic classifier experiments in Figure 7 ran for days.", "Our method holds another, more subtle advantage.", "Our analysis provides an alternative view of what it means for a model to encode some linguistic trait.", "The literature on analyzing neural networks includes a broad spectrum of interpretations about what it means to encode a property.", "At one end of the spectrum lies the purely informational view (e.g., mutual information; Noshad and Hero III, 2018).", "Mutual information is a very flexible view, but it requires us to compute theoretical information content, which in practice can only be estimated.", "Furthermore, information can be represented without being used, as shown by Vanmassenhove et al. (2017), who found that NMT systems often predicted tense according to a diagnostic classifier but did not produce the correct tense as output.",
"The other end of the spectrum is focused on the structure of the representation space (e.g., the features and the property in question are linearly similar; Alishahi et al., 2017).", "Analyzing structural similarity should remedy the shortcomings of the informational view, but most intermediate representations are not targeted to extract the property in question through a linear transformation, and failing to be interpretable through such simple extraction should not be equated with a failure to encode that property.", "Most of the literature on analyzing representations probes with a more complex architecture, seeking the flexibility of mutual information together with the concreteness and tractability of the structural view; instead, it obscures the strict information view without offering interpretable information about the structure, because the architecture of a diagnostic classifier affects its performance.", "It should not be surprising that representational quality as measured by such systems is a poor indicator of translation quality (Cífka and Bojar, 2018).", "SVCCA, in contrast, is a structural view that does not directly compare an activation that targets word prediction with a particular tag, but instead compares that activation with one targeting the prediction of the tag.", "Let us consider a specific common probing method.", "What do we learn about the LM when a feedforward network cannot extract tag information directly from the embedding layer, but can from a recurrent layer?", "It may be tempting to conclude that tag information relies heavily on context, but consider some alternative explanations.", "If the embedding encodes the tag in a form meant to be interpreted by a recurrent layer, a feedforward network may not be capable of representing the function needed to extract that tag, because it does not have access to a context vector for aiding interpretation of the hidden layer.", "Perhaps its activation functions cover a different range of outputs.", "By directly comparing LSTM layers to LSTM layers and embedding layers to embedding layers, we respect the shape of their outputs and the role of each module within the network in our analysis.", "The results of our analysis imply that early in training, representing part of speech is the natural way to get initial high performance.", "However, as training progresses, it increasingly benefits the model to represent categories with longer-range dependencies, such as topic.", "One direction for future work is exploring how generalization interacts with the correlations between LMs and tag predictors.", "It may be that a faithful encoding of a property like POS tag indicates that the LM is relying more on linguistic structure than on memorizing specific phrases, and is therefore associated with a more general model.", "If these measurements of structure encoding are associated with more general models, we might introduce regularizers or other modifications that explicitly encourage correlation with a tagging task.", "Combes et al. (2018) identified the phenomenon of gradient starvation, meaning that while frequent and unambiguous features are learned quickly in training, they slow down the learning of rarer features.",
"For example, artificially brightening images according to their class leads to a delay in learning to represent the more complex natural class features.", "Although it is tempting to claim that semantic structure is learned using syntactic structure as natural scaffolding, it is possible that the simple predictive power of POS is acting as an attractor and starving semantic features that are rarer and more ambiguous.", "A possible direction for future work would be to explore which of these explanations is true, possibly by decorrelating particular aspects of linguistic structure from language modeling representations.", "The techniques in this paper could be applied to better understand the high performance of a system like ELMo (Peters et al., 2018).", "Different layers in such a system are useful for different tasks, and this effect could be understood in terms of the gradual divergence between the layers and their respective convergence to representations geared toward a single task.", "We thank Denis Emelin, Sameer Bansal, Toms Bergmanis, Maria Corkery, Sharon Goldwater, Sorcha Gilroy, Aibek Makazhanov, Yevgen Matusevych, Kate McCurdy, Janie Sinclair, Ida Szubert, Nikolay Bogoychev, Clara Vania, and the anonymous reviewers for helpful discussion and comments on drafts.", "We thank Matthew Summers for assistance with visualisations." ]
[ "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other" ]
[ "Showing items that do not match search query intent degrades customer experience in e-commerce.", "These mismatches result from counterfactual biases of the ranking algorithms toward noisy behavioral signals such as clicks and purchases in the search logs.", "Mitigating the problem requires a large labeled dataset, which is expensive and time-consuming to obtain.", "In this paper, we develop a deep, end-to-end model that learns to effectively classify mismatches and to generate hard mismatched examples to improve the classifier.", "We train the model end-to-end by introducing a latent variable into the cross-entropy loss that alternates between using the real and generated samples.", "This not only makes the classifier more robust but also boosts the overall ranking performance.", "Our model achieves a relative gain compared to baselines by over 26% in F-score, and over 17% in Area Under PR curve.", "On live search traffic, our model gains significant improvement in multiple countries.", "Deep learning models have shown excellent performance in the natural language domain, and this success has inspired practitioners to adapt these models to information retrieval tasks (Mitra et al., 2017; Huang et al., 2013).", "However, deep learning has not succeeded in these tasks due to the lack of massive labeled datasets (Dehghani et al., 2017).", "Another reason is that word-based representations (Mikolov et al., 2013; Pennington et al., 2014) are less useful in representing complex, informal search queries (Xiong et al., 2017) and hence provide limited understanding of the search intent.", "In the absence of explicit knowledge of which documents are matched with a search query and which are mismatched, it is hard to learn robust deep learning models that understand the query intent and find high-quality, relevant documents.", "Text-based product search is even more challenging.", "Simple modifications to the input query (or a product title) can completely change the search intent (or the product type, respectively).", "Take, for example, the query gray iPhone X by which a user is looking for a specific phone.", "Slightly modified queries such as iPhone X charger and case for iPhone X refer to different products.", "Therefore, it is hard for distributed representations to capture the nuances.", "Moreover, noisy user-behavioral signals from clicks and purchases (e.g., users purchased a phone while searching for a charger) can lead to biases in the ranking algorithms.", "As such, even top-ranked items may not match the search intent.", "In this paper, we consider the problem of identifying query-item mismatches to enhance the ranking performance in product search.", "This task typically requires a large labeled dataset of matches and mismatches that we will respectively refer to as negative and positive samples.", "Even if we can partly afford the expensive and time-consuming labeling, acquired datasets are unbalanced and lack hard positive samples, preventing the classifier from learning a robust decision boundary.", "However, the above examples gray Iphone X and Iphone X charger motivate that meaningful positive samples can be artificially generated by leveraging the labeled data.", "In fact, we can heuristically construct a large number of negatives by observing which items are commonly purchased in response to the corresponding query.", "The question is that can we use such negatives to synthesize hard-to-classify positives to robustify the classifier?", "We illustrate the goal of the 
"To this end, we develop a deep, end-to-end model that learns to identify mismatched query-item pairs and is also capable of generating mismatched queries given an item.", "Figure 1 (best seen in color) illustrates this for the query 'running shoes for men'.", "The task of the generator is twofold: it has to generate hard-to-classify samples, so that the classifier learns a more robust decision boundary, and it also needs to generate realistic queries.", "Using matched query-item pairs allows the generator to synthesize hard-to-classify mismatches based on an efficient encoder-decoder architecture.", "This has a distinct advantage over generating samples from noise, as in Generative Adversarial Networks (Goodfellow et al., 2014; Wang et al., 2017), or over dithering the learned representations to make the model more robust (Miyato et al., 2018).", "We include our classifier and generator in an end-to-end model.", "The classifier only requires continuous representations of the generated query as the second input instead of a discrete text sequence.", "This key property enables us to use efficient gradient-based optimization techniques and to bypass reinforcement learning-based methods (Jia and Liang, 2017), which are significantly more complex, as well as recently developed heuristic approaches for generating adversarial text samples (Alzantot et al., 2018).", "To achieve this, we modify the objective function in a way that makes end-to-end training possible via sampling a binary latent variable, avoiding the min-max optimization used for GANs (Miyato et al., 2018; Wang et al., 2017).", "We perform extensive experiments on a mismatch dataset from an e-commerce company.", "The proposed model outperforms deep learning baselines by over 26% in F-score and over 17% in relative AUPR score, and performs significantly better than GBDT models, which are widely used in practice.", "Including the query generator helps achieve higher gains than merely dithering the vector representation of the query.", "We also show that the generative model can indeed generate hard-to-classify mismatches.", "When integrated with the ranking component of a real-world product search engine, our model outperforms the baseline methods in multiple countries in an online A/B test evaluation.", "Let x = (I, Q) denote a pair of item title and textual query, and let y(I, Q) denote its corresponding label.", "y = 1 if the pair is mismatched and y = 0 otherwise.", "Assume we can obtain from the search logs many matched samples, which we use to generate more positives.", "These samples are not human-labeled but instead inferred from behavioral signals such as frequent purchases.", "We aim to build a deep classifier that takes two text sequences x_i = (I_i, Q_i) and classifies whether the pair is mismatched or not.", "At the same time, we want the model to generate a new sample (I, Q_gen) with y_gen = 1, given (I, Q) with y = 0.", "Next, we discuss our proposed model.", "We present our proposed model, namely QUARTS (QUery-based Adversarial learning for Robust Textual Search), in Figure 2.
QUARTS is composed of three components:", "(i) an LSTM- and attention-based classifier,", "(ii) a variational encoder-decoder query generator (VED), and", "(iii) a state combiner.", "Due to space constraints, we defer the details of", "(i) and", "(ii) to the appendix.", "The LSTM classifier", "(i) is adapted from the entailment model of Rocktäschel et al. (2015), with some changes to fit the product search task (see Appendix A.1).", "The VED generator", "(ii) takes a matched pair (I, Q) as input and outputs a new query Q_gen, so that the pair (I, Q_gen) is mismatched while Q_gen stays lexically similar to Q.", "As an example, if I = 'Apple iPhone X, space gray' and Q = 'gray iPhone X' is a matched pair, we can generate Q_gen = 'iPhone X case' given I.", "In this case, Q_gen is similar to Q, but (I, Q_gen) constitutes a product mismatch.", "To have an end-to-end model, we combine the query representations computed by the classifier and the generator to form a proper input to the attention layer.", "We need to make sure that the modifications still allow us to efficiently backpropagate the gradients of the loss function during training.", "To achieve this, we add a merging layer, shown by the orange box in Figure 2. This layer computes s · H_gen + (1 - s) · H, with s = (1 - y(I, Q)) · z, where H and H_gen are the corresponding LSTM representations of the input query Q and of Q_gen, and z ∼ Bernoulli(p) is a random binary variable that controls whether the input query Q or the generated query Q_gen is used.", "When z = 0, QUARTS simply computes the probability of mismatch for the real pair.", "Let us explain how the real label y and the switch z combine to yield the desired outputs.", "When y = 1, the sample (I, Q) is a real positive, and we want to leverage it to train the classifier f(·).", "In this case, s = 0 and the attention layer only takes H as input.", "When y = 0, we can either use this sample to train the classifier or use it to generate adversarial representations H_gen.", "This process is controlled by z.", "When z = 1, we use H_gen; otherwise, we use H.", "The value of z determines whether we want to use the datapoint as-is for training, or instead use the fake query via the VED module.", "A second consideration is how to enable efficient training of f(·) and the generator g(·).", "Let x_gen = (I, Q_gen) be the datapoint we will use to train f(·) using the output from g(·).", "In this case, since y = 0 and z = 1, we use z as a proxy label to train f(·).", "For samples i = 1, 2, ..., N, we sample z_i ∼ Bernoulli(p) for some p ∈ [0, 1) to decide which negative samples have their labels flipped. We modify the cross-entropy loss as follows, with L_θ being the weighted cross-entropy loss: (1/N) Σ_{i=1}^{N} [(1 - s_i) · L_θ(x_i, y_i) + s_i · L_θ(g(x_i), z_i)]. (1) Note that (1) is differentiable in θ, and notably in H_gen, the generated representation of Q_gen. Since we do not use the actual generated query, we need not resort to heuristics or policy gradient-based optimization methods to minimize (1). Before training QUARTS end-to-end, we pre-train the classifier and the VED on appropriate data. The pseudocode of the end-to-end training is shown in Algorithm 1.",
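Since the merging layer and Eq. (1) carry the paper's central trick, here is a hypothetical PyTorch sketch of one training step. The classifier interface, tensor shapes, and the positive-class weight are our assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def quarts_step(classifier, H, H_gen, y, p=0.5, pos_weight=5.0):
    """One step of the latent-switch loss in Eq. (1).
    H, H_gen: encodings of the real query Q and generated query Q_gen,
              shape (batch, dim); y: 1 = mismatched, 0 = matched (float)."""
    z = torch.bernoulli(torch.full_like(y, p))     # latent switch
    s = (1.0 - y) * z                              # only negatives are swapped
    H_mix = s.unsqueeze(-1) * H_gen + (1.0 - s).unsqueeze(-1) * H
    logits = classifier(H_mix).squeeze(-1)
    target = torch.where(s.bool(), z, y)           # z is a proxy label when swapped
    weight = torch.ones_like(target) + (pos_weight - 1.0) * target
    return F.binary_cross_entropy_with_logits(logits, target, weight=weight)
```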
"Algorithm 1 (QUARTS training procedure). Require: N samples of labeled data (I, Q, y(I, Q)), M negative samples from the search log, and sampling probability p. Step 1: using the labeled data, pre-train the classifier. Step 2: create (I, Q, Q_mis) tuples T using the labeled data, so that y(I, Q) = 0 and y(I, Q_mis) = 1. Step 3: initialize the VED encoder with the trained classifier, and use the created tuples to pre-train the VED generator. Step 4: concatenate the human-annotated and log data to form M + N samples D. Step 5: perform end-to-end training on D, where in each epoch, for each i ∈ [M + N], sample z ∼ Bernoulli(p), set s = (1 - y_i(I, Q)) · z, and use s and (I, Q, y(I, Q)) to perform one step of learning on the end-to-end model.", "3 Experiments and Results.", "We used a human-labeled dataset of query-item pairs, obtained from an e-commerce search platform.", "There are in total N = 3.2M pairs.", "Of these, only a small fraction are mismatches.", "A separate test set of 100K labeled pairs was used to evaluate all methods.", "We further have 3M query-item pairs that are deemed matched by considering items that are purchased frequently in response to those queries in the search logs.", "This acts as the augmentation dataset for the QUARTS model.", "For all encoders and decoders, we use an LSTM with a hidden size of 300.", "The inputs to the encoder are 300-dimensional word embeddings trained separately for queries and item titles.", "The word embeddings were trained using word2vec on a corpus of anonymized search engine queries, as well as item titles from the catalog.", "The models were trained using Adam (Kingma and Ba, 2014), and we tuned the classification part (i.e., excluding the variational decoder) on a validation dataset.", "We obtained the best performance with an initial learning rate of 10^-4.", "The learning rate was decayed by a factor of 0.8 after 10 epochs.", "The dropout probability was 0.1.", "The batch size was 128.", "Because of the imbalanced nature of the labeled data, we up-weighted the positive samples; in the cross-entropy loss for classification, we set the positive-class weight to 5.", "To pretrain the VED, we used the annotated training data and generated (I, Q, Q_gen) tuples as explained in Section A.2.", "Since we are explicitly interested in training the VED to generate Q_gen with y(I, Q_gen) = 1 given (I, Q) with y(I, Q) = 0, we consider only the annotated items that have both positively and negatively annotated queries, and generate the tuples from those.", "The previously pretrained encoder was fixed, and only the decoder was trained, using Adam with an initial learning rate of 10^-3.", "We finally merged the LSTM encoders for query and item and the VED decoder for the query with the other layers described in the previous sections to train the model end to end.", "The classifier f(·) is pretrained on the human-annotated data.", "For the end-to-end model, we use the pretrained classifier and generator, modify the loss function as in (1), and further append to the dataset M = 3M well-matched items from anonymized user logs, where we assume that items purchased in response to a query are matched (y(I, Q) = 0).", "We evaluated our models using the Area under the Precision-Recall curve (APR) and the F1-score at the best operating point, all evaluated on the test set.", "To evaluate the generation task, we used BLEU scores.", "In addition, we had human annotators judge generated item-query pairs.", "These annotators were trained to identify whether a generated pair is a match or a mismatch.", "We used a GBDT model as a baseline.",
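A compact rendering of the end-to-end phase (Step 5 of Algorithm 1) might look as follows; `model.step_loss` is a hypothetical method standing in for the merge-and-classify step sketched earlier, and the hyperparameter values are illustrative only.

```python
import torch

def train_end_to_end(model, labeled_data, log_data, p=0.5, epochs=5, lr=1e-4):
    """labeled_data: human-annotated (I, Q, y) triples;
    log_data: behaviorally matched triples with y = 0."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    data = labeled_data + log_data                 # the M + N samples D
    for _ in range(epochs):
        for I, Q, y in data:
            z = torch.bernoulli(torch.tensor(float(p)))
            s = (1.0 - float(y)) * z               # switch only negatives
            loss = model.step_loss(I, Q, float(y), s)   # Eq. (1), one sample
            opt.zero_grad()
            loss.backward()
            opt.step()
```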
as a baseline.", "We used user-item features for this model similarly to traditional ranking and relevance models.", "We also applied a DSSM-style model (namely DSSM) where query and item word embeddings were concatenated as input to a stack of dense layers.", "We also used the BERT (Devlin et al., 2018) embeddings for the query and item title sequences and passed them through the aforementioned model.", "A final baseline we evaluated against was the MatchPyramid (Pang et al., 2016), which has shown to outperform several baselines for matching and question-answering tasks.", "All hyperparameters were chosen via a simple grid search on a validation set.", "All the results are reported on the test set.", "The classification results of all considered models are shown in Table 1. We also compare our model trained on the original training data and one augmented by naively adding the 3 M matched pairs.", "For confidentiality reasons, we report the performance relative to some baseline.", "We see from Table 1 that purely augmenting the training data with the matched samples does not improve but worsens the base classifier.", "Table 2 shows the performance of the QUARTS compared with MatchPyramid models and the DSSM model initialized with pretrained BERT embeddings.", "The end-to-end QUARTS model beats the BERT DSSM baseline by over 17% in APR, and over 26% in F-score.", "To validate the effectiveness of QUARTS in improving the ranking performance for the search task, we performed an A/B test on live search traffic in two countries, to account for varying traffic patterns.", "Compared to the existing baselines, the QUARTS model yielded a 12 .", "2% and 5 .", "75% increase in online metrics for the two countries respectively, which are significant given the task.", "We used a held-out 10% of the ( I, Q, Q gen ) data to evaluate the VED generator.", "In order to make a fair evaluation, we ensured that the items that appeared in training set were not in the validation set.", "The validation BLEU scores are shown in Table 4.", "BLEU scores do not indicate whether or not a generated queries is a realistic modification of the original query.", "Therefore, we also had 2500 generated pairs annotated by human experts who were specifically trained to decide if a query-item Item title ( I ) Query ( Q ) Generated query ( Q gen ) ESR iPhone 8/7 screen protector tempered glass... 
"Table 3 (examples of adversarial query generations from the VED query generator; columns are Item title (I) | Query (Q) | Generated query (Q_gen)): ESR iPhone 8/7 screen protector tempered glass... | iPhone 8 curved screen protector | iPhone 8 plus cases; JETech case for iPad Pro 12.9 inch | ipad pro 12.9 speck shell | iPad pro 12.9; Mounting dream full motion wall mounts bracket | lg oled tv mount | 55 inch flat screen tv; Intel core i7-8700K desktop processor 6 cores | core i7 8700k | GTX 1080; Chicco pocket snack booster seat | peg perego high chair | baby dining set; Comfy sheets ultra luxury 100% Egyptian cotton sheet set | king size sheets | king size beds for sale.", "The accuracy of 82% in Table 4 suggests that most of the generated pairs are meaningful.", "Here, the accuracy is the fraction of the pairs that were actually labeled as mismatches.", "Table 4 (validation BLEU scores of generated queries from the variational encoder-decoder generator, and mismatch accuracy as reported by humans): VED: BLEU-1 35.15, BLEU-2 31.40, BLEU-3 24.84, BLEU-4 20.76, Acc 0.82.", "We provide some qualitative results from the VED in Table 3.", "The generator's goal is to slightly modify the input query Q, so that the resulting (I, Q_gen) sample is realistic.", "A source query for a screen protector is mapped to a query for a phone case, and a source query for a tv mount is mapped to one for a flat screen tv.", "The goal of the word-by-word attention layer is to understand which parts of the user query and item title are important for deciding whether the pair matches or not.", "Importantly, item titles are typically long, and contain information such as brand, color, and size.", "Not all of these facets may be relevant for a particular user query.", "Figure 3 shows the behavior of the word-by-word attention layer for a matched and a mismatched pair.", "In both cases, we see that the correct words are attended to, helping the classifier make the distinction between a matched and a mismatched pair.", "Figure 4 shows another example.", "We developed an end-to-end model with hard-to-classify query generation for retrieval in e-commerce product search.", "We built upon ideas from textual entailment, and used a word-by-word attention layer to help create item representations conditioned on an input query.", "We trained a generator that yields representations of queries that are mismatched to a source item, while at the same time being realistic.", "This allows us to address the scarcity of hard positive samples (Figure 3: word-by-word attention for a mismatched (top) and matched (bottom) query-item pair).", "To train the model end to end, we modified the cross-entropy loss, allowing us to avoid optimizing a minimax objective.", "Experiments on an offline dataset and live product search traffic showed that our method improves significantly over baselines." ]
[ "abstain", "abstain", "abstain", "objective", "method", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "objective", "result" ]
[ "The supervised training of high-capacity models on large datasets containing hundreds of thousands of document-summary pairs is critical to the recent success of deep learning techniques for abstractive summarization.", "Unfortunately, in most domains (other than news) such training data is not available and cannot be easily sourced.", "In this paper we enable the use of supervised learning for the setting where there are only documents available (e.g., product or business reviews) without ground truth summaries.", "We create a synthetic dataset from a corpus of user reviews by sampling a review, pretending it is a summary, and generating noisy versions thereof which we treat as pseudo-review input.", "We introduce several linguistically motivated noise generation functions and a summarization model which learns to denoise the input and generate the original review.", "At test time, the model accepts genuine reviews and generates a summary containing salient opinions, treating those that do not reach consensus as noise.", "Extensive automatic and human evaluation shows that our model brings substantial improvements over both abstractive and extractive baselines.", "The proliferation of massive numbers of online product, service, and merchant reviews has provided strong impetus to develop systems that perform opinion mining automatically (Pang and Lee, 2008).", "The vast majority of previous work (Hu and Liu, 2006) breaks down the problem of opinion aggregation and summarization into three interrelated tasks involving aspect extraction (Mukher-jee and Liu, 2012), sentiment identification (Pang et al., 2002; Pang and Lee, 2004), and summary creation based on extractive (Radev et al., 2000; Lu et al., 2009) or abstractive methods (Ganesan et al., 2010; Carenini et al., 2013; Gerani et al., 2014; Di Fabbrizio et al., 2014).", "Although potentially more challenging, abstractive approaches seem more appropriate for generating informative and concise summaries, e.g., by performing various rewrite operations (e.g., deletion of words or phrases and insertion of new ones) which go beyond simply copying and rearranging passages from the original opinions.", "Abstractive summarization has enjoyed renewed interest in recent years thanks to the availability of large-scale datasets (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018; Liu et al., 2018; Fabbri et al., 2019) which have driven the development of neural architectures for summarizing single and multiple documents.", "Several approaches (See et al., 2017; Celikyilmaz et al., 2018; Paulus et al., 2018; Gehrmann et al., 2018; Liu et al., 2018; Perez-Beltrachini et al., 2019; Liu and Lapata, 2019; Wang and Ling, 2016) have shown promising results with sequence-to-sequence models that encode one or several source documents and then decode the learned representations into an abstractive summary.", "The supervised training of high-capacity models on large datasets containing hundreds of thousands of document-summary pairs is critical to the recent success of deep learning techniques for abstractive summarization.", "Unfortunately, in most domains (other than news) such training data is not available and cannot be easily sourced.", "For instance, manually writing opinion summaries is practically impossible since an annotator must read all available reviews for a given product or service which can be prohibitively many.", "Moreover, different types of products impose different restrictions on the summaries which might vary in terms of length, 
or the types of aspects being mentioned, rendering the application of transfer learning techniques (Pan and Yang, 2010) problematic.", "Motivated by these issues, Chu and Liu (2019) consider an unsupervised learning setting where there are only documents (product or business reviews) available without corresponding summaries.", "They propose an end-to-end neural model to perform abstractive summarization based on", "(a) an autoencoder that learns representations for each review and", "(b) a summarization module which takes the aggregate encoding of reviews as input and learns to generate a summary which is semantically similar to the source documents.", "Due to the absence of ground truth summaries, the model is not trained to reconstruct the aggregate encoding of reviews; rather, it only learns to reconstruct the encoding of individual reviews.", "As a result, it may not be able to generate meaningful text when the number of reviews is large.", "Furthermore, autoencoders are constrained to use simple decoders lacking attention (Bahdanau et al., 2014) and copy (Vinyals et al., 2015) mechanisms, which have proven useful in the supervised setting, leading to the generation of informative and detailed summaries.", "Problematically, a powerful decoder might be detrimental to the reconstruction objective, learning to express arbitrary distributions of the output sequence while ignoring the encoded input (Kingma and Welling, 2014; Bowman et al., 2016).", "In this paper, we enable the use of supervised techniques for unsupervised summarization.", "Specifically, we automatically generate a synthetic training dataset from a corpus of product reviews, and use this dataset to train a more powerful neural model with supervised learning.", "The synthetic data is created by selecting a review from the corpus, pretending it is a summary, generating multiple noisy versions thereof, and treating these as pseudo-reviews.", "The latter are obtained with two noise generation functions targeting textual units of different granularity: segment noising introduces noise at the word and phrase level, while document noising replaces a review with a semantically similar one.", "We use the synthetic data to train a neural model that learns to denoise the pseudo-reviews and generate the summary.", "This is motivated by how humans write opinion summaries, where denoising can be seen as removing diverging information.", "Our proposed model consists of a multi-source encoder and a decoder equipped with an attention mechanism.", "Additionally, we introduce three modules:", "(a) explicit denoising guides how the model removes noise from the input encodings,", "(b) partial copy enables copying information from the source reviews only when necessary, and", "(c) a discriminator helps the decoder generate topically consistent text.", "We perform experiments on two review datasets representing different domains (movies vs. businesses) and summarization requirements (short vs. longer summaries).", "Results based on automatic and human evaluation show that our method outperforms previous unsupervised summarization models, including the state-of-the-art abstractive system of Chu and Liu (2019), and is on a par with a state-of-the-art supervised model (Wang and Ling, 2016) trained on a small sample of (genuine) review-summary pairs.", "Most previous work on unsupervised opinion summarization has focused on extractive approaches (Carenini et al., 2006; Ku et al., 2006; Paul et al., 2010; Angelidis and Lapata, 2018), where a clustering model groups opinions of the same aspect, and a sentence extraction model identifies text representative of each cluster.",
"Ganesan et al. (2010) propose a graph-based abstractive framework for generating concise opinion summaries, while Di Fabbrizio et al. (2014) use an extractive system to first select salient sentences and then generate an abstractive summary based on hand-written templates (Carenini and Moore, 2006).", "As mentioned earlier, we follow the setting of Chu and Liu (2019) in assuming that we have access to reviews but no gold-standard summaries.", "Their model learns to generate opinion summaries by reconstructing a canonical review from the average encoding of the input reviews.", "Our proposed method is also abstractive and neural-based, but eschews the use of an autoencoder in favor of supervised sequence-to-sequence learning through the creation of a synthetic training dataset.", "Concurrently with our work, Bražinskas et al. (2019) use a hierarchical variational autoencoder to learn a latent code of the summary.", "While they also use randomly sampled reviews for supervised training, our dataset construction method is more principled, making use of linguistically motivated noise functions.", "Our work relates to denoising autoencoders (DAEs; Vincent et al., 2008), which have been used effectively as unsupervised methods for various NLP tasks.", "Earlier approaches have shown that DAEs can be used to learn high-level text representations for domain adaptation (Glorot et al., 2011) and multimodal representations of textual and visual input (Silberer and Lapata, 2014).", "Recent work has applied DAEs to text generation tasks, specifically to data-to-text generation (Freitag and Roy, 2018) and extractive sentence compression (Févry and Phang, 2018).", "Our model differs from these approaches in two respects.", "Firstly, while previous work has adopted trivial noising methods such as randomly adding or removing words (Févry and Phang, 2018) and randomly corrupting encodings (Silberer and Lapata, 2014), our noise generators are more linguistically informed and better suited to the opinion summarization task.", "Secondly, while in Freitag and Roy (2018) the decoder is limited to vanilla RNNs, our noising method enables the use of more complex architectures, enhanced with attention and copy mechanisms, which are known to improve the performance of summarization systems (Rush et al., 2015; See et al., 2017).", "Let X = {x_1, ..., x_N} denote a set of reviews about a product (e.g., a movie or business).", "Our aim is to generate a summary y of the opinions expressed in X.", "We further assume access to a corpus C = {X_1, ..., X_M} containing multiple reviews about M products, without corresponding opinion summaries.", "Our method consists of two parts.", "We first create a synthetic dataset D = {(X, y)} consisting of summary-review pairs.", "Specifically, we sample a review x_i from C, pretend it is a summary, and generate multiple noisy versions thereof (i.e., pseudo-reviews).", "At training time, a denoising model learns to remove the noise from the reviews and generate the summary.", "At test time, the same denoising model is used to summarize actual reviews.", "We use denoising as an auxiliary task for opinion summarization to simulate the fact that summaries tend to omit opinions that do not represent a consensus (i.e., noise in the pseudo-review), but include salient opinions found in most reviews (i.e., the non-noisy parts of the pseudo-review).", "We sample a review as a candidate summary and generate noisy versions thereof, using two functions:",
"(a) segment noising adds noise at the token and chunk level, and", "(b) document noising adds noise at the text level.", "The noise functions are illustrated in Figure 1.", "Summary Sampling: Summaries and reviews follow different writing conventions.", "For example, reviews are subjective and often include first-person singular pronouns such as 'I' and 'my', as well as unnecessary characters or symbols.", "They may also vary in length and detail.", "We discard reviews from corpus C which display an excess of these characteristics, based on a list of domain-specific constraints (detailed in Section 4).", "We sample a review y from the filtered corpus, which we use as the candidate summary.", "Segment Noising: Given candidate summary y = {w_1, ..., w_L}, we create a set of segment-level noisy versions X^(c) = {x^(c)_1, ..., x^(c)_N}.", "Previous work has adopted noising techniques based on random n-gram alterations (Févry and Phang, 2018); we instead rely on two simple, linguistically informed noise functions.", "Firstly, we train a bidirectional language model (BiLM; Peters et al., 2018) on the review corpus C.", "For each word in y, the BiLM predicts a softmax word distribution which can be used to replace words.", "Secondly, we utilize FLAIR (Akbik et al., 2019), an off-the-shelf state-of-the-art syntactic chunker that leverages contextual embeddings, to shallow parse each review r in corpus C.", "This results in a list of chunks C_r = {c_1, ..., c_K} with corresponding syntactic labels G_r = {g_1, ..., g_K} for each review r, which we use for replacing and rearranging chunks.", "We add noise to y via token- and chunk-level alterations.", "Token-level alterations are performed by replacing tokens in y with probability p_R.", "Specifically, we replace token w_j in y by sampling a token w'_j from the BiLM-predicted word distribution (see Figure 1).", "We use nucleus sampling (Holtzman et al., 2019), which samples from a rescaled distribution over the smallest set of words whose cumulative probability exceeds a threshold p_N, instead of sampling from the original distribution.", "This has been shown to yield better samples in comparison to top-k sampling, mitigating the problem of text degeneration (Holtzman et al., 2019).", "Chunk-level alterations are performed by removing and inserting chunks in y, and rearranging them based on a sampled syntactic template.", "Specifically, we first shallow parse y using FLAIR, obtaining a list of chunks C_y, each of which is removed with probability p_R.", "We then randomly sample a review r from our corpus and use its sequence of chunk labels G_r as a syntactic template, which we fill in with chunks from C_y (sampled without replacement), if available, or with chunks from corpus C otherwise.", "This results in a noisy version x^(c) (see Figure 1 for an example).", "Repeating the process N times produces the noisy set X^(c).", "We describe this process step-by-step in the Appendix.", "Document Noising: Given candidate summary y = {w_1, ..., w_L}, we also create another set of document-level noisy versions X^(d) = {x^(d)_1, ..., x^(d)_N}.", "Instead of manipulating parts of the summary, we altogether replace it with a similar review from the corpus and treat it as a noisy version.", "Specifically, we select the N reviews that are most similar to y and discuss the same product.", "To measure similarity, we use IDF-weighted ROUGE-1 F1 (Lin, 2004), where we calculate the lexical overlap between the review and the candidate summary, weighted by token importance: overlap = Σ_{w_j ∈ x} IDF(w_j) · 1(w_j ∈ y); P = overlap / |x|; R = overlap / |y|; F1 = (2 · P · R) / (P + R), where x is a review in the corpus, 1(·) is an indicator function, and P, R, and F1 are the ROUGE-1 precision, recall, and F1, respectively.",
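A direct transcription of this similarity score, assuming pre-tokenized inputs and a precomputed IDF table; the function name and interface are ours.

```python
def idf_weighted_rouge1_f1(review, summary, idf):
    """review, summary: lists of tokens; idf: dict token -> IDF weight.
    Returns the IDF-weighted ROUGE-1 F1 used to rank document-noise
    candidates."""
    summary_set = set(summary)
    overlap = sum(idf.get(w, 0.0) for w in review if w in summary_set)
    if not review or not summary:
        return 0.0
    p = overlap / len(review)
    r = overlap / len(summary)
    return 2 * p * r / (p + r) if (p + r) > 0 else 0.0

# The N same-product reviews with the highest score become X^(d).
```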
"The reviews with the highest F1 are selected as the noisy versions of y, resulting in the noisy set X^(d) (see Figure 1).", "We create a total of 2N noisy versions of y, i.e., X = X^(c) ∪ X^(d), and obtain our synthetic training data D = {(X, y)} by generating |D| pseudo-review-summary pairs.", "Both noising methods are necessary to achieve aspect diversity among the input reviews.", "Segment noising creates reviews which may mention aspects not found in the summary, while document noising creates reviews with content similar to the summary.", "Relying on either noise function alone decreases performance (see the ablation studies in Section 5).", "We show examples of these noisy versions in the Appendix.", "We summarize (i.e., denoise) the input X with our model, which we call DENOISESUM, illustrated in Figure 2. A multi-source encoder produces an encoding for each pseudo-review.", "The encodings are further corrected via an explicit denoising module, and then fused into an aggregate encoding for each type of noise.", "Finally, the fused encodings are passed to a decoder with a partial copy mechanism to generate the summary y.", "Multi-Source Encoder: For each pseudo-review x_j ∈ X, where x_j = {w_1, ..., w_L} and w_k is the k-th token in x_j, we obtain contextualized token encodings {h_k} and an overall review encoding d_j with a BiLSTM encoder (Hochreiter and Schmidhuber, 1997): h→_k = LSTM_f(w_k, h→_{k-1}); h←_k = LSTM_b(w_k, h←_{k+1}); h_k = [h→_k; h←_k]; d_j = [h→_L; h←_1], where h→_k and h←_k are the forward and backward hidden states of the BiLSTM at timestep k, and ; denotes concatenation (see module (a) in Figure 2).", "Explicit Denoising: The model should be able to remove noise from the encodings before decoding the text.", "While previous methods (Vincent et al., 2008; Freitag and Roy, 2018) implicitly assign the denoising task to the encoder, we propose an explicit denoising component (see module (b) in Figure 2).", "Specifically, we create a correction vector c^(c)_j for each pseudo-review encoding d^(c)_j which resulted from the application of segment noise.", "c^(c)_j represents the adjustment needed to denoise each dimension of d^(c)_j and is used to create d̂^(c)_j, a denoised encoding of d^(c)_j.", "Figure 2: The DENOISESUM architecture, with modules (a) encoder, (b) explicit denoising, (c) noise-specific fusion, (d) decoder with partial copy, and (e) discriminator.",
"The denoised encoding is computed as: q = (1/N) Σ_{j=1}^{N} d^(c)_j; c^(c)_j = tanh(W^(c)_d [d^(c)_j; q] + b^(c)_d); d̂^(c)_j = d^(c)_j + c^(c)_j, where q represents a mean review encoding and functions as a query vector, W and b are learned parameters, and the superscript (c) signifies segment noising.", "We can interpret the correction vector as removing or adding information to each dimension when its value is negative or positive, respectively.", "Analogously, we obtain d̂^(d)_j for the pseudo-review encodings d^(d)_j which have been created with document noising.", "Noise-Specific Fusion: For each type of noise (segment and document), we create a noise-specific aggregate encoding by fusing the denoised encodings into one (see module (c) in Figure 2).", "Given {d̂^(c)_j}, the set of denoised encodings corresponding to segment-noisy inputs, we create the aggregate encoding s^(c)_0: g^(c)_j = softmax(W^(c)_f d̂^(c)_j + b^(c)_f); s^(c)_0 = Σ_j d̂^(c)_j ⊙ g^(c)_j, where g_j is a gate vector with the same dimensionality as the denoised encodings.", "Analogously, we obtain s^(d)_0 from the denoised encodings {d̂^(d)_j} corresponding to document-noisy inputs.", "Decoder with Partial Copy: Our decoder generates a summary given encodings s^(c)_0 and s^(d)_0 as input.", "An advantage of our method is its ability to incorporate techniques used in supervised models, such as attention (Bahdanau et al., 2014) and copy (Vinyals et al., 2015).", "Pseudo-reviews created using segment noising include various chunk permutations, which can result in ungrammatical and incoherent text.", "Using a copy mechanism on these texts may hurt the fluency of the output.", "We therefore allow copy on document-noisy inputs only (see module (d) in Figure 2).", "We use two LSTM decoders for the aggregate encodings, one equipped with attention and copy mechanisms, and one with attention but without a copy mechanism.", "We then combine the results of these decoders using a learned gate.", "Specifically, token w_t at timestep t is predicted as: s^(c)_t, p^(c)(w_t) = LSTM_att(w_{t-1}, s^(c)_{t-1}); s^(d)_t, p^(d)(w_t) = LSTM_att+copy(w_{t-1}, s^(d)_{t-1}); g_t = σ(W_p [w_{t-1}; s^(c)_t; s^(d)_t] + b_p); p(w_t) = g_t · p^(c)(w_t) + (1 - g_t) · p^(d)(w_t), where s_t and p(w_t) are the hidden state and predicted token distribution at timestep t, and σ(·) is the sigmoid function.", "We use a maximum likelihood loss to optimize the generation probability distribution based on summary y = {w_1, ..., w_L} from our synthetic dataset: L_gen = - Σ_{t=1}^{L} log p(w_t).", "The decoder depends on L_gen to generate meaningful, denoised outputs.", "As this is a rather indirect way to optimize our denoising module, we additionally use a discriminative loss providing direct supervision.", "The discriminator operates on the output of the fusion module and predicts the category distribution p(z) of the output summary y (see module (e) in Figure 2).", "The type of categories varies across domains.", "For movies, categories can be information about their genre (e.g., drama, comedy), while for businesses, their specific type (e.g., restaurant, beauty parlor).", "This information is often included in reviews, but we assume otherwise and use an LDA topic model (Blei et al., 2003) to infer p(z) (we present experiments with human-labeled and automatically induced categories in Section 5).", "An MLP classifier takes as input the aggregate encodings s^(c) and s^(d) and infers q(z).",
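The explicit denoising and fusion modules, (b) and (c) above, amount to a handful of tensor operations. Below is a hypothetical PyTorch sketch, one instance per noise type; normalizing the fusion gates across reviews (softmax over the review axis, per dimension) is our reading of the equation, not code from the paper.

```python
import torch
import torch.nn as nn

class DenoiseAndFuse(nn.Module):
    """Modules (b) and (c): correction-vector denoising followed by
    gated fusion into a single aggregate encoding."""
    def __init__(self, dim):
        super().__init__()
        self.correct = nn.Linear(2 * dim, dim)   # produces c_j
        self.gate = nn.Linear(dim, dim)          # produces the fusion gate

    def forward(self, d):                        # d: (N, dim), one row per review
        q = d.mean(dim=0, keepdim=True)          # mean encoding as query vector
        c = torch.tanh(self.correct(torch.cat([d, q.expand_as(d)], dim=-1)))
        d_hat = d + c                            # denoised encodings
        g = torch.softmax(self.gate(d_hat), dim=0)   # gates sum to 1 over reviews
        return (d_hat * g).sum(dim=0)            # aggregate encoding s_0
```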
"The discriminator is trained by calculating the KL divergence between the predicted and actual category distributions q(z) and p(z): q(z) = MLP_d(s^(c), s^(d)); L_disc = D_KL(p(z) ‖ q(z)).", "The final objective is the sum of both loss functions: L = L_gen + L_disc.", "At test time, we are given genuine reviews X as input instead of the synthetic ones.", "We generate a summary by treating X as both X^(c) and X^(d), i.e., as the outcome of segment and document noising.", "Dataset: We performed experiments on two datasets which represent different domains and summary types.", "The Rotten Tomatoes dataset (http://www.ccs.neu.edu/home/luwang/data.html) of Wang and Ling (2016) contains a large set of reviews for various movies written by critics.", "Each set of reviews has a gold-standard consensus summary written by an editor.", "Table 1 reports dataset statistics, where the Train* column refers to the synthetic data we created through noising (Section 3.1): Rotten Tomatoes: #movies 25k / 536 / 737 (Train* / Dev / Test), #reviews/movie 40.0 / 98.0 / 100.3, #tokens/review 28.4 / 23.5 / 23.6, #tokens/summary 22.7 / 23.6 / 23.8, corpus size 245,848; Yelp: #businesses 100k / 100 / 100, #reviews/business 8.0 / 8.0 / 8.0, #tokens/review 72.3 / 70.3 / 67.8, #tokens/summary 64.8 / 70.9 / 67.3, corpus size 2,320,800.", "We follow the partition of Wang and Ling (2016) but do not use ground truth summaries during training, to simulate our unsupervised setting.", "The Yelp dataset (https://github.com/sosuperic/MeanSum) of Chu and Liu (2019) includes a large training corpus of reviews without gold-standard summaries.", "The latter are provided for the development and test sets and were generated by an Amazon Mechanical Turker.", "We follow the splits introduced in their work.", "A comparison between the two datasets is provided in Table 1.",
"As can be seen, Rotten Tomatoes summaries are generally short, while Yelp summaries are roughly three times longer.", "Interestingly, there are many more reviews to summarize in Rotten Tomatoes (approximately 100 reviews), while the input reviews in Yelp are considerably fewer (i.e., 8 reviews).", "Implementation: To create the synthetic dataset, we sample candidate summaries using the following constraints: (1) the number of non-alphanumeric symbols must be less than 3, (2) there must be no first-person singular pronouns (this constraint is not used for Yelp), and (3) the number of tokens must be between 20 and 30 (50 and 90 for Yelp).", "We set p_R to 0.8 and 0.4 for token and chunk noise, respectively, and p_N to 0.9.", "For each review-summary pair, the number of reviews N is sampled from the Gaussian distribution N(μ, σ²), where μ and σ are the mean and standard deviation of the number of reviews in the development set.", "We created 25k (Rotten Tomatoes) and 100k (Yelp) pseudo-reviews for our synthetic datasets (see Table 1).", "We set the dimensions of the word embeddings to 300, the vocabulary size to 50k, the hidden dimensions to 256, the batch size to 8, and dropout (Srivastava et al., 2014) to 0.1.", "Table 2 (automatic evaluation on Rotten Tomatoes; columns are METEOR / ROUGE-SU4 / R1 / R2 / RL): ORACLE 12.10 / 12.01 / 30.94 / 10.75 / 24.95; LEXRANK* 5.59 / 3.98 / - / - / -; WORD2VEC 6.14 / 4.04 / 13.93 / 2.10 / 10.81; SENTINEURON 7.02 / 4.77 / 15.90 / 2.01 / 11.74; OPINOSIS* 6.07 / 4.90 / - / - / -; MEANSUM 6.07 / 4.41 / 15.79 / 1.94 / 12.26; DENOISESUM 8.30 / 6.84 / 21.26 / 4.61 / 16.27; Best Supervised* 8.50 / 7.39 / 21.19 / 7.64 / 17.80.",
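The candidate-summary constraints in the Implementation paragraph above translate into a short filter. This is a sketch under our own reading of those constraints (the regex and the pronoun list are assumptions), with the Rotten Tomatoes length bounds as defaults.

```python
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def is_candidate_summary(review, min_tokens=20, max_tokens=30,
                         check_pronouns=True):
    """Constraints (1)-(3): few non-alphanumeric symbols, no first-person
    singular pronouns (skipped for Yelp), and a token-count window
    (use 50-90 for Yelp)."""
    tokens = review.split()
    if not (min_tokens <= len(tokens) <= max_tokens):
        return False
    if len(re.findall(r"[^A-Za-z0-9\s]", review)) >= 3:
        return False
    if check_pronouns and any(t.lower().strip(".,!?") in FIRST_PERSON
                              for t in tokens):
        return False
    return True
```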
original training split.", "Examples of system summaries are shown in the Appendix.", "Automatic Evaluation Our results on Rotten Tomatoes are shown in Table 2. Following previous work (Wang and Ling, 2016; Amplayo and Lapata, 2019) we report five metrics: METEOR (Denkowski and Lavie, 2014), a recall-oriented metric that rewards matching stems, synonyms, and", "paraphrases; ROUGE-SU4 (Lin, 2004), the recall of unigrams and skip-bigrams of up to four words; and the F1-score of ROUGE-1/2/L, which respectively measures word-overlap, bigram-overlap, and the longest common subsequence between system and reference summaries.", "Results on Yelp are given in Table 3 where we compare systems using ROUGE-1/2/L F1, following Chu and Liu (2019).", "As can be seen, DENOISESUM outperforms all competing models on both datasets.", "When compared to MEANSUM , the difference in performance is especially large on Rotten Tomatoes, where we see a 4.01 improvement in ROUGE-L.", "We believe this is because MEANSUM does not learn to reconstruct encodings of aggregated inputs, and as a result it is unable to produce meaningful summaries when the number of input reviews is large, as is the case for Rotten Tomatoes.", "In fact, the best extractive model, SENTINEURON , slightly outperforms MEANSUM on this dataset across metrics with the exception of ROUGE-L.", "When compared to the best supervised system, DENOISESUM performs comparably on several metrics, specifically METEOR and ROUGE-1, however there is still a gap on ROUGE-2, showing the limitations of systems trained without gold-standard summaries.", "Table 4 presents various ablation studies on Rotten Tomatoes (RT) and Yelp which assess the contribution of different model components.", "Our experiments confirm that increasing the size of the synthetic data improves performance, and that both segment and document noising are useful.", "We also show that explicit denoising, partial copy, and the discriminator help achieve best results.", "Finally, human-labeled categories (instead of LDA topics) decrease model performance, which suggests that more useful labels can be approximated by automatic means.", "Human Evaluation We also conducted two judgment elicitation studies using the Amazon Mechanical Turk (AMT) crowdsourcing platform.", "The first study assessed the quality of the summaries using Best-Worst Scaling (BWS; Louviere et al., 2015), a less labor-intensive alternative to paired comparisons that has been shown to produce more reliable results than rating scales (Kiritchenko and Mohammad, 2017).", "Specifically, participants were shown the movie/business name, some basic background information, and a gold-standard summary.", "They were also presented with three system summaries, produced by SENTINEURON (best extractive model), MEANSUM (most related unsupervised model), and DENOISESUM .", "Participants were asked to select the best and worst among system summaries taking into account how much they deviated from the ground truth summary in terms of: Informativeness (i.e., does the summary present opinions about specific aspects of the movie/business in a concise manner?), Coherence (i.e., is the summary easy to read and does it follow a natural ordering of facts?), and Grammaticality (i.e., is the summary fluent and grammati-cal?).", "We randomly selected 50 instances from the test set.", "We collected five judgments for each comparison.", "The order of summaries was randomized per participant.", "A rating per system was computed as the percentage of times it 
was chosen as best minus the percentage of times it was selected as worst.", "Results are reported in Table 5, where Inf, Coh, and Gram are shorthands for Informativeness, Coherence, and Grammaticality.", "DENOISESUM was ranked best in terms of informativeness and coherence, while the extractive system SENTINEURON was ranked best on grammaticality.", "This is not entirely surprising since extractive summaries written by humans are by definition grammatical.", "Our second study examined the veridicality of the generated summaries, namely whether the facts mentioned in them are indeed discussed in the input reviews.", "Participants were shown reviews and the corresponding summary and were asked to verify for each summary sentence whether it was fully supported by the reviews, partially supported, or not at all supported.", "We performed this experiment on Yelp only since the number of reviews is small and participants could read them all in a timely fashion.", "We used the same 50 instances as in our first study and collected five judgments per instance.", "Participants assessed the summaries produced by MEANSUM and DENOISESUM .", "We also included GOLD -standard summaries as an upper bound but no output from an extractive system as it by default contains facts mentioned in the reviews.", "Table 5 reports the percentage of fully (Full-Supp), partially (PartSupp), and un-supported (No-Supp) sentences.", "Gold summaries display the highest percentage of fully supported sentences (63.3%), followed by DENOISESUM (55.1%), and MEANSUM (41.7%).", "These results are encouraging, indicating that our model hallucinates to a lesser extent compared to MEANSUM .", "We consider an unsupervised learning setting for opinion summarization where there are only reviews available without corresponding summaries.", "Our key insight is to enable the use of supervised techniques by creating synthetic review-summary pairs using noise generation methods.", "Our summarization model, DENOISESUM , introduces explicit denoising, partial copy, and discrimination modules which improve overall summary quality, outperforming competitive systems by a wide margin.", "In the future, we would like to model aspects and sentiment more explicitly as well as apply some of the techniques presented here to unsupervised single-document summarization.", "We thank the anonymous reviewers for their feedback.", "We gratefully acknowledge the support of the European Research Council (Lapata, award number 681760).", "The first author is supported by a Google PhD Fellowship." ]
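The synthetic-pair construction described above (constraint-based candidate filtering plus Gaussian sampling of the review count) is mechanical enough to sketch in code. The following is a minimal illustration under stated assumptions — the helper names, the pronoun list, and the symbol-counting regex are ours, not the authors' implementation:

```python
import random
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}  # assumed pronoun list

def is_valid_candidate(text, min_tokens=20, max_tokens=30, filter_pronouns=True):
    """Constraints from the paper: (1) fewer than 3 non-alphanumeric symbols,
    (2) no first-person singular pronouns (disabled for Yelp), and
    (3) 20-30 tokens (50-90 for Yelp)."""
    tokens = text.split()
    if len(re.findall(r"[^\w\s]", text)) >= 3:
        return False
    if filter_pronouns and any(t.strip(".,!?").lower() in FIRST_PERSON for t in tokens):
        return False
    return min_tokens <= len(tokens) <= max_tokens

def sample_num_reviews(mu, sigma):
    """Number of input reviews N ~ Gaussian(mu, sigma^2), with mu and sigma
    fit to the review counts of the development set."""
    return max(1, round(random.gauss(mu, sigma)))
```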
[ "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "result", "other", "other", "method", "other", "objective", "method", "method", "method", "other", "other", "objective", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "other", "other", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "other", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "result", "method", "other", "other", "other" ]
[ "Regularization of neural machine translation is still a significant problem, especially in low-resource settings.", "To mollify this problem, we propose regressing word embeddings (ReWE) as a new regularization technique in a system that is jointly trained to predict the next word in the translation (categorical value) and its word embedding (continuous value).", "Such a joint training allows the proposed system to learn the distributional properties represented by the word embeddings, empirically improving the generalization to unseen sentences.", "Experiments over three translation datasets have showed a consistent improvement over a strong baseline, ranging between 0 .", "91 and 2 .", "54 BLEU points, and also a marked improvement over a state-of-the-art system.", "The last few years have witnessed remarkable improvements in the performance of machine translation (MT) systems.", "These improvements are strongly linked to the development of neural machine translation (NMT): based on encoder-decoder architectures (also known as seq2seq), NMT can use recurrent neural networks (RNNs) (Sutskever et al., 2014; Cho et al., 2014; Wu et al., 2016), convolutional neural networks (CNNs) (Gehring et al., 2017) or transformers (Vaswani et al., 2017) to learn how to map a sentence from the source language to an adequate translation in the target language.", "In addition, attention mechanisms (Bahdanau et al., 2015; Luong et al., 2015) help soft-align the encoded source words with the predictions, further improving the translation.", "pointed out by (Elbayad et al., 2018), MLE suffers from two obvious limitations: the first is that it treats all the predictions other than the ground truth as equally incorrect.", "As a consequence, synonyms and semantically-similar words which are often regarded as highly interchangeable with the ground truth are completely ignored during training.", "The second limitation is that MLE-trained systems suffer from exposure bias (Ben-gio et al., 2015; Ranzato et al., 2015) and do not generalize well over the large output space of translations.", "Owing to these limitations, NMT systems still struggle to outperform other traditional MT approaches when the amount of supervised data is limited (Koehn and Knowles, 2017).", "In this paper, we propose a novel regularization technique for NMT aimed to influence model learning with contextual properties.", "The technique nicknamed ReWE from regressing word em-bedding consists of modifying a conventional seq2seq decoder to jointly learn to", "a) predict the next word in the translation (categorical value), as usual, and", "b) regress its word embedding (numer-ical value).", "Figure 1 shows the modified decoder.", "Both predictions are incorporated in the training objective, combining standard MLE with a continuous loss function based on word embeddings.", "The rationale is to encourage the system to learn to co-predict the next word together with its context (by means of the word embedding representation), in the hope of achieving improved generalization.", "At inference time, the system operates as a standard NMT system, retaining the categorical prediction and ignoring the predicted embedding.", "We qualify our proposal as a regularization technique since, like any other regularizers, it only aims to influence the model's training, while leaving the inference unchanged.", "We have evaluated the proposed system over three translation datasets of different size, namely English-French (en-fr), Czech-English (cs-en), and 
Basque-English (eu-en).", "In each case, ReWE has significantly outperformed its baseline, with a marked improvement of up to 2.54 BLEU points for eu-en, and consistently outperformed a state-of-the-art system (Denkowski and Neubig, 2017).", "A substantial literature has been devoted to improving the generalization of NMT systems.", "Fadaee et al. (2017) have proposed a data augmentation approach for low-resource settings that generates synthetic sentence pairs by replacing words in the original training sentences with rare words.", "Kudo (2018) has trained an NMT model with different subword segmentations to enhance its robustness, achieving consistent improvements over low-resource and out-of-domain settings.", "Zhang et al. (2018) have presented a novel regularization method that encourages target-bidirectional agreement.", "Other work has proposed improvements over the use of a single ground truth for training: Ma et al. (2018) have augmented the conventional seq2seq model with a bag-of-words loss under the assumption that the space of correct translations share similar bag-of-words vectors, achieving promising results on a Chinese-English translation dataset; Elbayad et al. (2018) have used sentence-level and token-level reward distributions to smooth the single ground truth.", "Chousa et al. (2018) have similarly leveraged a token-level smoother.", "In a recent paper, Denkowski and Neubig (2017) have achieved state-of-the-art translation accuracy by leveraging a variety of techniques which include: dropout (Srivastava et al., 2014), lexicon bias (Arthur et al., 2016), pre-translation (Niehues et al., 2016), data bootstrapping (Chen et al., 2016), byte-pair encoding (Sennrich et al., 2016) and ensembles of independent models (Rokach, 2010).", "However, to our knowledge none of the mentioned approaches have explicitly attempted to leverage the embeddings of the ground-truth tokens as targets.", "For this reason, in this paper we explore regressing toward pre-trained word embeddings as an attempt to capture contextual properties and achieve improved model regularization.", "The model is a standard NMT model with attention in which we use RNNs for the encoder and decoder.", "Following the notation of (Bahdanau et al., 2015), the RNN in the decoder generates a sequence of hidden vectors, { s 1 , . . . , s m } , given the context vector, the previous hidden state s j 1 and the previous predicted word y j 1 : s j = dec rnn ( s j 1 , y j 1 , c j ) j = 1 , . . . , m (1) where y 0 and s 0 are initializations for the state and label chains.", "Each hidden vector s j (of parameter size S ) is then linearly transformed into a vector of vocabulary size, V , and a softmax layer converts it into a vector of probabilities (Eq. 
2), where $W$ (a matrix of size $V \times S$) and $b$ (a vector of size $V \times 1$) are learnable parameters.", "The predicted conditional probability distribution over the words in the target vocabulary, $p_j$, is given as: $p_j = \mathrm{softmax}(W s_j + b)$ (2) As usual, training attempts to minimize the negative log-likelihood (NLL), defined as: $NLL_{loss} = -\sum_{j=1}^{m} \log(p_j(y_j))$ (3) where $p_j(y_j)$ denotes the probability of the ground-truth word $y_j$.", "The NLL loss is minimized when the probability of the ground truth is one and that of all other words is zero, treating all predictions different from the ground truth as equally incorrect.", "Pre-trained word embeddings (Pennington et al., 2014; Bojanowski et al., 2017; Mikolov et al., 2013) capture the contextual similarities of words, typically by maximizing the probability of word $w_{t+k}$ occurring in the context of center word $w_t$.", "This probability can be expressed as: $p(w_{t+k} \mid w_t)$, $-c \leq k \leq c$, $k \neq 0$, $t = 1, \ldots, T$ (4) where $c$ is the size of the context and $T$ is the total number of words in the training set.", "Traditionally, word embeddings have only been used as input representations.", "In this paper, we instead propose using them in output as part of the training objective, in the hope of achieving regularization and improving prediction accuracy.", "Building upon the baseline model presented in Section 3.1, we have designed a new joint learning setting: our decoder still predicts the probability distribution over the vocabulary, $p_j$ (Eq. 2), while simultaneously regressing the same shared $s_j$ to the ground-truth word embedding, $e(y_j)$.", "The ReWE module consists of two linear layers with a Rectified Linear Unit (ReLU) in between, outputting a vector $e_j$ of word embedding size (Eq. 5).", "Please note that adding this extra module adds negligible computational costs and training time.", "Full details of this module are given in the supplementary material.", "In the experiments, we have explored two cases for the $ReWE_{loss}$: the minimum square error (MSE) 1 and the cosine embedding loss (CEL) 2.", "Finally, the $NLL_{loss}$ and the $ReWE_{loss}$ are combined to form the training objective using a positive trade-off coefficient, $\lambda$: $Loss = NLL_{loss} + \lambda \, ReWE_{loss}$ (7) As mentioned in the Introduction, at inference time we ignore the ReWE output, $e_j$, and the model operates as a standard NMT system.", "1 https://pytorch.org/docs/stable/nn.html#torch.nn.MSELoss", "2 https://pytorch.org/docs/stable/nn.html#torch.nn.CosineEmbeddingLoss", "Table 1: Top: parallel training data — IWSLT16 en-fr, size 219,777, TED talks; IWSLT16 cs-en, size 114,243, TED talks; WMT16 eu-en, size 89,413, IT-domain data. Bottom: validation and test sets — en-fr: TED test 2013+2014 / TED test 2015+2016; cs-en: TED test 2012+2013 / TED test 2015+2016; eu-en: sub-sample of PaCo / IT-domain test.", "We have developed our models building upon the OpenNMT toolkit (Klein et al., 2017) 3.", "For training, we have used the same settings as (Denkowski and Neubig, 2017).", "We have also explored the use of sub-word units learned with byte pair encoding (BPE) (Sennrich et al., 2016).", "All the preprocessing steps, hyperparameter values and training parameters are described in detail in the supplementary material to ease reproducibility of our results.", "We have evaluated these systems over three publicly-available datasets from the 2016 ACL Conference on Machine Translation (WMT16) 4 and the 2016 International Workshop on Spoken Language Translation (IWSLT16) 5.", "Table 1 lists the
datasets and their main features.", "Despite having nearly 90,000 parallel sentences, the eu-en dataset only contains 2,000 human-translated sentences; the others are translations of Wikipedia page titles and localization files.", "Therefore, we regard the eu-en dataset as very low-resource.", "In addition to the seq2seq baseline, we have compared our results with those recently reported by Denkowski and Neubig (2017) for non-ensemble models.", "For all models, we report the BLEU scores (Papineni et al., 2002), with the addition of selected comparative examples.", "Two contrastive experiments are also added in the supplementary notes.", "As a preliminary experiment, we have carried out a sensitivity analysis to determine the optimal value of the trade-off coefficient, $\lambda$ (Eq. 6), using the en-fr validation set.", "The results are shown in Figure 2, where each point is the average of three runs trained with different seeds.", "3 Our code can be found at: https://github.com/ijauregiCMCRC/ReWE_NMT", "4 WMT16: http://www.statmt.org/wmt16/", "5 IWSLT16: https://workshop2016.iwslt.org/", "Table 2: BLEU scores over the test sets (en-fr Word/BPE; cs-en Word/BPE; eu-en Word/BPE): (Denkowski and Neubig, 2017) 33.60/34.50; 21.00/22.60; --. + Dropout 34.5/34.70; 21.4/23.60; --. + Lexicon 33.9/34.80; 20.6/22.70; --. + Pre-translation N/A/34.90; N/A/23.80; --. + Bootstrapping 34.40/35.20; 21.60/23.60; --. Our baseline 34.16/34.09; 20.57/22.69; 12.14/17.17. Our baseline + ReWE (CEL) ($\lambda = 20$) 35.52/35.22; 21.83/23.60; 13.73/19.71.", "The figure shows that the MSE loss has slightly outperformed the baseline for small values of $\lambda$ ($\lambda < 1$), but the BLEU score has dropped drastically for larger values.", "Conversely, the CEL loss has increased steadily with $\lambda$, reaching 38.23 BLEU points for $\lambda = 20$, with a marked improvement of 1.53 points over the baseline.", "This result has been encouraging and therefore for the rest of the experiments we have used CEL as the $ReWE_{loss}$ and kept the value of $\lambda$ at 20.", "In Section 4.3, we further discuss the behavior of CEL and MSE.", "Table 2 reports the results of the main experiment for all datasets.", "The values of our experiments are for blind runs over the test sets, averaged over 10 independent runs with different seeds.", "The results show that adding ReWE has significantly improved the baseline in all cases, with an average of 1.46 BLEU points.", "In the case of the eu-en dataset, the improvement has reached 2.54 BLEU points.", "We have also run unpaired t-tests between our baseline and ReWE, and the differences have proved statistically significant (p-values < 0.05) in all cases.", "Using BPE has proved beneficial for the cs-en and eu-en pairs, but not for the en-fr pair.", "Table 3 (example source, eu-en): Hautatu Kontrol panela Programa lehenetsiak, eta aldatu bertan.", "We speculate that English and French may be closer to each other at word level and, therefore, less likely to benefit from the use of sub-word units.", "Conversely, Czech and Basque are morphologically very rich, justifying the improvements with BPE.", "Table 2 also shows that our model has outperformed almost all the state-of-the-art results reported in (Denkowski and Neubig, 2017) (dropout, lexicon bias, pre-translation, and bootstrapping), with the only exception of the pre-translation case for the cs-en pair with BPE.", "This shows that the proposed model is competitive with contemporary NMT techniques.", "To further explore the improvements obtained with ReWE, we have qualitatively compared several translations provided by the baseline and the baseline + ReWE (CEL), trained with identical seeds.", "Overall, we have noted a number of instances where ReWE has provided translations with more information from the source (higher adequacy).", "For reasons of space, we report only one example in Table 3, but more examples are available in the supplementary material.", "In the example, the baseline has chosen a generic word, program, while ReWE has been capable of correctly predicting Default Program and being specific about the object, it.", "To further explore the behaviour of the ReWE loss, Figure 3 plots the values of the NLL and ReWE (CEL) losses during training of our model over the en-fr training set.", "The natural values of the ReWE (CEL) loss (blue, dashed) are much lower than those of the NLL loss (red, +), and thus its contribution to the gradient is likely to be limited.", "However, when scaled up by a factor of $\lambda = 20$ (magenta), its influence on the gradient becomes more marked.", "Empirically, both the NLL and ReWE (CEL) losses decrease as the training progresses and the total loss (green) decreases.", "As shown in the results, this combined training objective has been able to lead to improved translation results.", "Conversely, the MSE loss has not exhibited a similarly smooth behaviour (supplementary material).", "Even when brought to scale with the NLL loss, it shows much larger fluctuations as the training progresses.", "In particular, it shows major increases at the re-starts of the optimizer for the simulated annealing that are not compensated for by the rest of the training.", "It is easy to speculate that the MSE loss is much more sensitive than the cosine distance to the changes in the weights caused by dropout and the re-starts.", "As such, it seems less suited for use as a training objective.", "In this paper, we have proposed a new regularization technique for NMT (ReWE) based on a joint", "learning setting in which a seq2seq model simultaneously learns to", "a) predict the next word in the translation and", "b) regress toward its word embedding.", "The results over three parallel corpora have shown that ReWE has consistently improved over both its baseline and recent state-of-the-art results from the literature.", "As future work, we plan to extend our experiments to better understand the potential of the proposed regularizer, in particular for unsupervised NMT (Artetxe et al., 2018; Lample et al., 2018).", "We would like to acknowledge the financial support received from the
Australian Government.", "We would also like to thank Ben Hachey, Michael Nolan and Nadia Shnier for their careful reading of our paper and their insightful comments.", "Finally, we are grateful to the anonymous reviewers for all their comments and suggestions." ]
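The ReWE objective described above — standard NLL plus a λ-weighted continuous loss that regresses the shared decoder state toward the gold word embedding — can be sketched in PyTorch as follows. This is a minimal illustration consistent with Eqs. 2-7, not the authors' code; tensor shapes and names are assumptions, and the two-layer ReLU head follows the paper's description of the ReWE module:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReWEHead(nn.Module):
    """Two linear layers with a ReLU in between, mapping the decoder
    state s_j to a vector e_j of word-embedding size (Eq. 5)."""
    def __init__(self, hidden_size, emb_size):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden_size, hidden_size),
                                 nn.ReLU(),
                                 nn.Linear(hidden_size, emb_size))

    def forward(self, s):
        return self.net(s)

def rewe_joint_loss(logits, s, gold_ids, emb_table, rewe_head, lam=20.0):
    """logits: (batch, V) pre-softmax scores; s: (batch, hidden) decoder states;
    gold_ids: (batch,) ground-truth token ids; emb_table: nn.Embedding holding
    the pre-trained vectors. Returns Loss = NLL + lambda * ReWE (CEL variant, Eq. 7)."""
    nll = F.cross_entropy(logits, gold_ids)        # equivalent to the NLL of Eq. 3
    e_pred = rewe_head(s)                          # predicted embedding e_j
    e_gold = emb_table(gold_ids).detach()          # ground-truth embedding e(y_j)
    target = torch.ones(gold_ids.size(0), device=gold_ids.device)
    cel = F.cosine_embedding_loss(e_pred, e_gold, target)
    return nll + lam * cel
# At inference time the ReWE head is simply ignored, as in the paper.
```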
[ "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "method", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Pre-trained language models derive substantial linguistic and factual knowledge from the massive corpora on which they are trained, and prompt engineering seeks to align these models to specific tasks.", "Unfortunately, existing prompt engineering methods require significant amounts of labeled data, access to model parameters, or both.", "We introduce a new method for selecting prompt templates without labeled examples and without direct access to the model .", "Specifically, over a set of candidate templates, we choose the template that maximizes the mutual information between the input and the corresponding model output.", "Across 8 datasets representing 7 distinct NLP tasks, we show that when a template has high mutual information, it also has high accuracy on the task.", "On the largest model, selecting prompts with our method gets 90% of the way from the average prompt accuracy to the best prompt accuracy and requires no ground truth labels.", "It is well-known that large pre-trained language models (LMs) learn substantial linguistic (Liu et al., 2019; Amrami and Goldberg, 2018) and factual world knowledge (Petroni et al., 2020; Bosselut et al.; Bouraoui et al.; Zuo et al., 2018), achieving state-of-the-art performance on classic NLP tasks like closed-book question-answering, sentiment analysis, and many other tasks (Radford et al., 2019; Devlin et al., 2019; Raffel et al., 2019).", "The largest models can do this in a few-shot waythat is, being trained only with generic, semi-supervised objectives and taught tasks with just instructions and a few examples of the task provided via a natural language prompt in the context window (Brown et al., 2020).", "This suggests that pre-training equips them to potentially do many tasks that can be formulated as natural language generation, if only they can be primed in the right way.", "Such priming is not a trivial task.", "The few-shot learning breakthrough can give the impression that if the LM is given a sensible prompt, it will under-stand what is meant and perform well on the task if it has the capacity.", "However, LMs can generate substantially different output distributionsand thus textgiven two distinct prompts that appear semantically invariant (e.g., alternative orderings, lexical changes like capitalization, and general rephrasing (Zhao et al., 2021; Lu et al., 2021)).", "This can lead to surprisingly high variance in performance from prompt to prompt.", "Clearly, some prompts are better than others for aligning a model to a task.", "Prompt engineering is a nascent field that aims to find aligning prompts (Reynolds and McDonell, 2021).", "While prompt refers to any language passed to the model via the context window, a template refers to a natural language scaffolding filled in with raw data, resulting in a prompt.", "Thus, prompt engineering includes finding high-quality templates (i.e., those with high test accuracy).", "Generally, this is done by optimizing for accuracy over 819 a validation set: a template is chosen from a candidate set based on its performance on labeled examples.", "Such labeled examples can be challenging to procure for some tasks and impossible for others.", "Some recent methods optimize prompts using backpropagation, which requires access to model weights.", "In this paper, we propose a new method for selecting prompts by using mutual information, which allows prediction of a prompt's performance without labels or access to model parameters.", "Mutual information (MI) is a metric that quantifies the 
shared information between two random variables (see Section 3.2).", "We demonstrate that the mutual information between a prompt and a language model's output can serve as a useful surrogate for the test accuracy of a template.", "Specifically, for eight popular datasets representing seven classic NLP tasks, we generate a diverse set of 20 templates for each and show that template mutual information and template accuracy are highly correlated.", "These results are strongest on the largest models we study, for which our method chooses prompts that, on average, get 90% of the way from mean accuracy to maximum accuracy and even selects the best prompt on three of eight datasets.", "This suggests that, across a variety of NLP tasks, mutual information can be used to select one of the best prompts from a set of candidate prompts, even without making use of model weights or ground truth labels.", "In the following pages, we outline each step of our general method for generating and evaluating templates so that it can easily be ported to any other task.", "Code is available online.", "1 2 Related Work The promise of language models and the challenge of aligning them has given rise to the field of prompt engineering, which seeks to construct the best prompt given a task and a language model (Liu et al., 2021a).", "The best performance on prompt engineering is often achieved using backpropaga-tion in continuous prompt embedding space (Lester et al., 2021; Li and Liang, 2021; Gu et al., 2021; Liu et al., 2021b; Zhang et al., 2021) in contrast to generating a discrete set of prompts by hand and testing them.", "While optimizing in continuous prompt space via backprop allows for similar performance to model-tuning (at least at higher model sizes) (Lester et al., 2021), not all models are publicly available.", "Thus, these methods are 1 github.com/BYU-PCCL/information-theoretic-prompts only feasible for those who have direct access to the model and can perform backprop on it.", "Prompts optimized in continuous space are also not interpretable in natural language, making it harder to transfer insights from prompts that work well for one task to another task.", "Additionally, these methods require labeled examples, while ours does not.", "Other selection protocols not based on gradient descent can include cross-validation or minimum description length, as in (Perez et al., 2021).", "These methods yield prompts that perform marginally better than average in terms of test accuracy.", "Mutual information has been used in n-gram clustering, part-of-speech tagging, probing classifiers, and LM training objective reframing (Brown et al., 1992; Stratos, 2019; Voita and Titov, 2020; Kong et al., 2019).", "Ours is the first work of which we are aware to apply MI to prompt engineering.", "(Lu et al., 2021) make use of entropy statistics to determine performant orderings for few-shot examples in prompts.", "Our work is focused on selecting high quality templates with no special focus on example ordering or need for multiple examples to order (the few-shot case).", "Our method uses no artificial probing set, making our prompt selection much cheaper, and we also explore open-ended tasks.", "While the GlobalE and LocalE statistics they use are similar (and in the case of LocalE identical) to the two parts of our MI calculation (see 3.2), we use the two statistics jointly and choose prompts by minimizing, rather than maximizing, LocalE.", "At the most abstract, our method is as follows (see Appendix A for a more 
thorough description):", "1. Generate a set of K prompt templatizing functions.", "2. Playground a couple of examples to ensure that templates give roughly expected output.", "3. Estimate mutual information for each template given a set of inputs x 1 , x 2 , ..., x N where x i X, i .", "4. Choose template(s) based on mutual information and perform inference.", "Section 3.1.", "We also justify our use of mutual information as a surrogate for prompt quality and specify how we estimate it in Section 3.2.", "In order to demonstrate our method's widespread applicability and general effectiveness, we validate it across many datasets and tasks.", "This requires us to estimate MI and accuracy, and this is most straightforward in the case where, given a context, a language model produces just one probability distribution P ( t n | context = t 1 , t 2 , ..., t n 1 ) .", "This is in contrast to other experimental setups that use multi-token sampling methods (e.g., beam search), although our method is easily tractable in such setups.", "2 Any NLP task is tractable in this framework so long as the output space consists of a set of options that each start with a unique token.", "In this case, the language model can give an answer by assigning probability to tokens that begin giving each of these answers (invariant to lexical variation like capitalization and leading/trailing spaces).", "While, for open-ended tasks, this method might artificially inflate accuracy if the model starts to 2 The only difference: For each considered answer, simply calculate its unnormalized probability by multiplying the probabilities of the decisions taken at each branch in the sequence of tokens, then normalize the resulting probability scores.", "give a wrong answer that happens to start with the same token as the correct one, we find that this difference is small and does not affect our results.", "3 Irrelevant tokens (with which none of the desired answers begin) are ignored, and the resulting collapsed probabilities are normalized.", "We term this approach One-token Response (OTR).", "Although our method isn't limited to OTR tasks, we choose tasks that can be cast as OTR tasks for simplicity and to reduce computational expense.", "Many NLP tasks fit within this framework, although a few do not (e.g., machine translation and summarization).", "This basic approach is in common use (Brown et al., 2020), but we formalize it for clarity below.", "Generally, the OTR framework casts a natural language task as a classification problem with raw data input x i X and output P ( Y | x i ) , a probability distribution over targets.", "In order to use a language model for this task, a templatizing function f : X L is needed to map raw data 3 Our open-ended datasets are SQuAD, LAMBADA, and ROCStories, and none of these seemed more likely than ROCStories to exhibit this issue.", "We reran our experiment on ROCStories by sampling with temperature 0 until reaching a space, and only counted responses as accurate if they exactly matched the corresponding ground truth labels.", "Results were virtually unchanged: accuracy decreased by only 0.03 on average, and the correlation between mutual information and test accuracy increased by 0.04, from 0.68 to 0.72.", "into natural language prompts.", "g : L T maps prompts to a probability distribution over T , the token set represented by the model tokenizer.", "Finally, a collapsing function c : T P ( Y | x , , ) (see Appendix A) yields an estimate of P ( Y | X ) : P ( Y | x , , ) = c ( g ( f ( x 
))), $\forall x \in X$ (1)", "We also refer to $P(Y \mid x, f, c)$ as $P(Y \mid f(x))$.", "The above pipeline can be specified in many ways using different $f$ and $c$ (see Figure 2), which will result in different accuracies.", "Our ultimate aim is to select the best $f$ given $g$.", "Whereas past prompt engineering methods rely on scores calculated by comparing model answers and ground truth, our method selects $f$ by maximizing mutual information, which requires no ground truth labels.", "Mutual information is a measure of the amount of shared information between two random variables (Cover and Thomas, 2006); in other words, it is the reduction in entropy that is observed in one random variable when the other random variable is known.", "We expect MI to serve as a good criterion for comparing prompts.", "Previous work has shown that large networks trained with cross-entropy loss are calibrated (e.g., a 60% confidence corresponds to a 60% chance of the model being correct) when in the early-stopped ($\leq 1$ epoch) regime (Ji et al., 2021), but become miscalibrated in the overfit regime (Nakkiran and Bansal, 2020).", "According to (Brown et al., 2020), GPT-3 was trained for a different number of epochs on each corpus in its training data.", "We calculate it was trained for an average of 1.57 epochs, so we have reason to believe that GPT-3 is generally well-calibrated.", "Thus, we postulate that a prompt that elicits a very confident response (high MI) from the language model is more likely than a less confident prompt to score well.", "We denote the mutual information between random variables $X$ and $Y$ as $I(X;Y)$ and the entropy of $X$ as $H(X) = -\int_{x \in X} P(x) \log(P(x)) \, dx$.", "The mutual information between $X$ and $Y$ is defined as $D_{KL}(P(X,Y) \,\|\, P_X \otimes P_Y)$, and can be rewritten as $H(Y) - H(Y \mid X)$ (the reduction in entropy in $Y$ given knowledge of $X$).", "Using the OTR framework, we fix a model and generate a diverse set of $K$ prompt templatizing functions $f_1, f_2, \ldots, f_K$ along with their corresponding collapsing functions $c_k$ (see Appendix A).", "Treating $f(X) := \{ f(x), x \in X \}$ as a random variable, we can calculate $I(f(X); Y)$ and use it as a criterion for selecting prompt templatizing functions with which to do inference.", "We hypothesize that an $f_i$ with higher mutual information will align a language model to a task better than an $f_j$ with lower mutual information.", "Formally, we select $f^{*} = \operatorname{argmax}_{f} \{ I(f(X); Y) \}$.", "Mutual information is estimated as: $I(f(X); Y) = H(Y) - H(Y \mid f(X))$ (2) where each term is estimated in expectation using draws $x_i \in X$ and Equation 1 as follows: $H(Y) \approx H\left( \frac{1}{N} \sum_{i=1}^{N} P(Y \mid f(x_i)) \right)$ (3) $H(Y \mid f(X)) \approx \frac{1}{N} \sum_{i=1}^{N} H(P(Y \mid f(x_i)))$ (4)", "The marginal entropy $H(Y)$ is the entropy of the mean of the conditional distributions, and the conditional entropy $H(Y \mid f(X))$ is the mean of the entropies of the individual conditional distributions.", "This definition gives us another reason to expect that mutual information will work well.", "Since mutual information is the marginal entropy minus the conditional entropy, maximizing mutual information is equivalent to maximizing marginal entropy and minimizing conditional entropy.", "Thus, MI is high for templates that are, on average, less biased towards any given answer (high marginal entropy) and templates with outputs the model is confident about (low conditional entropy).", "These attributes are desirable in constructing prompts, and we postulate that maximizing mutual
information will yield a well-aligned template.", "Looking at it another way, by the data processing inequality (Cover and Thomas, 2006), I ( f ( X ); Y ) I ( X ; Y ) .", "Thus, I ( f ( X ); Y ) gives a lower bound for I ( X ; Y ) , and the highest mutual information is the tightest lower bound.", "The prompt corresponding to this lower bound preserves the most information between X and Y .", "We validate the efficacy of our prompt engineering method with experiments on eight well-known NLP datasets 4 SQuAD2.0 (Rajpurkar et al., 2018), LAMBADA (Paperno et al., 2016), ROCStories", "(Mostafazadeh et al., 2016), CommonsenseQA (CoQA) (Talmor et al., 2018), IMDB (Maas et al., 2011), BoolQ (Clark et al., 2019), COPA (Gor-don et al., 2012), and WiC (Pilehvar and Camacho-Collados, 2018))that span seven unique NLP tasks (see Table 1).", "We used a random sample of N = 500 samples from each dataset for our experiments.", "5 For ROCStories, which consists of a set of five sentence stories, we randomly masked a word from each story in order to use the data for masked word prediction (cloze).", "We made minor changes to two of the datasets in 5 We sampled from the train sets of CoQA and SQuAD; the train and validation sets of WIC, COPA, and BoolQ; the full datasets of ROCStories and IMDB; and the test set for LAMBADA.", "order to cast the associated tasks into OTR.", "For the SQuAD dataset, we dropped all questions that did not have a one word answer.", "For the CoQA dataset we dropped all questions with answer choices that started with a shared first word (e.g, the dog, the cat, the monkey).", "Both changes were to decrease ambiguity about which option the model was choosing given its output distribution for a single token.", "We assess our method on eight models ranging from 124 million to 175 billion parameters : These include GPT-2 124M & 1.5B (Radford et al., 2019), GPT-Neo 2.7B (Black et al., 2021), GPT-J (6B) (Wang and Komatsuzaki, 2021), and (Ada, Babbage, Curie, & Davinci) GPT-3 (Brown et al., 2020).", "We assume (per (Perez et al., 2021)) these models to correspond, respectively, to the 2.7B, 6.7B, 13B, and 175B models in (Brown et al., 2020).", "Each is a causal language model, and although we do not include masked language models, this is a promising area for future work.", "In this section, we analyze our experiments.", "First, we look at our method's ability to select high-accuracy prompts across models and datasets (Sec-tion 5.1).", "Next, we correlate template mutual information and accuracy in Section 5.2.", "After that, we compare our method and template selection using labeled examples in Section 5.3.", "In Section 5.4, we 823 GPT3 : 175 BGPT3 : 13 BGPT3 : 6 .", "explore the robustness of MI and use ensembling to improve it.", "Finally, we compare the tranferability of prompt templates selected with MI from model to model in Section 5.5.", "We first define baselines against which we compare our approach.", "Other prompt engineering methods generally require either access to model weights, labeled data (validation set selection), or both (back-prop/continuous prompt embedding methods).", "Our method does not require these, so we instead compare to random and oracle baselines.", "A random template selection method would give us the average accuracy of our template set (in expectation), while an oracle selection method would give us the best accuracy every time.", "To understand how our MI method compares to these two baselines for each dataset, refer to Figure 1, where we analyze 
performance on GPT-3 175B.", "On each of the eight datasets, mutual information selects a prompt template that outperforms both the mean and median accuracies (random baseline performance).", "In three of the eight datasets, mutual information selects the best (highest accuracy) template from the 20 proposed (equivalent to oracle performance).", "Given our method's promising performance with GPT-3 175B, it is natural to ask how it performs with smaller models.", "Figure 3 shows the accuracy distributions over prompt templates for each dataset/model pair.", "With every model, MI gives above-average performance on several datasets.", "Figure 5: Each dot represents a template and its average mutual information and accuracy over N = 500 task instances (mutual information in nats vs. accuracy with GPT-3 175B).", "Although MI is more likely to select a high accuracy template for larger models, it is a good criterion even for smaller models on all but two datasets, COPA and WiC.", "Note that, for these two datasets, none of the templates do significantly better than chance (~50%) besides the largest model on COPA, which is in line with previous work. 6", "Thus, we observe that mutual information performs best when there is a high-signal prompt to select, and worse when all prompts are low-signal.", "When considering all other datasets, MI selects an above average prompt 83% of the time for all models; for the largest two models, MI selects an above average template 100% of the time.", "In Section 5.1, we see how the mutual information selected template does in terms of accuracy compared to all other templates.", "We have not discussed, however, how generally MI and accuracy are correlated, except that the highest MI template tends to have anomalously high accuracy.", "6 Our template's best accuracy is 54% for WiC, and 78.2% for COPA, which is similar to previous work (WiC: (Brown et al., 2020) 49.4%, (Perez et al., 2021) 54.1%; COPA: (Brown et al., 2020) 92.0%, (Perez et al., 2021) 84.8%).", "Here, we establish that their correlation is high across all templates for the largest LMs.", "Each of the K = 20 templates has two corresponding measures: average accuracy and average MI.", "We can use these pairs to correlate MI and accuracy via Pearson's R. We see in Figure 4 that the correlations are surprisingly high for the majority of models and datasets.", "For SQuAD, LAMBADA, ROCStories, and CoQA, this pattern holds across all model sizes; for the remainder, results are good on larger models and are much less reliable on smaller models.", "Overall, this is evidence that as mutual information increases, so does accuracy.", "In other words, mutual information can be used to make an educated guess about accuracy without having to use any ground truth labels, especially on larger models.", "Next, we ask: How does our method compare to selecting a template based on the accuracy of a few labeled examples?", "Also, how many unlabeled examples does MI need to be able to perform well?", "Results with the largest model are reported in Figure 6.
Note that with as few as N = 2 instances, MI selects a far better than average template, allowing performance gains even in the low-data, unlabeled regime.", "Additionally, for low N and across all eight datasets, MI even selects a better template on average than selecting based on labeled train set accuracy.", "This suggests that, even with labeled examples, selecting based on MI may be preferable to selecting based on test accuracy with few examples.", "Selecting by labeled train set accuracy often begins to perform better at higher N, but at the cost of requiring labeled data, while our method needs no labels.", "To explore our method's robustness, we consider the question: what if we had included a different subset of templates, especially not including the top MI template?", "Figure 5 shows average MI/accuracy data for all K = 20 prompt templates on GPT-3 175B (similar plots for other models are found in Appendix B.1).", "For six of eight datasets, the results are robust; the top few prompt templates (by MI) are all high performers.", "The performance for COPA and WiC is more brittle; excluding the top-MI template would have resulted in a large drop in accuracy.", "This attests to the utility of generating a diverse slate of templates as recommended in Appendix A and also to the risk that outliers could compromise our method's effectiveness.", "Guarding against such outliers is an important concern.", "Considering the strength of MI/accuracy correlations, one simple approach is to ensemble the top 5 MI templates.", "To compare this principled top-5 ensemble to other possible ensembles of templates, we take all $\binom{20}{5}$ subsets of 5 templates from all 20 templates and calculate the accuracy of each ensemble.", "For each dataset, we plot this distribution's kernel density estimate, which models the p.d.f. of the random variable accuracy of 5 random templates ensembled together.", "We then compare the top-5 MI ensemble to other possible ensembles.", "The results are shown in Figure 7. We found that the top-5 MI ensemble does at least as well as the top-20 ensemble in all but one case.", "Two reasons to use MI are, then, that 1) the MI ensemble gets as good or better a result than ensembling all prompt templates and 2) at a fourth of the experimental cost.", "In short, ensembling by MI is a cheap and effective way to guard against anomalous high-MI/low-accuracy templates.", "Finally, we explore how well-chosen templates generalize between models.", "Concretely, we choose templates by maximizing either test accuracy (oracle) or mutual information (our method) using a selection model s, and then calculate test accuracy using a different inference model i.", "We calculate absolute test accuracy and then normalize it such that 0 and 100 correspond to the average and maximum scores across templates for a model/dataset pair.", "We average our results across datasets and present the results in Figure 8.
Prompt transfer for each dataset can be found in Appendix B.2.", "MI performance is best when the largest model (GPT-3 175B) is used as both the selection and inference model: on average, MI scores 90% on this normalized scale.", "Additionally, performance is most consistently high when the largest models are used either for selection or inference.", "But almost all transfer scores are well above 0 (only one negative average gain out of 64 transfer permutations), suggesting that transfer is often effective.", "Overall, we have observed that prompt selection by mutual information is surprisingly effective across a variety of datasets and model sizes.", "This method works best on larger models and for tasks that the LM is capable of performing.", "Given the high diversity of tasks that we have explored, we expect this method to transfer well to many other NLP tasks, including regimes with little labeled data.", "In this paper, we introduce a method for selecting prompts that effectively align language models to NLP tasks.", "Over a set of candidate prompts, our method selects the template that maximizes the mutual information between the input and the model output.", "We demonstrate that 1) mutual information is highly correlated with test accuracy and 2) selecting a prompt based on mutual information leads to significant accuracy gains over random choice, approaching oracle performance on GPT-3 175B, and it does so across model sizes and tasks.", "Whereas other methods rely on ground truth labels and/or direct model access, ours requires neither.", "Many applications characterized by lack of computational resources, limited model access (e.g., inference only), and lack of ground truth data prohibiting testing of candidate prompts become feasible with our method.", "There are many ways to prompt a language model poorly, and there still seem to be NLP tasks which are beyond alignment regardless of model size or prompt quality.", "This method cannot align an LM to a task if the entire set of prompts is poor or, obviously, if the model cannot be aligned.", "High mutual information does not necessarily imply high accuracy despite the strong correlation we found.", "Thus, our method should only be employed on a task if there is some understanding of how high MI needs to be on a domain or set of templates to imply a sufficiently high accuracy for safe use.", "Otherwise, we introduce no model, dataset, or other contribution that might warrant ethical concern.", "We thank the anonymous reviewers for their helpful feedback.", "This material is based upon work supported by the National Science Foundation under Grant No.", "RI 2141680." ]
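Once the per-input answer distributions P(Y | f(x_i)) from Eq. 1 are collected, the MI estimate of Eqs. 2-4 above is a few lines of array arithmetic. A minimal NumPy sketch (the (N, |Y|) matrix layout is our assumption):

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a distribution, or of each row of a batch."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def mutual_information(probs):
    """probs: (N, |Y|) array; row i is P(Y | f(x_i)).
    Returns I(f(X); Y) = H(Y) - H(Y | f(X)), per Eqs. 2-4:
    marginal entropy of the mean distribution minus mean conditional entropy."""
    h_marginal = entropy(probs.mean(axis=0))   # Eq. 3
    h_conditional = entropy(probs).mean()      # Eq. 4
    return h_marginal - h_conditional

# Template selection: compute MI for each candidate template's probs
# matrix and keep the argmax -- no ground-truth labels are needed.
```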
[ "abstain", "abstain", "objective", "method", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "objective", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other" ]
[ "Detecting out-of-domain (OOD) intents is crucial for the deployed task-oriented dialogue system.", "Previous unsupervised OOD detection methods only extract discriminative features of different in-domain intents while supervised counterparts can directly distinguish OOD and in-domain intents but require extensive labeled OOD data.", "To combine the benefits of both types, we propose a self-supervised contrastive learning framework to model discriminative semantic features of both in-domain intents and OOD intents from unlabeled data.", "Besides, we introduce an adversarial augmentation neural module to improve the efficiency and robustness of contrastive learning.", "Experiments on two public benchmark datasets show that our method can consistently outperform the baselines with a statistically significant margin.", "1 1 Introduction Task-oriented dialog systems (Sarikaya, 2017; Akasaki and Kaji, 2017; Gnewuch et al., 2017; Shum et al., 2018; Tulshan and Dhage, 2018) such as Google's DialogFlow or Amazon's Lex have become ubiquitous to make people interact with machines using natural language.", "In the architecture of a dialogue system, detecting unknown or OOD (Out-of-Domain) intents from user queries is an essential component that aims to know when a user query falls outside their range of predefined supported intents.", "Different from traditional intent detection tasks, we do not know the exact number of unknown intents in practical scenarios and can barely annotate extensive OOD samples.", "Lack of real OOD examples always leads to poor prior knowledge about these unknown intents, making it Weiran Xu is the corresponding author.", "Previous methods of detecting OOD intents can be generally classified into two types: unsupervised and supervised OOD detection.", "Unsupervised OOD detection (Breunig et al., 2000; Bendale and Boult, 2016; Hendrycks and Gimpel, 2017; Shu et al., 2017; Lee et al., 2018; Ren et al., 2019a; Lin and Xu, 2019; Snell et al., 2017; Finn et al., 2017; Xu et al., 2020) means no labeled OOD samples except for labeled in-domain data.", "By contrast, supervised OOD detection (Scheirer et al., 2013; Fei and Liu, 2016; Kim and Kim, 2018; Larson et al., 2019; He et al., 2020b; Zheng et al., 2020) represents that there are extensive labeled OOD samples in the training data.", "Most of unsupervised OOD detection methods follow a two-stage framework: training and detecting.", "They first train an in-domain intent classifier to extract intent representations, then detect whether the test query belongs to OOD by estimating its probability density.", "For example, Hendrycks and Gimpel (2017); Shu et al. (2017) simply use a threshold on the in-domain classifier's probability estimate.", "Lin and Xu (2019) employs an unsupervised density-based novelty detection algorithm, local outlier factor (LOF) to detect unseen intents.", "However, such neural models can only extract discriminative features of different in-domain intents since they are trained on the in-domain data without access to OOD data.", "Therefore, these methods are known to produce highly overconfident posterior distributions even for such abnormal OOD samples (Guo et al., 2017; Liang et al., 2017, 2018).", "For supervised OOD detection, classical methods such as (Fei and Liu, 2016; Larson et al., 2019), form a ( N + 1) -class classification problem where the ( N + 1) -th class represents the unseen intents.", "Further, Zheng et al. 
"[Figure 1: overall framework: a BiLSTM encoder over embedded user utterances, trained with a cross-entropy loss on labeled in-domain data and with contrastive and adversarial contrastive losses (back-translation augmentation plus an FGV attack) on unlabeled text, followed by LOF/GDA OOD detection.]", "However, collecting large-scale labeled OOD data is usually difficult and expensive.", "These drawbacks limit the broad application of supervised OOD detection.", "In this paper, we aim to capitalize on the benefits of both self-supervised and supervised OOD detection: (1) simultaneously modeling semantic features of both in-domain and OOD data; (2) inducing no labor-intensive OOD annotation.", "To this end, we propose a self-supervised contrastive learning framework to model discriminative semantic features of both in-domain intents and OOD intents from unlabeled data.", "Without access to labeled OOD data, our method aims to learn representations that discriminate between all unlabeled intents at the instance level.", "When combined with supervised in-domain training, our method learns features that are both rich and semantically discriminative.", "Besides, to replace stochastic data augmentation mechanisms like the random cropping and random color distortions used in the image processing field (Chen et al., 2020a), we propose an adversarial augmentation neural module to improve the diversity and complexity of pre-defined transformation functions.", "Specifically, we compute model-agnostic adversarial worst-case perturbations to the inputs in the direction that significantly increases the original contrastive loss.", "Intuitively, adversarial learning can generate pseudo hard positive pairs and thus improve the efficiency and robustness of contrastive learning.", "Our contributions are three-fold: (1) We propose a self-supervised learning framework to simultaneously model semantic features of both in-domain and OOD data.", "(2) We apply an adversarial augmentation mechanism to improve the efficiency and robustness of self-supervised learning.", "(3) Experiments conducted on two benchmark OOD datasets show the effectiveness of our proposed method.", "Overall Architecture Fig. 1(a) shows the overall architecture of our proposed two-stage framework.", "We first train an in-domain intent classifier to extract intent representations using two objectives, then use the detection algorithms MSP (Hendrycks and Gimpel, 2017), LOF (Lin and Xu, 2019), or GDA (Xu et al., 2020) to detect OOD.", "In the training stage, we first train a BiLSTM in-domain intent classifier similar to Lin and Xu (2019) using labeled in-domain data.", "Then we apply an adversarial contrastive objective to continue training on the unlabeled data.", "Self-Supervised Contrastive Learning To simultaneously model semantic features of both in-domain and OOD data, we propose a self-supervised contrastive learning framework to utilize unlabeled data.", "Following (Chen et al., 2020a; He et al., 2020a; Chen et al., 2020b; Winkens et al., 2020; Jiang et al., 2020), we formulate the contrastive loss for a positive pair of examples $(i, j)$ as: $\ell_{i,j} = -\log \frac{\exp(\mathrm{sim}(z_i, z_j)/\tau)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp(\mathrm{sim}(z_i, z_k)/\tau)}$ (1), where $z_i$ represents the feature vector of the $i$-th sentence sample, extracted by concatenating the first and final hidden states of the BiLSTM, $\mathbb{1}_{[k \neq i]} \in \{0, 1\}$ is an indicator function evaluating to 1 iff $k \neq i$, and $\tau$ denotes a temperature parameter.",
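To make Eq. (1) concrete, here is a minimal PyTorch sketch of this contrastive loss over a batch of N positive pairs. It is an illustration only, not the authors' code: sim(·,·) is assumed to be cosine similarity (hence the normalization), and the BiLSTM encoder producing the features is left abstract.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, tau=0.5):
    # z1, z2: (N, d) features of the two views of each of N examples
    # (here z would be the concatenated BiLSTM hidden states described above)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d); assumes cosine sim
    logits = z @ z.t() / tau                            # sim(z_i, z_k) / tau
    logits.fill_diagonal_(float('-inf'))                # the 1[k != i] mask
    n = z1.size(0)
    # the positive of row i is row i+N, and vice versa
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(logits, targets)             # mean -log softmax at positives
```

Averaging the cross-entropy over all 2N rows corresponds to computing the loss across both (i, j) and (j, i), as the text describes next.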
k (cid:54) = i .", "denotes a temperature parameter.", "The final loss is computed across all positive pairs, both ( i, j ) and ( j, i ) in a mini-batch of N examples.", "Here we use back-translation as data augmentation to generate positive pairs.", "Previous work (Chen et al., 2020a) has shown the necessity of more data augmentations, thus we propose an adversarial neural augmentation as follows.", "Adversarial Neural Augmentation To improve the diversity of data augmentation and avoid handcrafted engineering, we apply adversarial attack (Goodfellow et al., 2015; Kurakin et al., 2016; Miyato et al., 2016; Jia and Liang, 2017; Zhang et al., 2019; Ren et al., 2019b) to generate pseudo positive samples.", "It should be noted that samples obtained by adversarial attack is in the form of embedding to ensure end-to-end training.", "Specifically, we need to compute the worst-case perturbation that maximizes the original contrastive loss L : = arg max (cid:107) (cid:48) (cid:107) (cid:15) L (cid:0) , x + (cid:48) (cid:1) , where represents the parameters of a model and x denotes a given sample.", "(cid:15) is the norm bound of the perturbation .", "In practical implementation, we apply Fast Gradient Value (FGV) (Rozsa et al., 2016) to approximate the perturbation : = (cid:15) g || g || ; where g = ( x i , x j ) L ( f ( x i , x j ; )) (2) where ( x i , x j ) represents the original positive pair generated by back-translation.", "We perform normalization to g and then use a small (cid:15) to ensure the approximate is reasonable.", "Finally, we can obtain the pseudo adversarial sample x advi = x i + as well as x advj .", "Therefore, we get ( x i , x j , x advi , x advj ) from the original positive pair ( x i , x j ) .", "We implement four different contrastive settings: (1) Standard-to-Standard (S2S): the original contrastive loss using ( x i , x j ) ; (2) Adversarial-to-Adversarial (A2A): the adversarial contrastive loss using ( x advi , x advj ) ; (3) Standard-to-Adversarial (S2A): the mixed contrastive loss using ( x i , x advi ) or ( x j , x advj ) ; (4) Dual Stream (DS): combining S2S and A2A as Fig", "1(c) shows.", "Experiment 3.4 shows that the last setting works best.", "We argue that DS capture better feature alignment in the latent space.", "2 Besides, we find only applying the contrastive loss leads to the worse in-domain intent detection metrics, therefore 2 We leave the comprehensive theoretical analysis to future work.", "we mix up the two kinds of objectives during training to avoid catastrophic forgetting (Kirkpatrick et al., 2017).", "We present an Algorithm section in the appendix.", "Datasets We perform experiments on two variants of the OOD benchmark dataset CLINC 3 (Larson et al., 2019), namely CLINC-OOS+ and CLINC-Small.", "Table 1 shows the detailed statistics of two datasets.", "They both contain 150 in-domain intents across 10 domains where CLINC-OOS+ contains 100 samples for each intent and CLINC-Small has 50 training samples for each intent.", "Besides, CLINC-OOS+ has 250 OOD examples in training set, while CLINC-Small contains 100.", "To construct the unlabeled data, we mix up 10% of in-domain data and all of the OOD data in the training set.", "The total amount of unlabeled data is equal to 1500 in CLINC-OOS+ and 750 in CLINC-Small, where the number of OOD data is 250 and 100, respectively.", "Note that during the self-supervised learning phase, we don't utilize label information of the unlabeled data and only perform contrastive learning at the instance-level.", 
"During the supervised learning phase, we use the other in-domain training data for cross-entropy loss.", "Metrics We report both in-domain metrics: Ac-curacy(ACC) and F1-score(F1), and OOD metrics: Recall and F1-score(F1).", "OOD Recall and F1-score are the main metrics in this paper.", "We compare our proposed self-supervised methods to two types of OOD detection methods, which are supervised and fully unsupervised.", "The former applies a supervised OOD entropy regularization.", "We use this setting as the reference upper bound for OOD detection results.", "The latter represents that we train the sentence feature extractor using only in-domain data.", "We treat this setting as the 3 https://github.com/clinc/oos-eval CLINC-OOS+ CLINC-Small Model in-domain OOD in-domain OOD ACC F1 Recall F1 ACC F1 Recall F1 N+1 88.6 91.46 19.12 32.00 85.23 88.58 17.46 29.99 Supervised Entropy+MSP(oracle) 87.38 85.71 44.82 57.48 84.52 84.07 27.23 36.81 OOD Entropy+LOF(oracle) 84.08 85.12 60.44 61.89 82.16 82.83 60.72 61.39 Entropy+GDA(oracle) 86.53 87.57 70.20 71.22 84.56 84.68 66.98 67.07 MSP 83.61 84.05 24.28 36.57 81.84 82.20 19.12 29.79 MSP+S2S(w/o adv) 84.11 84.93 37.36 45.52 83.98 83.65 22.40 33.06 MSP+DS(ours) 84.85 84.91 41.76 * 47.62 * 83.93 83.21 25.62 * 34.82 * Self-Supervised LOF 84.20 85.08 57.40 58.78 82.22 82.73 57.20 58.10 OOD LOF+S2S(w/o adv) 85.62 85.99 59.12 59.41 82.84 83.67 57.92 59.04 LOF+DS(ours) 85.87 86.06 59.96 * 61.20 * 82.89 83.85 59.68 * 60.77 * GDA 86.34 87.73 63.70 65.23 84.24 84.30 60.40 61.07 GDA+S2S(w/o adv) 88.56 88.10 64.92 67.22 85.76 86.20 62.80 64.20 GDA+DS(ours) 88.71 88.98 67.24 * 69.17 * 85.78 86.69 64.52 * 65.55 * Table 2: Performance comparison between our method and baselines on CLINIC-OSS+ and CLINIC-Small datasets.", "reference lower bound.", "For each training method, we use different OOD detection models to verify its performance.", "Therefore, the model proposed in this paper can be divided into two stages.", "Firstly, the feature extractor training is completed in the training stage, and then the OOD detection is conducted by using different models in detection stage.", "Training Stage On the basis of fully unsupervised setting, our proposed four types of adversarial self-supervised learning settings are added, respectively.", "Standard-to-Standard (S2S): Original setting.", "The contrastive loss is computed between origin and augmented data.", "The adversarial attack is not involved.", "Adversarial-to-Adversarial (A2A): The setting injecting two adversarial attacks to origin data and augmented data first, then compute contrastive loss between them.", "Standard-to-Adversarial (S2A): This setting divide contrastive loss into two parts.", "One uses origin data with adversarial attack and augmented data, the other uses augmented data with adversarial attack and origin data.", "Dual Stream (DS): The setting combining S2S and A2A.", "The contrastive loss contains two parts.", "One uses origin data and augmented data, the other uses corresponding data with adversarial attacks.", "Detection Stage As mentioned above, we compare three OOD detection models: MSP (Maxi-mum Softmax Probability)(Hendrycks and Gimpel, 2017) applies a threshold on the maximum softmax probability where the threshold is set as 0.5.", "LOF (Local Outlier Factor)(Lin and Xu, 2019) uses the local outlier factor to detect unknown intents.", "GDA (Gaussian Discriminant Analysis)(Xu et al., 2020) is a generative distance-based classifier for out-of-domain detection with Euclidean and Mahalanobis 
distances.", "In this paper, the experiments and analysis are mainly conducted around the training stage.", "Different detection models are used to verify the generalization of our proposed method.", "Table 2 displays the experiment results.", "Our method consistently outperforms all the unsupervised baselines in all settings, even close to the supervised oracles.", "Under the GDA setting, our proposed method outperforms the unsupervised method by 3.94%(OOD F1), 3.54%(OOD Recall) in CLINC-OOS+ and 4.48%(OOD F1), 4.12%(OOD Recall) in CLINC-Small.", "We also observe similar improvements on the MSP and LOF settings.", "The results confirm the effectiveness of our self-supervised learning method.", "Considering the effect of adversarial augmentation, our GDA+DS outperforms the standard contrastive learning (GDA+S2S(w/o adv)) by 1.95%(OOD F1), 2.32%(OOD Recall) in CLINC-OOS+ and 1.35%(OOD F1), 1.72%(OOD Recall) in CLINC-Small.", "The results demonstrate that adversarial attack can improve the efficiency and robustness of contrastive learning.", "For in-domain ACC and F1, our method also achieves slightly better performance, even close to N+1 which suffers from a severe drop in OOD metrics for unbalanced data.", "Effect of Unlabeled Data Size.", "Fig 2 shows the effect of different sizes of unlabeled data for contrastive learning.", "We extract each subsets of the total CLINC-OOS+ unlabeled dataset through random sampling, so that the expectation of OOD 0 300 600 900 1200 1500 Number of sampled unlabeled data 60 62 64 66 68 70 OODF 1 s c o r e ( m a c r o ) Upper Bound of GDA Lower Bound of GDA Upper Bound of LOF Lower Bound of LOFGDA LOF Figure 2: Relation between unlabeled data size and OOD detection F1-score.", "proportion in every subset is close to the full set (16.67%).", "We choose LOF and GDA for comparison.", "The lower bound and upper bound respectively represent unsupervised and supervised OOD.", "Our method achieves superior performance along with the increase of unlabeled data under two settings.", "It confirms that our proposed method can learn rich and semantically discriminative features via unlabeled data to facilitate OOD detection.", "Fig 3 shows the relative increment of the F1-score during the uniform increase of unlabeled data.", "Specifically, the difference between the current F1-score and the previous state F1-score is recorded for every 300 samples added.", "As the amount of data increases uniformly, the extent of increment of OOD F1-score decrease.", "It confirms that our proposed method can optimize the performance of OOD detection by taking full advantage of unlabeled data and achieve impressive performance with only a small amount of data.", "Generally, our proposed methods have strong robustness and generalization capability.", "Analysis of Norm of Adversarial Perturbation.", "Fig 4 displays the effect of norm (cid:15) of adversarial noise.", "(cid:15) controls the range of adversarial perturbation .", "In both LOF and GDA, (cid:15) (1 . 0 , 1 . 
"Both smaller and larger values impair the capability of contrastive learning.", "We argue that small noise cannot improve the complexity of the augmentation, while large noise may hurt the alignment of positive example pairs.", "In this paper, we focus on combining the benefits of both unsupervised and supervised OOD detection: simultaneously modeling semantic features of both in-domain and OOD data without requiring labor-intensive OOD annotation.", "We propose a self-supervised contrastive learning framework to learn rich and semantically discriminative representations from unlabeled data.", "Besides, we propose an adaptive end-to-end adversarial augmentation neural module to improve the diversity and complexity of pre-defined transformation functions.", "Experiments show that our method achieves better performance than unsupervised OOD baselines, even close to supervised OOD oracles.", "Task-oriented dialog systems have demonstrated remarkable performance across a wide range of applications, with the promise of a significant positive impact on how people work and live.", "However, in scenarios where information is complex and rapidly changing, models usually face input that is meaningfully different from typical examples encountered during training.", "Current models are prone to make unfounded but overconfident predictions on these inputs, which may affect human judgment and thus impair the safety of models in practical applications.", "In domains with the greatest potential for societal impact, such as navigation or medical diagnosis, models should be able to detect potentially unknown OOD inputs and be robust to high-entropy inputs to avoid catastrophic errors.", "This work proposes a new adversarial self-supervised learning method for OOD detection.", "The overall robustness of the model is significantly improved by making full use of unlabeled data with potential threats through contrastive learning and adversarial attacks, which takes a step towards the ultimate goal of enabling the safe real-world deployment of task-oriented dialog systems in safety-critical domains.", "The experimental results are reported on standard benchmark datasets for considerations of reproducible research.", "This work was partially supported by National Key R&D Program of China No. 2019YFF0303300 and Subject II No. 2019YFF0303302, DOCOMO Beijing Communications Laboratories Co., Ltd, and the MoE-CMCC \"Artificial Intelligence\" Project No. MCM20190701." ]
[ "abstain", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "abstain", "method", "abstain", "objective", "result", "objective", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "result", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "Human evaluations are typically considered the gold standard in natural language generation, but as models' fluency improves, how well can evaluators detect and judge machine-generated text?", "We run a study assessing non-experts' ability to distinguish between human-and machine-authored text (GPT2 and GPT3) in three domains (stories, news articles, and recipes).", "We find that, without training, evaluators distinguished between GPT3and human-authored text at random chance level.", "We explore three approaches for quickly training evaluators to better identify GPT3-authored text (detailed instructions, annotated examples, and paired examples) and find that while evaluators' accuracy improved up to 55%, it did not significantly improve across the three domains.", "Given the inconsistent results across text domains and the often contradictory reasons evaluators gave for their judgments, we examine the role untrained human evaluations play in NLG evaluation and provide recommendations to NLG researchers for improving human evaluations of text generated from state-of-the-art models.", "Human-quality text has long been a holy grail for the output of natural language generation (NLG) systems, serving as an upper bound on their performance.", "Since we lack a good way of encoding many aspects of what constitutes human-quality output in an automated method, we often must rely on human evaluation for our models.", "Though evaluations with end-users in an applied setting are encouraged (Belz and Reiter, 2006), in practice, most human evaluations instead ask people to rate generated text's intrinsic quality (van der Lee et al., 2019; Howcroft et al., 2020).", "Sometimes the generated text is explicitly compared to human-authored text (e.g., Liu et al., 2016; Zellers et al., 2021; Zhang Figure 1: Excerpts from human evaluators' explanations for why they believe a GPT3-generated story (also excerpted) was written by a human (left) or a machine (right).", "et al., 2020), but even when no human-authored text is evaluated, evaluators implicitly compare the generated text to their knowledge of language and norms within specific domains.", "Evaluators are often asked to assess a text holistically, e.g., based on its overall quality, naturalness, or humanlikeness (van der Lee et al., 2021; Howcroft et al., 2020), where the exact evaluation criteria is left to the discretion of the evaluator.", "Though other evaluations are broken down along specific dimensions of text quality (e.g., grammaticality, coherence, etc.), Novikova et al. (2017, 2018) and Callison-Burch et al. 
"This is concerning because, as NLG models improve, evaluators are asked to read longer passages of text conditioned on large amounts of context.", "In these cases, fluency-related aspects of quality (i.e., the ones that don't require careful reading of the context and meaning of the passage) are the easiest to assess, particularly in small-batch evaluations with non-expert evaluators where speed is incentivized.", "This poses a challenge when collecting human evaluations for state-of-the-art language models, as errors are often content-based (e.g., factual inaccuracies or inconsistencies with the context) rather than fluency-based (Brown et al., 2020), so a superficial read may not be sufficient to catch model errors.", "For accurate assessments of generated text, we need human evaluations that are designed to encourage a sufficiently careful reading of the text to examine these subtler aspects of text quality.", "We asked non-expert evaluators to assess the humanlikeness (operationalized as how believably human an evaluator finds a text) of text generated by current NLG models (GPT2 and GPT3) to test what current human evaluation practices can reveal about the models' quality (§2).", "We found that evaluators were unable to distinguish between GPT3- and human-authored text across story, news, and recipe domains.", "However, when we categorized the aspects of text the evaluators used to make their judgments, we found they primarily focused on the grammar, spelling, and style of the text.", "The evaluators' responses also indicated that they underestimated the quality of text current models are capable of generating (as seen in Figure 1).", "To our knowledge, this paper is the first to evaluate human evaluations of GPT3-generated text across multiple domains.", "We then looked at three different evaluator training methods (providing detailed instructions, annotated examples, and human-machine paired examples) to test whether we could improve evaluators' accuracy (§3).", "While we found including examples in the task increased the set of texts evaluators thought could be machine-generated and increased their focus on textual content, no training method significantly increased evaluators' performance consistently across domains.", "Based on our results (discussed in §4), we recommend moving away from small-batch evaluations with little training when collecting human evaluations of NLG models (§5).", "We also encourage practitioners to consider alternative evaluation frameworks that capture the usefulness of generated text in downstream settings rather than its humanlikeness.", "In our first study, we ask how well untrained evaluators can distinguish between human- and machine-generated text.", "This task format, inspired by the Turing (1950) Test, is used to compare the quality of machine-generated text to human-authored text and, as models' fluency improves, to analyze NLG models' ability to fool readers (Ippolito et al., 2020; Brown et al., 2020).", "By asking evaluators to assess the humanlikeness of the text with only minimal instructions (see Figure 2), we observe how well untrained evaluators can detect state-of-the-art machine-generated text and which attributes evaluators focus on and think are important for detecting machine-generated text.", "We gave evaluators 5 text passages, some of which were written by people and some generated by a model.", "We asked them to rate the text on a 4-point scale (Ippolito et al., 2020):",
"1. Definitely human-written", "2. Possibly human-written", "3. Possibly machine-generated", "4. Definitely machine-generated", "If they selected option 1, we asked them: Why did you select this rating?", "Otherwise, they were asked, What would you change to make it seem more human-like?", "The interface is shown in Figure 2.", "[Figure 2: The task interface (story domain).]", "Data We considered human- and machine-generated text in three different domains: stories, news articles, and recipes.", "In all three cases, we collected 50 human-authored texts in English and generated 50 texts from both the 175B parameter GPT3 model (also known as Davinci; Brown et al., 2020; beta.openai.com) and GPT2-XL (Radford et al., 2019; huggingface.co/gpt2-xl).", "Evaluators were assigned to one domain and one model; the texts read by any given evaluator included some human-authored texts and some texts generated by their assigned model.", "We only considered texts 100 words or longer, and after reaching 100 words, all texts were truncated at the end of the next sentence (using NLTK; www.nltk.org/).", "To generate text, we used the three-shot setting described in Brown et al. (2020), conditioning the text on three additional samples of in-domain, human-authored text, which we refer to as the priming texts (all priming texts are in the supplementary materials and at ark.cs.washington.edu/human_evals_ACL21).", "While this setting is not typically how GPT2 is used in practice, we held this approach constant to directly compare how model quality changes evaluators' ability to distinguish between texts.", "For each domain, each generated text was conditioned on the same set of priming texts.", "The texts were delimited with an ⟨EOS⟩ token and generated using the default GPT3 generation settings (i.e., sampling with temperature = 0.7).",
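A rough reconstruction of the text-preparation step described above. The exact prompt format and truncation rule are not fully specified in the text, so the delimiter handling, the placeholder priming texts, and the reading of "truncate at the end of the sentence that crosses 100 words" are all assumptions.

```python
import nltk  # sentence splitting, as the paper notes (www.nltk.org/);
             # requires the punkt tokenizer data to be downloaded

PRIMING_TEXTS = ["<priming text 1>", "<priming text 2>", "<priming text 3>"]  # hypothetical

def build_prompt(domain_seed):
    # Three-shot conditioning: three in-domain human-authored samples followed by
    # the seed (e.g., "Once upon a time" or a headline + first sentence), with an
    # <EOS>-style delimiter between texts.
    return "<EOS>".join(PRIMING_TEXTS + [domain_seed])

def truncate_text(text, min_words=100):
    out, n_words = [], 0
    for sent in nltk.sent_tokenize(text):
        out.append(sent)
        n_words += len(sent.split())
        if n_words >= min_words:      # stop at the end of the sentence
            return " ".join(out)      # that reaches the 100-word mark
    return None                       # texts under 100 words are discarded
```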
7 ).", "The human-authored texts came from the Reddit WritingPrompts dataset (Fan et al., 2018).", "4 We collected all the stories that began with Once upon 3 Using NLTK; www.nltk.org/ 4 github.com/pytorch/fairseq/tree/ master/examples/stories a time (255 stories total) and randomly chose 50 human-authored stories from this set.", "For the machine-generated text, we conditioned the models on the three priming texts and on the phrase Once upon a time .", "We removed generated stories that directly copied a priming text (with > 80% overlap) and regenerated those texts (9 instances with GPT2, 2 with GPT3).", "This is the most open-ended of the three domains, as the story's content is virtually unrestricted, and the only creative domain.", "It is also the noisiest of the human-authored datasets, as the stories were originally collected from social media comments with no quality-based filtering.", "We collected 2,111 recent local news articles from 15 different newspapers using Newspaper3k 5 (de-tails in Appendix A.1).", "After filtering out articles under 100 words, we manually filtered out articles that weren't local news or that referenced the coronavirus pandemic.", "We randomly chose 50 articles to use as our human-authored news articles and another 50 to use as prompts for our generation models.", "We conditioned each generated text on the headline and first sentence from the prompt articles, along with the three priming texts.", "Because the title and the first sentence of a news article often summarize its contents, the generated content must adhere to the topics they introduce.", "By using local, recent news, we also limit the models' ability to copy from their training data.", "The models seemed to have the most trouble with this dataset structurally, e.g., generating new headlines without ending the current article or outputting invalid end-of-file tags.", "We collected 50 human-authored recipes from the RecipeNLG dataset (Bien et al., 2020), which contains 2,231,142 recipes scraped from the web.", "We randomly chose an additional 50 recipes and used their titles and ingredient lists as prompts, appending them to the end of the priming texts.", "This is the most closed of the three domains, as the recipe must incorporate the listed ingredients and result in the dish described by the title.", "Recipes are typically written in clear commands, leaving little room for surprising or unexpected text.", "We used Amazon Mechanical Turk (AMT) to col-lect the text evaluations with non-expert evaluators, commonly used in NLG evaluations (van der Lee et al., 2019).", "To have adequate power in our analyses (based on a power analysis with = 0 . 
"Each participant evaluated 5 texts, giving us a total of 780 participants and 3,900 text evaluations.", "We paid evaluators US$1.25 for completing the task.", "Following common best practice on AMT (Berinsky et al., 2012), evaluators had to have over a 95% acceptance rate, be in the United States, and have completed over 1,000 HITs (AMT tasks).", "We excluded evaluators' work if their explanations were directly copied text from the task, did not match their responses, did not follow the instructions, or were short, vague, or otherwise uninterpretable.", "Across experiments, 445 participants (18.6%) were rejected and not included in the §2 results (780 approved participants) and §3 results (1,170 approved participants).", "Overall, evaluators choosing between human- and GPT2-generated text correctly identified the author of the text 57.9% of the time, but the evaluators choosing between human- and GPT3-generated text only guessed correctly 49.9% of the time (Table 1), compared to 50% random chance.", "(Unless otherwise noted, all analyses binned the responses into 2 categories, human and machine.)", "While the accuracy of classifying GPT2- vs. human-authored text is significantly different from chance, evaluators' accuracy distinguishing GPT3- and human-authored text is not.", "This remains the case regardless of text domain; we failed to find any evidence that evaluators' accuracy on any one domain for GPT3 differs from the overall GPT3 accuracy of 50%.", "The story texts saw the biggest drop in evaluator accuracy from GPT2 to GPT3 (62% to 48%, Cohen's d = 0.57).", "The distribution of evaluators' scores is shown in Appendix A.2.", "In Table 1, we see other statistics worsen as well between GPT2 and GPT3: how well evaluators identified the machine-generated text (F1, precision, and recall), evaluators' agreement (Krippendorff's α, a measure of annotator agreement that corrects for the probability of random agreement), and the percent of guesses that the text was human-written (% human).", "Given that the texts are equally likely to be human- and machine-written, there are disproportionately many human guesses, making up two thirds of the responses in the GPT3 experiments.", "Despite the significantly lower scores, evaluators' confidence (the percent of Definitely responses) remains fairly constant across conditions.", "Taken on its own, the evaluators' difficulty identifying GPT3-generated text compared to GPT2 points to the improvement of new NLG models.", "However, it also points to concerns about extending current human evaluation methodologies to state-of-the-art text generation.", "In particular, the evaluators' explanations reveal underlying confusion and misconceptions about state-of-the-art NLG.", "To better understand what untrained evaluators focused on in the text to make their decisions, the authors annotated 150 random responses from the evaluators who distinguished between human- and GPT3-generated text (see Appendix A.3 for annotation details).", "We divided the text annotation labels into three categories: form, content, and machine capabilities.", "Form qualities focus on the format, style, and tone of the text, while content focuses on the text's meaning.", "We also coded for comments that explicitly referenced people's perceptions of what types of language machines are capable (or incapable) of generating (machine capabilities).",
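The Table 1 statistics reported above can be reproduced from raw ratings along these lines; a sketch with hypothetical variable names, treating machine as the positive class and binning the 4-point ratings as just described (Krippendorff's α would come from a separate package such as krippendorff, not shown here).

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def table1_stats(ratings, text_is_machine):
    # ratings: 1-4 on the scale above; 1-2 bin to "human", 3-4 to "machine"
    ratings = np.asarray(ratings)
    guessed_machine = ratings >= 3
    truth = np.asarray(text_is_machine)
    acc = accuracy_score(truth, guessed_machine)
    p, r, f1, _ = precision_recall_fscore_support(truth, guessed_machine,
                                                  average='binary')
    return {'acc': acc, 'precision': p, 'recall': r, 'f1': f1,
            '%human': 1 - guessed_machine.mean(),           # share of human guesses
            '%confident': np.isin(ratings, [1, 4]).mean()}  # "Definitely" responses
```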
"We found nearly twice as many comments about the form of the text as about its content (form: 47% of labels, content: 25%).", "Evaluators in our sample focused most on the spelling, grammar, or punctuation of the texts (45 out of 150 comments) and the style or tone of the text (24 out of 150 comments).", "However, these dimensions of text are unlikely to be helpful in identifying text generated by current models, considering that GPT3 has already been shown to generate fluent text and to adapt easily to new generation domains (Brown et al., 2020).", "We also found that the reasons evaluators gave for their answers often contradicted each other.", "The formality of the text, spelling and grammar errors, and clarity were all cited to justify both human and machine judgments.", "This was also reflected in the low agreement scores between evaluators, with Krippendorff's α ≈ 0 across domains.", "Evaluators' expectations about what NLG models are capable of ranged from thinking their text is already indistinguishable from human-authored text (I have no idea if a human wrote anything these days. No idea at all.) to doubting machines' ability to use basic language (Usually AI has terrible grammer [sic] and messes up.).", "But overall we found most evaluators' beliefs about generated language underestimated or misunderstood current NLG models, as seen in Appendix A.4.", "Given evaluators' inability to distinguish GPT3- and human-authored text and their inconsistent reasoning for their decisions, we investigated whether there were simple ways of improving evaluators' ability to spot attributes of GPT3-generated text.", "Inspired by crowdsourcing research on guiding workers on writing or other subjective tasks (Kim et al., 2017; Mitra et al., 2015), we tested three lightweight evaluator-training methods to see if we could improve people's ability to identify machine-generated text while maintaining the short, low-cost nature of the evaluations.", "We considered 3 evaluator trainings that can be added to the beginning of a human evaluation task, at most requiring only 3 extra samples of human- and machine-generated text.", "To test the effectiveness of each type of training, we re-ran the experiments from §2, but this time, we prepended one of three evaluator-training methods to the evaluation task: an instruction-based training, an example-based training, and a comparison-based training.", "Screenshots of the training interfaces are in Appendix A.6; the full set of training materials is in the supplementary materials and at ark.cs.washington.edu/human_evals_ACL21.", "Other than the training, the task setup was identical to the GPT3-based tasks in §2.",
"We again ran the task on Amazon Mechanical Turk across three domains (stories, news, and recipes), using the same texts.", "As each individual participant was only permitted to complete one set of evaluations, the set of evaluators who received these trainings was completely disjoint from the set of evaluators from our first study.", "The participants were subject to the same restrictions described in §2.3 and excluded according to the same criteria; we did not use the trainings to filter out evaluators.", "For each domain and training method pair, we had 130 unique evaluators complete the task, giving us 5,850 text annotations from 1,170 evaluators.", "To give evaluators a better sense of which parts of the text to pay attention to, we extended the original task instructions to include dimensions of the text that could be helpful for identifying machine-generated text (repetition and factuality) and ones that could be misleading (grammar, spelling, and style).", "We chose these dimensions based on previous work (Ippolito et al., 2020) and evaluators' comments in a pilot study (see Appendix A.5).", "The Instructions training was the simplest of our 3 evaluator training methods.", "It was general enough to be applied across the 3 domains but provided little information about the quality and domain of text the evaluator would be rating.", "It did not increase the cost of collecting evaluations (US$1.25 per HIT) because it does not require any extra work on the part of the evaluator, though this also made it the easiest training to ignore.", "The instruction-based training is the most prescriptive of the training methods, as the researcher has to choose the dimensions they want the evaluators to focus on.", "Our Examples training consisted of 3 practice rounds of the actual task: given a text, guess if it is machine- or human-authored.", "We collected 3 additional texts in the same manner described in §2.2 and wrote a short explanation of which aspects of the text hinted at its source.", "After an evaluator makes their guess, the correct answer and explanation are shown.", "Each domain had its own set of examples and explanations.", "By showing examples, this training helps set the evaluators' expectations about the quality of the human- and machine-generated text.", "We paid evaluators more for completing this task (US$1.75 per HIT) to compensate for the extra texts they needed to read.", "As with the instruction-based training, while pointing out specific text dimensions can help evaluators focus on important features, it may also restrict their search space.", "In the Comparison training, we took the example passages from the Examples training and paired them with a text from the opposite source (machine or human) that began with the same prompt.", "We asked evaluators to guess which of the two texts was the machine-generated one.", "We then provided the correct answer to the evaluator, along with the same explanations used in the Examples training.", "This training allows evaluators to directly compare human and machine texts written from the same prompt.", "It is also the most expensive training, as it required evaluators to read three more passages than the Examples training; we paid evaluators US$2.25 per HIT.", "We found that while all 3 training methods improved evaluators' accuracy at identifying machine- vs. human-authored text over the no-training accuracy, the Examples training was the only one that showed significant improvement (see Table 2; Tukey's HSD adjusted p < 0.003 for distinguishing between the Examples training and no training, d = 0.25).",
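The significance test referenced just above (Tukey's HSD over the training conditions) could be run along these lines; a sketch with hypothetical variable names using statsmodels, which is one standard implementation, though the authors do not state which they used.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# per-evaluator accuracies and their training condition labels, e.g.
# 'none', 'instructions', 'examples', 'comparison' (hypothetical arrays)
def compare_trainings(evaluator_accs, conditions):
    result = pairwise_tukeyhsd(endog=np.asarray(evaluator_accs),
                               groups=np.asarray(conditions), alpha=0.05)
    return result.summary()  # adjusted p-values for all pairwise comparisons
```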
"Breaking down the results by domain, however, we find the Examples accuracy did not significantly increase over the no-training accuracy when considering any of the three domains individually.", "Even so, the significant difference in overall performance is mainly contributed by the story domain; when comparing evaluators' performance with no training to its Examples training counterpart, we see a change of 0.019 and 0.062 mean accuracy in the news and recipe domains, respectively, versus 0.086 in the story domain.", "This is perhaps due to the examples helping override the preconception that machines cannot generate creative text.", "Across all 3 domains, the Examples and Comparison trainings produced the highest recall and F1 scores for evaluators' judgments and decreased the percentage of texts they guessed were human-written, which indicates that evaluators were willing to consider a broader set of texts to be machine-generated than the evaluators in §2.", "However, despite the trainings and the increased proportion of confident responses, evaluator agreement remained low across domain and training settings (α ≤ 0.11), and higher agreement did not correspond to higher accuracy.", "We again annotated 150 comments along the dimensions listed in Appendix A.3, divided into form, content, and machine capabilities categories, this time from evaluators who received the best-performing Examples training.", "As shown in Table 3, we found that the proportion of form comments dropped in the sample of evaluators who went through the Examples training, while the proportion of content comments doubled.", "We also saw a drop in the number of comments mentioning evaluators' expectations of machine-generated text.", "While this change in focus doesn't necessarily correspond to correct judgments, content reasons are more in line with current NLG model capabilities (Brown et al., 2020).", "Overall, none of our three training methods significantly improved evaluators' ability to detect machine-generated text reliably across text domains while still maintaining the small-batch nature of Amazon Mechanical Turk.", "This speaks to the improving quality of NLG models, but we also found that untrained evaluators mainly focused on the format of the text, deciding if it was human- or machine-generated based on whether the text was grammatically or stylistically correct.",
This assumption that machines can't generate text with these aspects of humanlikeness led many evaluators astray, and we suspect it is one cause of the low accuracy we found.", "Crowdsourcing studies dealing only with human-authored texts often include extensive training, quality checks, or coordination (Kittur and Kraut, 2008; Kim et al., 2017; Bernstein et al., 2010).", "NLG evaluations usually forego such structures, based, we suspect, on the assumption that evaluating machine-generated text requires only fluency in the language the text is generated in.", "Our results suggest otherwise.", "Evaluators often mistook machine-generated text as human, citing superficial textual features that machine generation has surpassed (Brown et al., 2020).", "One potential remedy for this is to focus evaluator training on debunking this misconception.", "We did see evidence that the increase in accuracy we saw with our Examples training was associated with fewer explanations mistakenly referencing machine capabilities, even though the training did not specifically focus on this.", "Based on our findings, if NLG researchers must run human evaluations as small-batch evaluations", "on Amazon Mechanical Turk or similar platforms, we recommend they train evaluators with examples.", "This will help calibrate the evaluators' expectations of generated text and indicate the careful reading they may need to do to properly assess the text's quality.", "Our experiments also indicate the importance of confirming with evaluators why they have made the decisions they have, as the criteria they might implicitly be evaluating may be mismatched with researchers' intended criteria.", "However, other evaluation setups may be more successful on Amazon Mechanical Turk, such as long-term evaluations with qualified evaluators who have gone through an extended training (like those in Kittur and Kraut, 2008; Zellers et al., 2019a) or third-party evaluator quality tools (e.g., Positly, used by Brown et al., 2020).", "However, given the increasing length of text NLG models can handle and the careful reading needed to detect many errors in generated text, we encourage NLG researchers to move away from standalone, intrinsic human evaluation tasks.", "We found that, by default, our evaluators in this evaluation setting were most likely to focus on surface-level, fluency-related aspects of quality.", "We join past work (Belz and Reiter, 2006; van der Lee et al., 2021) in recommending a move towards evaluation settings where evaluators are better motivated to carefully consider the content and usefulness of generated text.", "For example, TuringAdvice (Zellers et al., 2021) asks evaluators to rate NLG models by their ability to generate helpful advice, and RoFT (Dugan et al., 2020) engages evaluators through a guessing game to determine the boundary between humanand machine-generated text.", "Other evaluation methods ask the evaluators to directly interact with the generated text; for example, Choose Your Own Adventure (Clark and Smith, 2021) and Sto-rium (Akoury et al., 2020) evaluate story generation models by having people write stories with the help of generated text.", "11 We see that GPT3 can successfully mimic human-authored text across several domains, renewing the importance of evaluations that push beyond surface-level notions of quality and consider whether a text is helpful in a down-11 Note that we initially tried a fourth training condition along these lines, where we asked evaluators to directly interact with the 
"(Note that we initially tried a fourth training condition along these lines, where we asked evaluators to directly interact with the generated text by rewriting it to be more humanlike.)", "We found we were unable to successfully recruit evaluators to complete this task.", "The rate of retention was less than 30%, and the rejection rate was over 50%.", "We found AMT was not a good platform for this type of task, at least not for the format and the price point we explored in this work.", "Finally, given the mixed effect we found different trainings can have on evaluators' performance and the lack of human evaluation details typically presented in NLG papers (van der Lee et al., 2019; Howcroft et al., 2020), we encourage NLG researchers to include details of any instructions and training they gave evaluators in their publications.", "This, along with efforts to standardize human evaluation design (Belz et al., 2020; Howcroft et al., 2020) and deployment (Khashabi et al., 2021; Gehrmann et al., 2021), will support future development of evaluator training procedures and the comparison of human evaluation results in future NLG evaluation work.", "A subfield of NLG analyzes the role of human evaluations, including discussions of the tradeoffs of human and automatic evaluations (Belz and Reiter, 2006; Hashimoto et al., 2019).", "There are critiques and recommendations for different aspects of human evaluations, like the evaluation design (Novikova et al., 2018; Santhanam and Shaikh, 2019), question framing (Schoch et al., 2020), and evaluation measures like agreement (Amidei et al., 2018), as well as analyses of past NLG papers' human evaluations (van der Lee et al., 2021; Howcroft et al., 2020).", "Additionally, the crowdsourcing literature has work on effectively using platforms like Amazon Mechanical Turk (e.g., Daniel et al., 2018; Oppenheimer et al., 2009; Weld et al., 2014; Mitra et al., 2015).", "In this work, we focus on the role evaluator training can play for producing better accuracy at distinguishing human- and machine-generated text, though other quality control methods are worth exploring.", "Previous work has asked evaluators to distinguish between human- and machine-authored text; for example, Ippolito et al. (2020) found that trained evaluators were able to detect open-ended GPT2-L-generated text 71.4% of the time and Brown et al. (2020) found evaluators could guess GPT3-davinci-generated news articles' source with 52% accuracy, though these results are not directly comparable to ours due to differences in the evaluation setup, data, and participants.", "Finally, our findings that untrained evaluators are not well equipped to detect machine-generated text point to the importance of researching the safe deployment of NLG systems.", "Gehrmann et al. (2019) proposed visualization techniques to help readers detect generated text, and work like Zellers et al. (2019b), Ippolito et al. (2020), and Uchendu et al. (2020) investigated large language models' ability to detect generated text.",
"We found that untrained evaluators were unable to distinguish between human- and GPT3-generated text from three domains.", "However, we also found that the evaluators focused on surface-level text qualities to make these decisions and underestimated current NLG models' capabilities.", "We experimented with three methods for training evaluators, and while example-based trainings led to increases in recall and the amount of content-based evaluations, they did not lead to significant improvements in accuracy across all domains.", "Given that evaluators struggled to distinguish between human- and machine-generated text in this setting, we should shift how we think about collecting human evaluations for current NLG models.", "This research was supported in part by the Office of Naval Research under the MURI grant N00014-18-1-2670.", "The authors would like to thank Katharina Reinecke, the members of the CSE 599 crowdsourcing class, and the ARK group for their feedback, the reviewers for their helpful comments, and the participants who took part in our study.", "Ethical considerations All experiments in this paper were approved by our institution's internal review board.", "Evaluators' responses were collected and stored anonymously.", "Evaluators were paid based on an estimated US$10 per hour rate; we raised the price of the task in proportion to the added difficulty of our 3 training methods.", "For each dataset we considered, its source and language are included, along with any other details we believed would be relevant to evaluators' ability to read and understand the text.", "Evaluators were warned about possible risks before starting the task, namely that NLG models can generate text with harmful language or themes, and were able to leave comments about their experience at the end of the study." ]
[ "abstain", "abstain", "result", "objective", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "objective", "abstain", "objective", "result", "result", "result", "method", "objective", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "objective", "result", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "result", "abstain", "abstain", "result", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "objective", "other", "method", "other", "result", "result", "result", "method", "other", "other", "other", "other", "other", "other", "other" ]
[ "Correctly resolving textual mentions of people fundamentally entails making inferences about those people.", "Such inferences raise the risk of systemic biases in coreference resolution systems, including biases that can harm binary and non-binary trans and cis stakeholders.", "To better understand such biases, we foreground nuanced conceptualizations of gender from sociology and sociolinguistics, and develop two new datasets for interrogating bias in crowd annotations and in existing coreference resolution systems.", "Through these studies, conducted on English text, we confirm that without acknowledging and building systems that recognize the complexity of gender, we build systems that lead to many potential harms.", "Coreference resolutionthe task of determining which textual references resolve to the same real-world entityrequires making inferences about those entities.", "Especially when those entities are people, coreference resolution systems run the risk of making unlicensed inferences, possibly resulting in harms either to individuals or groups of people.", "Embedded in coreference inferences are varied aspects of gender, both because gender can show up explicitly (e.g., pronouns in English, morphology in Arabic) and because societal expectations and stereotypes around gender roles may be explicitly or implicitly assumed by speakers or listeners.", "This can lead to significant biases in coreference resolution systems: cases where systems systematically and unfairly discriminate against certain individuals or groups of individuals in favor of others", "(Friedman and Nissenbaum, 1996, p. 332).", "Gender bias in coreference resolution can manifest in many ways; work by Rudinger et al.", "(2018), Zhao et al.", "(2018a), and Webster et al.", "(2018)", "focused largely on the case of binary gender discrimination in trained coreference systems, showing that current systems over-rely on social stereotypes when resolving HE and SHE pronouns 1", "(see 2).", "Contemporaneously, critical work in Human-Computer Interaction has complicated discussions around gender in other fields, such as computer vision", "(Keyes, 2018; Hamidi et al., 2018).", "Building on both lines of work, and inspired by Keyes's", "(2018)", "study of vision-based automatic gender recognition systems, we consider gender bias from a broader conceptual frame than the binary folk model.", "We investigate ways in which folk notions of gendernamely that there are two genders, assigned at birth, immutable, and in per-fect correspondence to gendered linguistic forms lead to the development of technology that is exclusionary and harmful of binary and non-binary trans and cis people.", "2 Addressing such issues is critical not just to improve the quality of our systems, but more pointedly to minimize the harms caused by our systems by reinforcing existing unjust social hierarchies", "(Lambert and Packer, 2019).", "There are several stakeholder groups who may easily face harms when coreference systems is used", "(Blodgett et al., 2020).", "Those harms includes several possible harms, both allocational and representation harms", "(Barocas et al., 2017), including quality of service, erasure, and stereotyping harms.", "Following Bender's", "(2019)", "taxonomy of stakehold-1 Throughout, we avoid mapping pronouns to a gender label, preferring to use the pronoun directly, include", "(in English)", "SHE , HE , the non-binary use of singular THEY , and neopronouns", "(e.g., ZE / HIR , XEY / XEM ), which have been in usage since at 
least the 1970s", "(Bustillos, 2011; Merriam-Webster, 2016; Bradley et al., 2019; Hord, 2016; Spivak, 1997).", "2 Following GLAAD", "(2007), transgender individuals are those whose gender differs from the sex they were assigned at birth.", "This is in opposition to cisgender individuals, whose assigned sex at birth happens to correspond to their gender.", "Transgender individuals can either be binary", "(those whose gender falls in the male/female dichotomy)", "or non-binary", "(those for which the relationship is more complex).", "ers and Barocas et", "al.'s", "(2017)", "taxonomy of harms, there are several ways in which trans exclusionary coreference resolution systems can cause harm:", "(cid:5)", "Indirect: subject of query.", "If a person is the subject of a web query, pages about xem may be missed if multiple mentions of query is a ranking feature, and the system cannot resolve xyr pronouns quality of service, erasure.", "(cid:5)", "Direct: by choice.", "If a grammar checker uses coreference, it may insist that an author writing hir third-person autobiography is repeatedly making errors when referring to hirself quality of service, stereotyping, denigration.", "(cid:5)", "Direct: not by choice.", "If an information extraction system run on resumes relies on cisnormative assumptions, job experiences by a candidate who has transitioned and changed his pronouns may be missed allocative, erasure.", "(cid:5)", "Many stakeholders.", "If a machine translation system uses discourse context to generate pronouns, then errors can results in directly misgendering subjects of the document being translated quality of service, denigration, erasure.", "To address such harms as well as understand where and how they arise, we need to complicate", "(a)", "what gender means and", "(b)", "how harms can enter into natural language processing", "(NLP)", "systems.", "Toward", "(a), we begin with a unifying analysis", "( 3)", "of how gender is socially constructed, and how social conditions in the world impose expectations around people's gender.", "Of particular interest is how gender is reflected in language, and how that both matches and potentially mismatches the way people experience their gender in the world.", "Then, in order to understand social biases around gender, we find it necessary to consider the different ways in which gender can be realized linguistically, breaking down what previously have been considered gendered words in NLP papers into finer-grained categories that have been identified in the sociolinguistics literature of lexical, referential, grammatical, and social gender.", "Toward", "(b), we focus on how bias can enter into two stages of machine learning systems: data annotation", "( 4)", "and model definition", "( 5).", "We construct two new datasets:", "(1)", "MAP", "(a similar dataset to GAP", "(Webster et al., 2018)", "but without binary gender constraints)", "on which we can perform counterfactual manipulations and", "(2)", "GICoref", "(a fully annotated coreference resolution dataset written by and about trans people).", "3 In all cases, we focus largely on harms due to overand underrepresentation", "(Kay et al., 2015), replicating stereotypes", "(Sweeney, 2013; Caliskan et al., 2017)", "(par-ticular those that are cisnormative and/or heteronor-mative), and quality of service differentials", "(Buo-lamwini and Gebru, 2018).", "The primary contributions of this paper are:", "(1)", "Connecting existing work on gender bias in NLP to sociological and sociolinguistic conceptions of 
"Footnote 3: Both datasets are released under a BSD license at github.com/TristaCao/into inclusivecoref with corresponding datasheets (Gebru et al., 2018).", "There are four recent papers that consider gender bias in coreference resolution systems.", "Rudinger et al. (2018) evaluate coreference systems for evidence of occupational stereotyping by constructing Winograd-esque (Levesque et al., 2012) test examples.", "They find that humans can reliably resolve these examples, but systems largely fail at them, typically in a gender-stereotypical way.", "In contemporaneous work, Zhao et al. (2018a) proposed a very similar, also Winograd-esque scheme, likewise for measuring gender-based occupational stereotypes.", "In addition to reaching similar conclusions to Rudinger et al. (2018), this work also used a counterfactual data process similar to the one we use in Section 4.1, in order to provide additional training data to a coreference resolution system.", "Webster et al. (2018) produced the GAP dataset for evaluating coreference systems, specifically seeking examples where gender (left underspecified) could not be used to help coreference.", "They found that coreference systems struggle in these cases, also pointing to the fact that some of the success of current coreference systems is due to reliance on (binary) gender stereotypes.", "Finally, Ackerman (2019) presents an alternative breakdown of gender from the one we use (Section 3), and proposes matching criteria for modeling coreference resolution linguistically, taking a trans-inclusive perspective on gender.", "Gender bias in NLP has been considered more broadly than just in coreference resolution, including in natural language inference (Rudinger et al., 2017), word embeddings (e.g., Bolukbasi et al., 2016; Romanov et al., 2019; Gonen and Goldberg, 2019), sentiment analysis (Kiritchenko and Mohammad, 2018), and machine translation (Font and Costa-jussà, 2019; Prates et al., 2019; Dryer, 2013; Frank et al., 2004; Wandruszka, 1969; Nissen, 2002; Doleschal and Schmid, 2001), among many others (Blodgett et al., 2020, inter alia).", "Gender is also an object of study in gender recognition systems (Hamidi et al., 2018).", "Much of this work has focused on gender bias through a (usually implicit) binary lens, an issue which was also called out recently by Larson (2017b) and May (2019).", "The concept of gender is complex and contested, covering (at least) aspects of a person's internal experience, how they express this to the world, how social conditions in the world impose expectations on them (including expectations around their sexuality), and how they are perceived and accepted (or not).", "When this complex concept is realized in language, the situation becomes even more complex: linguistic categories of gender do not even remotely map one-to-one to social categories.", "As observed by Bucholtz (1999): Attempts to read linguistic structure directly for information about social gender are often misguided.",
"For instance, when working in a language like English which formally marks gender on pronouns, it is all too easy to equate recognizing the pronoun that corefers with this name with recognizing the real-world gender of referent of that name.", "Furthermore, despite the impossibility of a per-fect alignment with linguistic gender, it is generally clear that an incorrectly gendered reference to a person", "(whether through pronominalization or otherwise)", "can be highly problematic", "(Johnson et al., 2019; McLemore, 2015).", "This process of misgendering is problematic for both trans and cis individuals to the extent that transgender historian Stryker", "(2008)", "writes: [o]ne's gender identity could perhaps best be described as how one feels about being referred to by a particular pronoun. 3.1 Sociological Gender Many modern trans-inclusive models of gender recognize that gender encompasses many different aspects.", "These aspects include the experience that one has of gender", "(or lack thereof), the way that one expresses one's gender to the world, and the way that normative social conditions impose gender norms, typically as a dichotomy between masculine and feminine roles or traits", "(Kramarae and Tre-ichler, 1985; West and Zimmerman, 1987; Butler, 1990; Risman, 2009; Serano, 2007).", "Gender self-determination, on the other hand, holds that each person is the ultimate authority on their own gender identity", "(Zimman, 2019; Stanley, 2014), with Zimman", "(2019)", "further arguing the importance of the role language plays in that determination.", "Such trans-inclusive models deconflate anatomical and biological traits and the sex that a person had assigned to them at birth from one's gendered position in society; this includes intersex people, whose anatomical/biological factors do not match the usual designational criteria for either sex.", "Trans-inclusive views typically recognize that gender exists beyond the regressive female/male binary 4 ; additionally, one's gender may shift by time or context", "(often genderfluid), and some people do not experience gender at all", "(often agender)", "(Kessler and McKenna, 1978; Schilt and Westbrook, 2009; Darwin, 2017; Richards et al., 2017).", "In 5 we analyze the degree to which NLP papers make trans-inclusive or trans-exclusive assumptions.", "Social gender refers to the imposition of gender roles or traits based on normative social conditions", "(Kramarae and Treichler, 1985), which often includes imposing a dichotomy between feminine and masculine", "(in behavior, dress, speech, occupation, societal roles, etc.).", "Ackerman", "(2019)", "highlights a highly overlapping concept, bio-social gender, which consists of gender role, gender expression, and gender identity.", "Taking gender role as an example, upon learning that a nurse is coming to their hospital room, a patient may form expectations that this person is likely to be female, and may generate expectations around how their face or body may look, how they are likely to be dressed, how and where hair may appear, how to refer to them, and so on.", "This process, often referred to as gendering", "(Serano, 2007)", "occurs both in real world 4 Some authors use female/male for sex and woman/man for gender; we do not need this distinction", "(which is itself contestable)", "and use female/male for gender.", "interactions, as well as in purely linguistic settings", "(e.g., reading a newspaper), in which readers may use social gender clues to assign gender(s)", "to the real world people 
being discussed.", "Our discussion of linguistic gender largely follows", "(Corbett, 1991; Ochs, 1992; Craig, 1994; Corbett, 2013; Hellinger and Motschenbacher, 2015; Fuertes-Olivera, 2007), departing from earlier characterizations that postulate a direct mapping from language to gender", "(Lakoff, 1975; Silverstein, 1979).", "Our taxonomy is related but not identical to", "(Ackerman, 2019), which we discuss in 2.", "Grammatical gender , similarly defined in Ackerman", "(2019), is nothing more than a classification of nouns based on a principle of grammatical agreement .", "In gender languages there are typically two or three grammatical genders that have, for animate or personal references, considerable correspondence between a FEM", "(resp. MASC )", "grammatical gender and referents with female-", "(resp. male-)", "5 social gender.", "In comparison, noun class languages have no such correspondence, and typically many more classes.", "Some languages have no grammatical gender at all; English is generally seen as one", "(Nissen, 2002; Baron, 1971)", "(though this is contested", "(Bjorkman, 2017)).", "Referential gender", "(similar, but not identical to Ackerman's", "(2019)", "conceptual gender) relates linguistic expressions to extra-linguistic reality, typically identifying referents as female, male, or gender-indefinite.", "Fundamentally, referential gender only exists when there is an entity being referred to, and their gender (or sex) is realized linguistically.", "The most obvious examples in English are gendered third person pronouns ( SHE , HE ), including neopronouns ( ZE , EM ) and singular THEY 6 , but also includes cases like policeman when the intended referent of this noun has social gender male (though not when policeman is used non-referentially, as in every policeman needs to hold others accountable).", "Lexical gender refers to an extra-linguistic properties of female-ness or male-ness in a non-referential way, as in terms like mother as well 5 One difficulty in this discussion is that linguistic gender and social gender use the terms feminine and masculine differently; to avoid confusion, when referring to the linguistic properties, we use FEM and MASC .", "6 People's mental acceptability of singular THEY is still relatively low even with its increased usage (Prasad and Morris, 2020), and depends on context (Conrod, 2018).", "as gendered terms of address like Mrs.", "Importantly, lexical gender is a property of the linguistic unit, not a property of its referent in the real world, which may or may not exist.", "For instance, in Every son loves his parents, there is no real world referent of son (and therefore no referential gen-der), yet it still (likely) takes HIS as a pronoun anaphor because son has lexical gender MASC .", "The relationship between these aspects of gender is complex, and none is one-to-one.", "The referential gender of an individual (e.g., pronouns in English) may or may not match their social gender and this may change by context.", "This can happen in the case of people whose everyday life experience of their gender fluctuates over time (at any inter-val), as well as in the case of drag performers (e.g., some men who perform drag are addressed as SHE while performing, and HE when not (for Transgender Equality, 2017)).", "The other linguistic forms of gender (grammatical, lexical) also need not match each other, nor match referential gender (Hellinger and Motschenbacher, 2015).", "Social gender (societal expectations, in particular) captures the 
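The taxonomy above can be made concrete with a small amount of code. The sketch below is ours, not the authors': the word lists are tiny illustrative stand-ins, and social gender is deliberately absent, since it is a probabilistic inference rather than a property recoverable from the word form alone.

```python
# Minimal sketch: tagging the *linguistic* gender cues discussed above.
LEXICAL_GENDER = {"mother": "FEM", "son": "MASC", "actress": "FEM"}
TERMS_OF_ADDRESS = {"mrs.": "FEM", "mr.": "MASC", "mx.": "NEUTRAL"}
REFERENTIAL_PRONOUNS = {  # realized only with respect to a referent
    "she": "FEM", "he": "MASC", "they": "INDEFINITE",
    "ze": "NEO", "xey": "NEO",
}

def gender_cues(tokens):
    """Return (token, cue_type, value) triples for each gender-cueing token.

    Note what is missing: there is no entry for e.g. 'nurse' or 'librarian';
    those evoke *social* gender, which cannot be read off the word itself.
    """
    cues = []
    for tok in tokens:
        t = tok.lower()
        if t in REFERENTIAL_PRONOUNS:
            cues.append((tok, "referential", REFERENTIAL_PRONOUNS[t]))
        elif t in TERMS_OF_ADDRESS:
            cues.append((tok, "address", TERMS_OF_ADDRESS[t]))
        elif t in LEXICAL_GENDER:
            cues.append((tok, "lexical", LEXICAL_GENDER[t]))
    return cues

print(gender_cues("Mrs. Smith told her son she was proud".split()))
```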
"Social gender (societal expectations, in particular) captures the observation that upon hearing My cousin is a librarian, many speakers will infer female for cousin, because of either an entailment of librarian or some sort of probabilistic inference (Lyons, 1977), but not based on either grammatical gender (which does not exist in English) or lexical gender.", "We focus on English, which has no grammatical gender, but does have lexical gender.", "English also marks referential gender on singular third-person pronouns.", "Below, we use this more nuanced notion of different types of gender to inspect how biases play out in coreference resolution systems.", "These biases may arise in the context of any of these notions of gender, and we encourage future work to extend care over, and be explicit about, which notions of gender are being utilized and when.", "A possible source of bias in coreference systems comes from the human annotations on the data used to train them.", "Such biases can arise from a combination of (possibly) underspecified annotation guidelines and the positionality of the annotators themselves.", "In this section, we study how different aspects of linguistic notions of gender impact an annotator's judgments of anaphora.", "[Figure 1: An example passage with the four gender-cue substitutions marked: Mrs. removed under (d); Rebekah Johnson Bobbitt replaced by M. Booth under (b); sister replaced by sibling under (c); Lyndon B. Johnson replaced by T. Schneider under (b); she replaced by they and her by their under (a); brother replaced by sibling under (c).]", "This parallels Ackerman's (2019) linguistic analysis, in which a Broad Matching Criterion is proposed, positing that matching gender requires at least one level of the mental representation of gender to be identical to the candidate antecedent in order to match.", "Our study can be seen as evaluating which conceptual properties of gender are most salient in human judgments.", "We start with natural text in which we can cast the coreference task as a binary classification problem (which of these two names does this pronoun refer to?), inspired by Webster et al. (2018).", "We then generate counterfactual augmentations of this dataset by ablating the various notions of linguistic gender described in Section 3.2, similar to Zmigrod et al. (2019).", "We finally evaluate the impact of these ablations on human annotation behavior to answer the question: which forms of linguistic knowledge are most essential for human annotators to make consistent judgments?", "See Appendix A for examples of how linguistic gender may be used to infer social gender.", "In order to determine which cues annotators are using and the degree to which they use them, we construct an ablation study in which we hide various aspects of gender and evaluate how this impacts annotators' judgments of anaphoricity.", "We construct binary classification examples taken from Wikipedia pages, in which a single pronoun is selected and two possible antecedent names are given, and the annotator must select which one.", "We cannot use Webster et al.'s GAP dataset directly, because their data is constrained so that the gender of the two possible antecedents is the same; for us, we are specifically interested in how annotators make decisions even when additional gender information is available.", "Footnote 7: It is unclear from the GAP dataset what notion of gender is used, nor how it was determined to be the same.", "Thus, we construct a dataset called Maybe Ambiguous Pronoun (MAP) following Webster et al.'s approach, but we do not restrict the two names to match gender.", "In ablating gender information, one challenge is that removing social gender cues (e.g., nurse tending female) is not possible, because they can exist anywhere.", "Likewise, it is not possible to remove syntactic cues in a non-circular manner.", "For example, in (1), the syntactic structure strongly suggests the antecedent of herself is Liang, making it less likely that He corefers with Liang later (though it is possible, and such cases exist in natural data due either to genderfluidity or misgendering).", "(1) Liang saw herself in the mirror... He ...", "Fortunately, it is possible to enumerate a high-coverage list of English terms that signal lexical gender: terms of address (Mrs., Mr.) and semantically gendered nouns (mother).", "Footnote 8: These are, however, sometimes complex. For instance, actress signals lexical gender of female, while actor may signal social gender of male and, in certain varieties of English, may also signal lexical gender of male.", "We assembled a list by taking many online lists (mostly targeted at English language learners), merging them, and manually filtering.", "The assembly process and the final list are published with the MAP dataset and its datasheet.", "To execute the hiding of various aspects of gender, we use the following substitutions: (a) PRO: replace third-person pronouns with gender-neutral variants (THEY, XEY, ZE); (b) NAME: replace names with random names with only a first initial and last name; (c) SEM: replace semantically gendered nouns with gender-indefinite variants; (d) ADDR: remove terms of address.", "Footnote 9: An alternative suggested by Cassidy Henry that we did not explore would be to replace all terms of address with Mx.", "See Figure 1 for an example of all substitutions.", "We perform two sets of experiments, one following a forward-selection-type ablation (start with everything removed and add each cue back in one at a time) and one following backward selection (remove each cue separately).", "Forward selection is necessary in order to de-conflate syntactic cues from stereotypes, while backward selection gives a sense of how much impact each type of gender cue has in the context of all the others.", "We begin with ZERO, in which we apply all four substitutions.", "Since this also removes gender cues from the pronouns themselves, an annotator cannot substantially rely on social gender to perform these resolutions.", "We next consider adding back in the original pronouns (always HE or SHE here), yielding NAME SEM ADDR.", "Any difference in annotation behavior between ZERO and NAME SEM ADDR can only be due to social gender stereotypes.", "The next setting, SEM ADDR, removes both forms of lexical gender (semantically gendered nouns and terms of address); differences between SEM ADDR and NAME SEM ADDR show how much names are relied on for annotation.", "Similarly, NAME ADDR removes names and terms of address, showing the impact of semantically gendered nouns, and NAME SEM removes names and semantically gendered nouns, showing the impact of terms of address.", "In the backward selection case, we begin with ORIG, which is the unmodified original text.", "To this, we can apply the pronoun filter to get PRO; differences in annotation between ORIG and PRO give a measure of how much any sort of gender-based inference is used.", "Similarly, we get NAME by only removing names, which gives a measure of how much names are used (in the context of all other cues); we get SEM by only removing semantically gendered words; and ADDR by only removing terms of address.",
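The four substitutions above are mechanical enough to sketch in code. The following is our illustration, not the released implementation; the word lists are small stand-ins for the curated lists shipped with the MAP datasheet.

```python
import random

NEUTRAL_PRONOUNS = ["they", "xey", "ze"]
PRONOUNS = {"she", "he", "her", "him", "his", "hers"}
SEM_GENDERED = {"sister": "sibling", "brother": "sibling", "mother": "parent"}
ADDRESS = {"mrs.", "mr.", "ms."}
SURNAMES = ["Booth", "Schneider", "Smith"]  # placeholder surname pool

def ablate(tokens, names=(), pro=False, name=False, sem=False, addr=False):
    # (b) NAME: give every distinct name a fresh "initial + surname".
    name_map = {n: f"{random.choice('ABCDEFG')}. {random.choice(SURNAMES)}"
                for n in names}
    out = []
    for tok in tokens:
        t = tok.lower()
        if pro and t in PRONOUNS:
            out.append(random.choice(NEUTRAL_PRONOUNS))   # (a) PRO
        elif name and tok in name_map:
            out.append(name_map[tok])                     # (b) NAME
        elif sem and t in SEM_GENDERED:
            out.append(SEM_GENDERED[t])                   # (c) SEM
        elif addr and t in ADDRESS:
            continue                                      # (d) ADDR: drop
        else:
            out.append(tok)
    return out

# ZERO applies all four ablations; forward selection adds cues back in.
tokens = "Mrs. Rebekah was the younger sister of Lyndon ; she ...".split()
print(ablate(tokens, names=("Rebekah", "Lyndon"),
             pro=True, name=True, sem=True, addr=True))
```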
"We construct examples using the methodology defined above.", "We then conduct annotation experiments using crowdworkers on Amazon Mechanical Turk, following the methodology by which the original GAP corpus was created.", "Footnote 10: Our study was approved by the Microsoft Research Ethics Board. Workers were paid $1 to annotate ten contexts (the average annotation time was seven minutes).", "Because we wanted to also capture uncertainty, we ask the crowdworkers how sure they are in their choices: definitely sure, probably sure, or unsure.", "Figure 2 shows the human annotation results as binary classification accuracy for resolving the pronoun to the antecedent.", "We can see that removing pronouns leads to a significant drop in accuracy.", "This indicates that gender-based inferences, especially social gender stereotypes, play the most significant role when annotators resolve coreferences.", "This confirms the findings of Rudinger et al. (2018) and Zhao et al. (2018a) that human-annotated data incorporates bias from stereotypes.", "Moreover, if we compare ORIG with the columns to its left, we see that names are another significant cue for annotator judgments, while lexical gender cues do not have significant impacts on human annotation accuracy.", "This is likely in part due to the low frequency of lexical gender cues in our dataset.", "Every example has pronouns and names, whereas 49% of the examples have semantically gendered nouns but only 3% of the examples include terms of address.", "We also note that if we compare NAME SEM ADDR to SEM ADDR and NAME ADDR, accuracy drops when removing gender cues.", "Though the differences are not statistically significant, we did not expect this accuracy drop.", "Finally, we find that annotators' certainty values follow the same trend as the accuracy: annotators have a reasonable sense of when they are unsure.", "We also note that accuracy scores are essentially the same for ZERO and PRO, which suggests that once explicit binary gender is gone from pronouns, the impact of any other form of linguistic gender on annotators' decisions is also removed.", "In addition to biases that can arise from the data that a system is trained on, as studied in the previous section, bias can also come from how models are structured.", "For instance, a system may fail to recognize anything other than a dictionary of fixed pronouns as possible referents to entities.", "Here, we analyze prior work in models for coreference resolution in three ways.", "First, we do a literature study to quantify how NLP papers discuss gender.", "Second, similar to Zhao et al. (2018a) and Rudinger et al. (2018), we evaluate five freely available systems on the ablated data from Section 4.", "Third, we evaluate these systems on the dataset we created: Gender Inclusive Coreference (GICoref).", "In our first study, we adapt the approach Keyes (2018) took for analyzing the degree to which computer vision papers encoded trans-exclusive models of gender.", "In particular, we began with a random sample of 150 papers from the ACL Anthology that mention the word gender and coded them according to the following questions: Does the paper discuss coreference resolution? Does the paper study English? L.G: Does the paper deal with linguistic gender (grammatical gender or gendered pronouns)? S.G: Does the paper deal with social gender? L.G ≠ S.G: (If yes to L.G and S.G:) Does the paper distinguish linguistic from social gender? S.G Binary: (If yes to S.G:) Does the paper explicitly or implicitly assume that social gender is binary? S.G Immutable: (If yes to S.G:) Does the paper explicitly or implicitly assume social gender is immutable? They/Neo: (If yes to S.G and to English:) Does the paper explicitly consider uses of definite singular THEY or neopronouns?", "The results of this coding are in Table 1 (the full annotation is in Appendix B).", "[Table 1: counts for each coding question, reported separately for all sampled papers and for coreference papers.]", "We see that, of the 22 coreference papers analyzed, the vast majority conform to a folk theory of language: only 5.5% distinguish social from linguistic gender (despite it being relevant); only 5.6% explicitly model gender as inclusive of non-binary identities; no papers treat gender as anything other than completely immutable; and only 7.1% (one paper!) consider neopronouns and/or specific singular THEY.", "Footnote 11: The most common ways in which papers implicitly assume that social gender is immutable are either 1) by relying on external knowledge bases that map names to gender, or 2) by scraping a history of a user's social media posts or emails and assuming that their gender today matches the gender of that historical record.", "The situation for papers not specifically about coreference is similar (the majority of these papers are either purely linguistic papers about grammatical gender in languages other than English, or papers that do gender recognition of authors based on their writing; May (2019) discusses the (re)production of gender in automated gender recognition in NLP in much more detail).", "Overall, the situation more broadly is equally troubling, and generally also fails to escape the folk theory of gender.", "In particular, none of the differences are significant at a p = 0.05 level except for the first two questions, due to the small sample size (according to an n-1 chi-squared test).", "The result is that although we do not know exactly what decisions are baked into all systems, the vast majority in our study (including two papers by one of the authors (Daumé and Marcu, 2005; Orita et al., 2015)) come with strong gender binary assumptions, and exist within a broader sphere of literature which erases non-binary and binary trans identities.", "Next, we analyze the effect that our different ablation mechanisms have on existing coreference resolution systems.", "In particular, we run five coreference resolution systems on our ablated data: the AI2 system (AI2; Gardner et al., 2017), Hugging Face (HF; Wolf, 2017), which is a neural system based on spaCy, and the Stanford deterministic (SfdD; Raghunathan et al., 2010), statistical (SfdS; Clark and Manning, 2015) and neural (SfdN; Clark and Manning, 2016) systems.", "Figure 3 shows the results.", "We can see that the system accuracies mostly follow the same pattern as the human accuracy scores, though all are significantly lower than the human results.", "Accuracy scores for systems drop dramatically when we ablate out referential gender in pronouns.", "This reveals that these coreference resolution systems rely heavily on gender-based inferences.", "In terms of individual systems, the HF and SfdN systems have similar results and outperform the other systems in most cases.", "SfdD accuracy drops significantly once names are ablated.",
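The per-condition comparison just described can be sketched as follows. This is our illustration only: the `predict` interface is an assumption (each system is wrapped as a callable choosing between the two candidate antecedents), not the systems' actual APIs.

```python
# Resolve each two-candidate MAP example under every ablation condition
# and track per-system, per-condition binary classification accuracy.
def accuracy_by_condition(systems, examples, conditions):
    """systems:    {name: predict(tokens, pronoun_index, cand_a, cand_b) -> 'A'|'B'}
    examples:   dicts with 'tokens', 'pronoun_index', 'a', 'b', 'gold'
    conditions: {condition_name: ablate_fn(tokens) -> tokens}, e.g. ORIG, PRO, ZERO
    """
    scores = {name: {} for name in systems}
    for name, predict in systems.items():
        for cond_name, ablate_fn in conditions.items():
            correct = 0
            for ex in examples:
                toks = ablate_fn(ex["tokens"])
                pred = predict(toks, ex["pronoun_index"], ex["a"], ex["b"])
                correct += (pred == ex["gold"])
            scores[name][cond_name] = correct / len(examples)
    return scores
```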
"These results echo and extend previous observations made by Zhao et al. (2018a), who focus on detecting stereotypes within occupations.", "They detect gender bias by checking whether system accuracies are the same for cases that can be resolved by syntactic cues and cases that cannot, on original data and reversed-gender data.", "Similarly, Rudinger et al. (2018) focus on detecting stereotypes within occupations as well.", "They construct a dataset without any gender cues other than stereotypes, and check how systems perform with the different pronouns THEY, SHE, and HE.", "Ideally, the systems should perform the same in all cases, because there are no gender cues in the sentence.", "However, they find that systems do not work on THEY and perform better on HE than on SHE.", "Our analysis breaks this stereotyping down further, to detect which aspects of gender signals are most leveraged by current systems.", "Finally, in order to evaluate current coreference resolution models in gender-inclusive contexts, we introduce a new dataset, GICoref.", "Here we focused on naturally occurring data, but sampled specifically to surface more gender-related phenomena than may be found in, say, the Wall Street Journal.", "Our new GICoref dataset consists of 95 documents from three types of sources: articles from English Wikipedia about people with non-binary gender identities, articles from LGBTQ periodicals, and fan-fiction stories from Archive of Our Own (with the respective authors' permission).", "Footnote 12: See https://archiveofourown.org ; thanks to Os Keyes for this suggestion.", "These documents were each annotated by both of the authors and adjudicated.", "Footnote 13: We evaluate inter-annotator agreement by treating one annotation as the gold standard and the other as system output and computing the LEA metric; the resulting F1 score is 92%. During the adjudication process we found that most of the disagreements were due to one of the authors missing/overlooking mentions, and rarely due to true disagreement.", "This data includes many examples of people who use pronouns other than SHE or HE (the dataset contains 27% HE, 20% SHE, 35% THEY, and 18% neopronouns), people who are genderfluid and whose names or pronouns change through the article, people who are misgendered, and people in relationships that are not heteronormative.", "In addition, incorrect references (misgendering and deadnaming) are explicitly annotated.", "Two example annotated documents, one from Wikipedia and one from Archive of Our Own, are provided in Appendix C and Appendix D.", "We run the same systems as before on this dataset.", "Table 2 reports results according to the standard coreference resolution evaluation metric LEA (Moosavi and Strube, 2016).", "Table 2: LEA scores on GICoref (incorrect references excluded) with various coreference resolution systems. AI2: precision 40.4%, recall 29.2%, F1 33.9%; HF: precision 68.8%, recall 22.3%, F1 33.6%; SfdD: precision 50.8%, recall 23.9%, F1 32.5%; SfdS: precision 59.8%, recall 24.1%, F1 34.3%; SfdN: precision 59.4%, recall 24.0%, F1 34.2%.", "Since no systems are implemented to explicitly mark incorrect references, and no current evaluation metrics address this case, we perform the same evaluation twice: once with incorrect references included as regular references in the ground truth, and once with incorrect references excluded.",
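The per-pronoun-class mention recall reported below can be computed with a sketch like the following. The pronoun lists are illustrative (the neopronoun set in particular is partial), and the span representation is an assumption of ours.

```python
NEO = {"ze", "zir", "hir", "xey", "xem", "xyr", "ey", "em"}  # partial list

def pronoun_class(word):
    w = word.lower()
    if w in {"he", "him", "his", "she", "her", "hers"}:
        return "binary"
    if w in {"they", "them", "their", "theirs"}:
        return "they"
    if w in NEO:
        return "neo"
    return None

def pronoun_recall(gold_pronouns, predicted_mention_spans):
    """gold_pronouns: iterable of (span, word) for gold referential pronouns;
    predicted_mention_spans: spans the system proposed as mentions.
    Returns recall per pronoun class, ignoring antecedent correctness."""
    pred = set(predicted_mention_spans)
    found, total = {}, {}
    for span, word in gold_pronouns:
        cls = pronoun_class(word)
        if cls is None:
            continue
        total[cls] = total.get(cls, 0) + 1
        found[cls] = found.get(cls, 0) + (span in pred)
    return {c: found.get(c, 0) / total[c] for c in total}
```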
"Footnote 14: According to Clements (2017), deadnaming occurs when someone, intentionally or not, refers to a person who is transgender by the name they used before they transitioned.", "Footnote 15: Thanks to an anonymous reader of a draft version of this paper for this suggestion.", "Due to the limited number of incorrect references in the dataset, the difference between the two evaluations is not significant; here we only report the latter.", "The first observation is that there is still plenty of room for coreference systems to improve: the best-performing system achieves an F1 score of 34%, whereas the Stanford neural system's F1 score on the CoNLL-2012 test set reaches 60% (Moosavi, 2020).", "Additionally, we can see that system precision dominates recall.", "This is likely partially due to poor recall of pronouns other than HE and SHE.", "To analyze this, we compute the recall of each system for finding referential pronouns at all, regardless of whether they are correctly linked to their antecedents.", "We find that all systems achieve a recall of at least 95% for binary pronouns, a recall of around 90% on average for THEY, and a recall of around a paltry 13% for neopronouns (two systems, Stanford deterministic and Stanford neural, never identify any neopronouns at all).", "Our goal in this paper was to analyze how gender bias exists in coreference resolution annotations and models, with a particular focus on how systems may fail to adequately process text involving binary and non-binary trans referents.", "We thus created two datasets: MAP and GICoref.", "Both datasets show significant gaps in system performance, but perhaps more so, show that taking crowdworker judgments as a gold standard can be problematic.", "It may be the case that to truly build gender-inclusive datasets and systems, we need to hire or consult experiential experts (Patton et al., 2019; Young et al., 2019).", "Moreover, although we studied crowdworkers on Mechanical Turk (because they are often employed as annotators for NLP resources), if other populations are used for annotation, it becomes important to consider their positionality and how that may impact annotations.", "This echoes a related finding in the annotation of hate speech that annotator positionality matters (Olteanu et al., 2019).", "More broadly, we found that trans-exclusionary assumptions around gender are commonly (and implicitly) made in NLP papers, a practice that we hope to see change in the future, because it fundamentally limits the applicability of NLP systems.", "The primary limitation of our study and analysis is that it is limited to English.", "This is particularly limiting because English lacks a grammatical gender system, and some extensions of our work to languages with grammatical gender are non-trivial.", "We also emphasize that while we endeavored to be inclusive, our own positionality has undoubtedly led to other biases.", "One in particular is a largely Western bias, both in terms of what models of gender we use and in terms of the data we annotated.", "We have attempted to partially compensate for this bias by intentionally including documents with non-Western non-binary expressions of gender in the GICoref dataset, but the dataset nonetheless remains Western-dominant.", "Additionally, our ability to collect naturally occurring data was limited because many sources simply do not yet permit (or have only recently permitted) the use of gender-inclusive language in their articles.", "This led us to counterfactual text manipulation, which, while useful, is essentially impossible to do flawlessly.", "Moreover, our ability to evaluate coreference systems with data that includes incorrect references was limited as well, because current systems do not mark any forms of misgendering or deadnaming explicitly, and current metrics do not take this into account.",
"Finally, because the social construct of gender is fundamentally contested, some of our results may apply only under some frameworks.", "We hope this paper can serve as a roadmap for future studies.", "In particular, the gender taxonomy we presented, while not novel, is (to our knowledge) previously unattested in discussions around gender bias in NLP systems; we hope future work in this area can draw on these ideas.", "We also hope that developers of datasets or systems can use some of our analysis as inspiration for how one can attempt to measure, and then root out, different forms of bias in coreference resolution systems and NLP systems more broadly.", "The authors are grateful to a number of people who have provided pointers, edits, suggestions, and annotation facilities to improve this work: Lauren Ackerman, Cassidy Henry, Os Keyes, Chandler May, Hanyu Wang, and Marion Zepf all contributed to various aspects of this work, including suggestions of data sources for the GICoref dataset.", "We also thank the CLIP lab at the University of Maryland for comments on previous drafts.", "Footnote 16: We endeavored to represent some non-Western gender identities that do not fall into the male/female binary, including people who identify as hijra (Indian subcontinent), phuying (Thailand, sometimes referred to as kathoey), muxe (Oaxaca), two-spirit (Americas), fa'afafine (Samoa), and mahu (Hawaii)." ]
[ "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "result", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "result", "abstain", "abstain", "method", "other", "other", "other" ]
[ "Multimodal pre-training models, such as LXMERT, have achieved excellent results in downstream tasks.", "However, current pre-trained models require large amounts of training data and have huge model sizes, which make them difficult to apply in low-resource situations.", "How to obtain similar or even better performance than a larger model under the premise of less pre-training data and smaller model size has become an important problem.", "In this paper, we propose a new M ultis tage P re-training (MSP) method, which uses information at different granularities from word, phrase to sentence in both texts and images to pre-train the model in stages.", "We also design several different pre-training tasks suitable for the information granularity in different stage in order to efficiently capture the diverse knowledge from a limited corpus.", "We take a Simplified LXMERT (LXMERT-S), which has only 45.9% parameters of the original LXMERT model and 11.76% of the original pre-training data as the testbed of our MSP method.", "Experimental results show that our method achieves comparable performance to the original LXMERT model in all downstream tasks, and even outperforms the original model in Image-Text Retrieval task.", "Self-attention based Transformer (Vaswani et al., 2017) effectively overcomes the problem of RNN being difficult to run in parallel, and greatly promotes the development of large-scale pre-training models.", "The pre-training language models, such as BERT (Devlin et al., 2019), have achieved excellent performance in many natural language processing tasks.", "With their big success, researchers have also developed pre-training models on multimodal tasks.", "A series of multimodal pre-training models have been proposed, such as ViLBERT (Lu et al., 2019), LXMERT (Tan and Bansal, 2019), UNITER (Chen et al., 2019) etc., and have achieved excellent results in language-vision multimodal tasks.", "However, the current pre-training models are normally with large-scale parameters, require huge pre-training data and have very high demands on computational resources.", "For example, the GPT model (Radford et al., 2018) has 110 Million parameters, GPT-2 (Radford et al., 2019) has 1.5 Billion parameters, and GPT-3 (Brown et al., 2020) has a staggering 175 Billion parameters.", "The same is true for multimodal pre-trained models.", "For example, LXMERT (Tan and Bansal, 2019) has 183.5 Million parameters and requires 816 TitanX GPU hours for training on 9.18 Million text-image pairs.", "The sizes of these models are too huge for them to be deployed in many real-world scenarios.", "Therefore, the study of lightweight pre-training models, which can achieve similar performances to large-scale models with smaller parameter scales and training costs, is significantly valuable.", "There are some types of work on developing lightweight pre-trained models, including the design of the model structure, quantization, pruning and distillation.", "For example, ALBERT (Lan et al., 2020) is a lightweight model through structural design such as parameter sharing and parameter decomposition, and achieves better performance than original models; Q8BERT (Zafrir et al., 2019) compresses the model to 1/4 of the original model but with no more than 1% performance loss by quantizing 32bit floating point into 8bit; (Michel et al., 2019) used BERT weight pruning to compress the model and found that removing a large number of attention heads would not have a major impact on the model performance; TinyBERT (Jiao et 
"All of the above works concern language pre-training models, and most of them address the scale of model parameters.", "There are few works on cutting training data and making multimodal pre-training models lightweight.", "In fact, compared with language models, multimodal pre-training models must deal with data from both the language and visual modalities, which demands larger amounts of data and more computational resources.", "Meanwhile, collecting training data is more difficult.", "Taking as an example the size of text-image pair datasets used for multimodal pre-training, the frequently used MS COCO (Lin et al., 2014) is a high-quality dataset with only 0.82M pairs, while LAIT (Qi et al., 2020) is already a big dataset with 10M pairs, but of average quality.", "Therefore, it is significantly valuable to develop lightweight multimodal pre-training models that can make use of limited data efficiently.", "Existing research on curriculum learning (Bengio et al., 2009) has shown that imitating the process of human learning, by gradually increasing the difficulty of a task from simple to complex in stages, helps to make better use of different types of data and effectively improves learning performance.", "Many models (Qi et al., 2020) use as much data as is available, but few works have addressed how to arrange the tasks to make better use of limited data.", "We therefore borrow the idea of curriculum learning for training pre-training models.", "We construct a pre-training process which makes use of data from smaller units to bigger units in stages, and design appropriate pre-training tasks for each corresponding stage.", "Specifically, we propose a new Multistage Pre-training (MSP) method.", "The first pre-training stage is on token units, where the text input is the category labels of the objects in the images, and the image input is the object features.", "An Image Features Random Shuffle (IFRS) task is designed as a pre-training task for this stage.", "IFRS randomly shuffles the object features, and the model predicts the original object order based on the text information.", "The second stage focuses on phrase units.", "Phrase-level descriptions of the image are input on the text side and image features are input on the image side.", "A Topic of Image and Text for Phrase (TITP) task is designed for it.", "The third stage is sentence-based pre-training.", "Sentence-level captions are input on the text side, and image features are input on the image side.", "A Topic of Image and Text for Sentence (TITS) task is designed for it.", "We take a Simplified LXMERT (LXMERT-S), which has fewer parameters and less pre-training data, as the testbed of our MSP method.", "Experimental results show that our method achieves comparable performance to the original LXMERT model in downstream tasks.", "The main contributions of our work are as follows: (1) We propose a new MSP method that allows the model to learn different granularities of text-image correspondence information at different stages; (2) For each stage, we design pre-training tasks suitable for that stage: the IFRS task for token-based pre-training, the TITP task for phrase-based pre-training, and the TITS task for sentence-based pre-training; (3) With less pre-training data (11.76%), fewer model parameters (45.9%), less resource consumption (25%) and less training time (46.57%), the performance on downstream tasks is comparable to, or even exceeds, that of the original model.",
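A minimal sketch of the staged schedule follows. The stage ordering and per-stage task mixes, epochs, and batch sizes are taken from the method and implementation details reported in this paper; the model, data loaders, task-loss callables, and optimizer handling are placeholders of ours.

```python
# Three-stage MSP schedule: token -> phrase -> sentence, each with its own
# data granularity and pre-training task mix.
STAGES = [
    # (granularity, tasks, epochs, batch size)
    ("token",    ["MLM", "MRFR", "MOC", "IFRS"],           10,  64),
    ("phrase",   ["MLM", "MRFR", "MOC", "TITP"],           20, 128),
    ("sentence", ["MLM", "MRFR", "MOC", "TITS", "ITM_HS"], 20, 128),
]

def multistage_pretrain(model, make_loader, task_losses, optimizer):
    """make_loader(granularity, batch_size) yields stage-specific text-image
    pairs; task_losses maps a task name to loss_fn(model, batch) -> tensor."""
    for granularity, tasks, epochs, batch_size in STAGES:
        loader = make_loader(granularity, batch_size)
        for _ in range(epochs):
            for batch in loader:
                loss = sum(task_losses[t](model, batch) for t in tasks)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
```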
"Multimodal Pre-training Models: Multimodal pre-training models are mainly divided into two categories: single-stream models and two-stream models.", "Single-stream models, such as B2T2 (Alberti et al., 2019) and OSCAR (Li et al., 2020), fuse image and text information at the beginning of the input; two-stream models, such as ViLBERT (Lu et al., 2019) and LXMERT (Tan and Bansal, 2019), first encode the image and text information separately and then fuse them later.", "Generally, two-stream models have more parameters than single-stream models, but whether single-stream or two-stream models perform better, and whether this depends on the specific tasks, requires more rigorous experimental proof.", "We conduct our subsequent experiments based on the two-stream model LXMERT by removing the coding layers of the individual modalities and keeping only the fusion coding layer, so that the simplified LXMERT model is more like a single-stream model.", "Multimodal Pre-training Data: There are several different considerations on making use of data.", "VisualBERT (Li et al., 2019) posits that pre-training on the target dataset can improve the performance of the model, so VisualBERT first pre-trains on COCO Caption and then continues pre-training on the target dataset (e.g., VQA).", "ImageBERT (Qi et al., 2020), on the other hand, is trained first on the out-of-domain LAIT dataset and then on the in-domain datasets, such as Conceptual Captions (CC) (Sharma et al., 2018) and SBU Captions (Ordonez et al., 2011).", "[Figure 1: Overview of our proposed MSP method, including three stages from token-, phrase- to sentence-based pre-training, with appropriate pre-training tasks for each stage: Stage 1 pairs token-level inputs (object category labels) with image objects and trains with IFRS alongside MLM, MRFR and MOC; Stage 2 pairs phrase-level inputs (object attributes) with image objects and trains with TITP and the other tasks; Stage 3 pairs sentence-level inputs with image objects and trains with TITS, ITM_HS and the other tasks.]", "It can be said that the dataset most similar to the downstream task is used for training last, and the general data is used first.", "Clearly, this way of using data is directly related to the downstream tasks.", "Different downstream tasks might lead to a different order of data usage.", "In this paper, we design a staged pre-training from word level to phrase level to sentence level, which is related to the size of the information units.", "We also design suitable pre-training tasks for the different phases to fully exploit the text-image correspondence of different units in each phase, which is consistently effective across different downstream tasks.", "Multimodal Pre-training Tasks: The most widely employed language pre-training task is Masked Language Modeling (MLM) (Chen et al., 2019), where tokens are masked with some probability and those masked tokens are predicted by the model.", "Masked Region Feature Regression (MRFR) (Chen et al., 2019), which is similar to the MLM task, is a popular image pre-training task.", "The Masked Object Classification (MOC) (Qi et al., 2020) task can be regarded as a multimodal pre-training task, which is to predict the category label of each masked object feature.",
"Another popular multimodal pre-training task, called Image-Text Matching (ITM) (Chen et al., 2019), is similar to the Next Sentence Prediction (NSP) task in BERT (Devlin et al., 2019): an image corresponding to a text is randomly replaced with a probability of 50%, and the task is to discriminate whether the image matches the text.", "The existing pre-training tasks for multimodal data are limited.", "We design new pre-training tasks with the aim of making full use of the existing training dataset at different granularities.", "The overall structure of our MSP method is shown in Figure 1.", "The pre-training process is divided into three stages based on different granularities of text-image correspondence, from token and phrase to sentence.", "We design corresponding pre-training tasks for the three stages.", "We perform the above three-stage pre-training on a simplified version of LXMERT (LXMERT-S).", "The simplification of the LXMERT model is shown in Figure 2.", "[Figure 2: LXMERT-S keeps only the Cross-Modality Encoder of LXMERT; the Object-Relationship Encoder and Language Encoder are removed, and the token and image-feature inputs are fed directly into the cross-modality layers.]", "The Cross-Modality Encoder of LXMERT-S is identical to that of LXMERT.", "We obtain the Simplified LXMERT (LXMERT-S) by removing the Object-Relationship Encoder and the Language Encoder.", "The image features and text features are directly input to the Cross-Modality Encoder in LXMERT-S.", "By removing the single-modality coding layers in LXMERT, the 12-layer LXMERT is simplified to a 5-layer LXMERT-S.", "The number of parameters in the simplified LXMERT-S is only 45.9% of the original model, and the whole experiment can be completed on a single GPU.", "The three-stage pre-training method is also fully applicable to other pre-training models.", "The first stage of pre-training focuses on learning the correspondence between text token units and image objects, to help the model mine fine-grained information.", "To this end, we design appropriate pre-training tasks and a corresponding dataset for this phase of pre-training.", "Pre-training Tasks: We design an Image Features Random Shuffle (IFRS) pre-training task to enhance the pre-training of the token layer, used alongside the existing Masked Language Modeling (MLM) (Chen et al., 2019), Masked Region Feature Regression (MRFR) (Chen et al., 2019) and Masked Object Classification (MOC) (Qi et al., 2020).", "Image Features Random Shuffle (IFRS): Given a set of image regions $R = \{r_1, r_2, r_3, \ldots, r_m\}$, which are obtained by adding a fully-connected (FC) layer to the regions of interest (ROIs) and projecting them to the hidden size, a feature triplet is three consecutive features in $R$, e.g., $t_j = (r_i, r_{i+1}, r_{i+2})$.", "A shuffle on a triplet randomly changes the order of the features in the triplet, with a probability of 5%.", "For example, the triplet $t_j$ may be shuffled as $t_j^{[S]} = (r_{i+1}, r_{i+2}, r_i) = (r_i^{[S]}, r_{i+1}^{[S]}, r_{i+2}^{[S]})$.", "The shuffled triplet $t_j^{[S]}$ is used as input to the network, and the corresponding output is converted to the dimensionality of the ROIs to obtain $h(t_j^{[S]}) = (h(r_i^{[S]}), h(r_{i+1}^{[S]}), h(r_{i+2}^{[S]}))$.", "The ROIs extracted by Faster-RCNN corresponding to the original $t_j$ are $f(t_j) = (f(r_i), f(r_{i+1}), f(r_{i+2}))$.", "We use the L2 loss to calculate the distance between the network output $h(t_j^{[S]})$ and $f(t_j)$, as in the following equation: $\mathcal{L} = \mathbb{E}_{t_j \sim D} \, \| h(t_j^{[S]}) - f(t_j) \|_2^2$ (1).",
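The IFRS corruption and loss can be sketched as follows. This is our reading, under the assumption that triplets are non-overlapping groups of three consecutive features (the paper does not pin this down), and uses a mean-squared-error stand-in for the L2 loss in Eq. (1).

```python
import torch
import torch.nn.functional as F

def shuffle_triplets(features, p=0.05, generator=None):
    """IFRS input corruption: `features` is an (m, d) tensor of projected ROI
    features; each consecutive triplet is randomly permuted with probability p."""
    feats = features.clone()
    for i in range(0, feats.size(0) - 2, 3):
        if torch.rand(1, generator=generator).item() < p:
            perm = torch.randperm(3, generator=generator)
            feats[i:i + 3] = feats[i:i + 3][perm]
    return feats

def ifrs_loss(model_outputs, original_rois):
    # Regress the outputs (computed from shuffled inputs) onto the *original*
    # Faster-RCNN ROI features, so the model must recover the object order.
    return F.mse_loss(model_outputs, original_rois)
```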
"Other pre-training tasks: We add the existing MLM, MRFR and MOC tasks to the token-based pre-training.", "MLM masks the token-level category labels of objects with a certain probability P, and the model predicts the masked category label based on the corresponding object feature on the image side.", "MRFR masks the object features, and the model predicts the original object-level features based on the text-side category label and the information around the object.", "Training Data: We extract training data for the IFRS task directly from caption-image pairs.", "For each image, 36 object features and their corresponding 36 category labels are provided by Faster-RCNN.", "These category labels have been unified with the text vocabulary, so they are all included in the text vocabulary.", "During training, the image side inputs the image features in sequence, and the text side inputs the category labels in the corresponding order.", "In the IFRS task, when the image side is shuffled, the order of the text side remains unchanged.", "The previous stage explores the correspondence between image objects and their categories.", "This stage mines the correspondence between the image object and the phrase describing the object.", "Since the phrase description usually contains richer information about the attributes of the object, such as green old car, building a pre-training task based on the correspondence between the phrase and the object allows the model to obtain rich information about the attributes.", "Pre-training Tasks: We define a Topic of Image and Text for Phrase (TITP) pre-training task that more directly supports phrase-based information mining.", "Topic of Image and Text for Phrase (TITP): We are given a token sequence of the image's phrase-level description $W = \{w_1, w_2, w_3, \ldots, w_n\}$, an object feature sequence $R = \{r_1, r_2, r_3, \ldots, r_m\}$, and the corresponding category label sequence $L = \{l_1, l_2, l_3, \ldots, l_m\}$ extracted by Faster-RCNN.", "Let the topic set be $topic = W \cap L = \{p_1, p_2, \ldots, p_q\}$, and the label vector be $Y = (y_1, y_2, \ldots, y_v)$, where $v$ is the size of the vocabulary.", "If the $i$-th vocabulary entry is in the topic set, then $y_i$ is 1; otherwise $y_i$ is 0.", "We add an FC layer to the multimodal representation to get $s(W, R)$, predict the correct topics over the $v$ vocabulary categories, and use BCELoss to calculate the gap between the model output $s(W, R)$ and the label $Y$: $\mathcal{L} = -\mathbb{E}_{(W,R) \sim D} \left[ \frac{1}{v} \sum_{i=0}^{v-1} \big( y_i \log s_i(W,R) + (1 - y_i) \log (1 - s_i(W,R)) \big) \right]$ (2).", "Other pre-training tasks: We add the MLM, MRFR and MOC tasks to the phrase-based pre-training.", "MLM masks the attribute or category information in the phrase with a certain probability P, and the model predicts the masked information based on the corresponding object features.",
"MRFR masks the object features of the image, and the model predicts the original object based on the phrase-level description on the text side and the surrounding object information; MOC predicts the category and attributes of the masked object based on the surrounding image features and the phrase-level description on the text side.", "Training Data: We obtain the corresponding training data from the Visual Genome (VG) (Krishna et al., 2017) dataset, which contains a large number of phrases.", "We eliminate the phrases containing verbs.", "The remaining phrases are concatenated with commas to obtain a phrase-level description of the image.", "During training, the spliced VG phrases are used as input on the text side and the 36 object features extracted by Faster-RCNN are input on the image side.", "3.3 Stage 3: Sentence-based Pre-training.", "On the basis of the above token and phrase training, this stage uses the overall sentence-image correspondence for pre-training, to mine text-image information at a larger unit size.", "Pre-training Tasks: We design two sentence-level pre-training tasks, Image-Text Matching Based on Hard Samples (ITM_HS) and Topic of Image and Text for Sentence (TITS), described as follows.", "Image-Text Matching Based on Hard Samples (ITM_HS): The purpose of this task is to reduce the noise brought to the model when a text-image pair does not match.", "We retrieve the top M most similar images for each image from a difficult-samples file as the hard sample set.", "Footnote 1: The difficult samples come from the difficult-samples file in ViLBERT's Image-Text Retrieval task.", "In the ITM_HS task, each image is replaced with a randomly selected hard sample with probability 50% if its hard sample set is not empty.", "If the set for the current sample is empty, an image from the training set is randomly selected.", "Let the token sequence be $W = \{w_1, w_2, w_3, \ldots, w_n\}$ and the image feature sequence be $R = \{r_1, r_2, r_3, \ldots, r_m\}$; the label $y \in \{0, 1\}$ indicates whether the input image-text pair matches.", "We apply an FC layer on top of the multimodal representation to get $s(W, R)$, which is the matching score of the image and text: $\mathcal{L} = -\mathbb{E}_{(W,R) \sim D} \left[ y \log s(W,R) + (1 - y) \log (1 - s(W,R)) \right]$ (3).", "Topic of Image and Text for Sentence (TITS): The purpose of this task is to jointly predict the content described by both the image and the sentence.", "Given a token sequence $W = \{w_1, w_2, w_3, \ldots, w_n\}$, an image feature sequence $R = \{r_1, r_2, r_3, \ldots, r_m\}$, and category labels for the object features $L = \{l_1, l_2, l_3, \ldots, l_m\}$, let $topic = W \cap L = \{p_1, p_2, \ldots, p_q\}$ and the label vector be $Y = (y_1, y_2, \ldots, y_v)$, where $v$ is the size of the vocabulary.", "If the $i$-th vocabulary entry is in the topic set, then $y_i$ is 1; otherwise $y_i$ is 0.", "We apply an FC layer on top of the multimodal representation, convert its dimension to the vocabulary size $v$ to get $s(W, R)$, and use BCELoss to calculate the gap between the model output $s(W, R)$ and the label $Y$: $\mathcal{L} = -\mathbb{E}_{(W,R) \sim D} \left[ \frac{1}{v} \sum_{i=0}^{v-1} \big( y_i \log s_i(W,R) + (1 - y_i) \log (1 - s_i(W,R)) \big) \right]$ (4).",
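The hard-sample replacement in ITM_HS can be sketched as below. The data structures (a dict mapping image ids to their top-M similar images, and a flat pool of training images) are assumptions of ours for illustration.

```python
import random

def itm_hs_pair(example, hard_negatives, all_images, p_replace=0.5):
    """ITM_HS sampling sketch: with probability 0.5, swap the image for one of
    its top-M most similar ('hard') images; fall back to a random training
    image when the hard-sample set is empty. Returns (text, image, label)."""
    text, image = example["caption"], example["image"]
    if random.random() < p_replace:
        pool = hard_negatives.get(example["image_id"], [])
        negative = random.choice(pool) if pool else random.choice(all_images)
        return text, negative, 0  # mismatched pair
    return text, image, 1         # matched pair
```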
"Other pre-training tasks: We add the existing MLM, MRFR and MOC tasks to the sentence-based pre-training.", "MLM masks information in the sentence, and the model predicts the masked information based on all the information on the image side.", "MRFR masks the object features of the image, and the model predicts the original object based on the overall sentence-level information on the text side and the surrounding object information.", "MOC predicts the category and attribute of the masked object based on the image features and the text-side sentence-level description.", "Training Data: In this stage, the image and its corresponding caption are directly used as input; the sentence-level caption is input on the text side, and the 36 object features provided by Faster-RCNN are input on the image side.", "In this paper, the model is pre-trained using the COCO dataset and part of the VG dataset, with only 1.08M text-image pairs in total: 0.12M image-text pairs in the token-based pre-training stage, 0.34M in the phrase-based pre-training stage, and 0.62M in the sentence-based pre-training stage.", "All the datasets we use are also used in the original LXMERT.", "Table 1 gives a comparison of the pre-training data and model parameters.²", "² We exclude the parameters of the word embedding and the pre-training task heads, and only count the number of parameters in the Transformer part.", "Visual Question Answering (VQA): There are multiple datasets for VQA.", "We use three commonly used datasets: VQA V2.0 (Goyal et al., 2017), GQA (Hudson and Manning, 2019), and NLVR2 (Suhr et al., 2019).", "Accuracy is used to measure model performance.", "Cross-modal Retrieval task: We choose the Flickr30K (Young et al., 2014) dataset as the retrieval task data, and evaluate the performance of the model on Image Retrieval (IR), Text Retrieval (TR), Zero-Shot Image Retrieval (ZS-IR), and Zero-Shot Text Retrieval (ZS-TR); the performance metric is the matching score of text and image pairs.", "Zero-shot evaluation measures the pre-trained model directly on the test set without fine-tuning, and is used to assess the effect of pre-training.", "Therefore, ZS-IR and ZS-TR load the pre-trained model parameters directly to perform the IR and TR tasks without fine-tuning.", "In the fine-tuning stage, the multimodal representation of the model is passed through a FC layer as a joint representation of image and text to solve the downstream tasks.", "For VQA tasks, we project the multimodal representation onto the answer category dimension through the FC layer to predict the answer to each question.", "For the Image-Text Retrieval (Young et al., 2014) task, we randomly replace the image or the text to construct three negative examples for each image-text pair, including two random negative examples and one hard sample, and use BCELoss to calculate the difference between the matching score and the text-image matching label.",
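As an illustration of this retrieval fine-tuning setup, here is a minimal sketch of one plausible reading of the negative-example construction (two random negatives plus one hard sample per positive pair); the helper's signature and the `hard_samples` mapping are assumptions, not the released code.

```python
import random

def build_retrieval_examples(image, caption, all_images, all_captions, hard_samples):
    """Return the positive pair plus three negatives: two random ones and,
    when available, one hard sample (a visually similar image)."""
    examples = [(image, caption, 1)]                           # matching pair, label 1
    examples.append((random.choice(all_images), caption, 0))   # random image negative
    examples.append((image, random.choice(all_captions), 0))   # random caption negative
    hard_set = hard_samples.get(image, [])                     # top-M similar images
    hard = random.choice(hard_set) if hard_set else random.choice(all_images)
    examples.append((hard, caption, 0))                        # hard negative
    return examples
```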
"We compare our model with both single-stream multimodal pre-training models, including Unified VLP (Zhou et al., 2020), VisualBERT (Li et al., 2019) and VL-BERT (Su et al., 2020), and two-stream models, including ViLBERT (Lu et al., 2019) and LXMERT (Tan and Bansal, 2019).", "Unified VLP: Unified VLP uses a 12-layer shared transformer network for both encoding and decoding, which differs from many existing methods where the encoder and decoder are implemented as separate models.", "It is pre-trained on Conceptual Captions (CC) (Sharma et al., 2018), which has around 3.3 million image-text pairs, and requires 150 hours of training on 8 V100 GPUs.", "Unified VLP includes only the MLM task when processing the comprehension tasks.", "VisualBERT: VisualBERT contains 12 layers of transformer with 85.05M parameters.", "It first pre-trains on COCO Captions (Lin et al., 2014) with the MLM and ITM tasks, and then continues pre-training on the target dataset with the MLM task.", "The pre-training data sizes for VisualBERT on the VQA V2.0 task are shown in Table 1.", "For different downstream tasks, the second stage of pre-training needs to be re-trained.", "VL-BERT: VL-BERT contains 12 layers of transformer with 134.8M parameters.", "It pre-trains on both visual-linguistic and text-only datasets.", "Samples are randomly drawn from CC and from BooksCorpus (Zhu et al., 2015) & English Wikipedia (at a ratio of 1:1) in each mini-batch.", "VL-BERT considers ITM harmful to downstream tasks and therefore includes only the MLM and MOC tasks.", "Table 3: LXMERT-S results on the Image-Text Retrieval task (Flickr30K); each group of columns is R@1/R@5/R@10, and percentages in parentheses give our score relative to ViLBERT.
model    | ZS-IR R@1/R@5/R@10 | ZS-TR R@1/R@5/R@10 | IR R@1/R@5/R@10                           | TR R@1/R@5/R@10
ViLBERT  | 31.86/61.12/72.80  | -/-/-              | 58.20/84.90/91.52                         | -/-/-
LXMERT   | 24.00/47.38/58.22  | 23.60/51.50/61.30  | -/-/-                                     | -/-/-
ours     | 42.42/68.70/77.92  | 49.00/75.00/81.80  | 57.90 (99.4%)/83.00 (97.8%)/88.70 (97.0%) | 64.60/87.50/90.40", "ViLBERT: ViLBERT extends the popular BERT architecture to a multi-modal two-stream model, processing visual and textual inputs in separate streams that interact through co-attentional transformer layers.", "It trains on CC with the MLM, MOC and ITM tasks.", "LXMERT: LXMERT is a large-scale Transformer model that consists of three encoders, pre-trained on large-scale data including MS COCO, Visual Genome, VQA v2.0, GQA and VG-QA (Zhu et al., 2016).", "Its pre-training requires 8.5 days on 4 Titan X GPUs.", "It also uses many pre-training tasks, including MLM, MRFR, MOC, ITM and Image Question Answering (QA) (Tan and Bansal, 2019), and has achieved good results on downstream tasks, especially VQA.", "Our Transformer backbone is the same as LXMERT's, where each Transformer block has 768 hidden units and 12 attention heads.", "Image features are extracted by a Faster-RCNN (Ren et al., 2015) model (with a ResNet-101 (He et al., 2016) backbone) trained on Visual Genome (VG).", "During pre-training, our model is trained for about 95 hours on a single Titan X GPU, and uses Adam (Kingma and Ba, 2015) as the optimizer with a learning rate of 1e-5.", "We train the token-based model for 10 epochs with a batch size of 64, the phrase-based model for 20 epochs with a batch size of 128, and the sentence-based model for 20 epochs with a batch size of 128.", "During fine-tuning, the learning rate for all downstream tasks is 5e-5, and the batch size is 32.", "We fine-tune for 6 epochs on VQA V2.0, 5 epochs on GQA, and 8 epochs on NLVR2 and the Image-Text Retrieval task.", "For hard samples in the ITM HS task, we retrieve the top 100 most similar images from the difficult-samples file.", "For the masking strategies, we randomly mask 15% of tokens and 15% of object features.", "The code of our models is available at https://github.com/lttsmn/LXMERT-S.", "Table 2 gives the results of the model on the three VQA datasets, and Table 3 gives the results on the Flickr30K Image-Text Retrieval dataset.", "It can be seen from Tables 2 and 3 that the pre-training model proposed in this paper achieves
comparable performance to the existing large models, with less training data, fewer parameters and lower computing resource consumption.", "In some cases, our small model even outperforms the big ones.", "For example, on the NLVR2 task our model is 0.22 points higher than LXMERT on Test-P, and on ZS-IR it is 18.42 points higher than LXMERT in R@1, while the model parameters are reduced by 54.1% and the training dataset is reduced by 88.24%.", "Table 4 gives the results of LXMERT-S on different tasks with different pre-training settings.", "Table 4: Evaluation of the MSP method and the pre-training tasks on the VQA, GQA, NLVR2 and Image-Text Retrieval (Flickr30K) downstream tasks.
Stage(s) count | Stage(s) used | Tasks used                         | VQA testdev | GQA testdev | NLVR2 test-p | IR avg | ZS-IR avg | TR avg | ZS-TR avg
None   | vanilla | None                                     | 68.10 | 55.71 | 51.07 | 55.27 | -     | 58.07 | -
Single | S       | MLM MRFR MOC TITS ITM_HS                 | 70.25 | 57.66 | 70.23 | 73.86 | 54.64 | 78.30 | 59.73
Single | S       | −ITM_HS                                  | 69.87 | 57.48 | 70.98 | 71.46 | 51.79 | 75.60 | 53.33
Single | S       | −ITM_HS −TITS                            | 69.79 | 57.47 | 70.73 | 70.17 | 49.38 | 74.10 | 50.30
Single | T+P+S   | MLM MRFR MOC ITM                         | 70.10 | 57.58 | 72.24 | 74.31 | 59.31 | 77.87 | 63.33
Two    | T→S     | MLM MRFR MOC TITS ITM_HS IFRS            | 70.71 | 58.39 | 73.85 | 75.68 | 61.53 | 80.27 | 65.47
Two    | T→S     | −IFRS                                    | 70.54 | 58.40 | 73.93 | 76.08 | 60.50 | 80.60 | 65.03
Two    | P→S     | MLM MRFR MOC TITS ITM_HS TITP            | 70.58 | 57.96 | 72.96 | 74.81 | 57.66 | 79.50 | 61.13
Two    | P→S     | −TITP                                    | 70.52 | 58.17 | 71.18 | 75.49 | 59.28 | 80.40 | 62.73
Three  | T→P→S   | MLM MRFR MOC TITS ITM_HS IFRS TITP       | 71.10 | 58.70 | 74.72 | 76.55 | 63.01 | 80.83 | 68.60
Three  | T→P→S   | −TITP                                    | 71.01 | 58.30 | 74.48 | 76.07 | 63.07 | 80.96 | 67.77
Three  | S→P→T   | MLM MRFR MOC TITS ITM_HS IFRS TITP       | 69.43 | 57.98 | 56.75 | 71.03 | -     | 74.87 | -
Three  | P→T→S   | MLM MRFR MOC TITS ITM_HS IFRS TITP       | 70.92 | 58.05 | 73.62 | 76.69 | 61.29 | 81.63 | 67.00", "The first column gives the number of stages in pre-training.", "The second column gives the stage(s) used, where S stands for the sentence stage, P for the phrase stage, and T for the token stage; T→S means there are two stages, token-based pre-training first and then sentence-based pre-training.", "T→P→S means there are three stages: token-based pre-training first, then phrase-based pre-training, and sentence-based pre-training last.", "T+P+S means all stages are trained together.", "The third column gives the pre-training tasks used.", "We first list all the pre-training tasks used in the stages involved, and then verify the validity of each pre-training task by removing it from the full task set; '−' before a task name indicates that the task is removed.", "From Table 4, we can find: (1) with the orderly increase of training stages, the performance of the model on downstream tasks gradually improves; (2) training granularity from small to large is the most effective training sequence; (3) the pre-training tasks we propose for each stage improve the performance of the model on downstream tasks; for example, TITP improves VQA performance by 0.09, GQA by 0.40, NLVR2 by 0.24, IR by 0.48, and ZS-TR by 0.83.", "We visualize the impact of the different pre-training stages on the VQA and Image-Text Retrieval tasks by showing the answer probability distributions.", "For each example in Figure 3, the left side is the input image of the model, and the right side is the probability distribution of the top-3 scoring answers at different pre-training stages.", "For the Image-Text Retrieval task, we select the top-1 caption for visualization.", "For each sample in Figure 4, the left side is the input image and the right side is the highest-scoring caption predicted by the model.", "From Figures 3 and 4, we can find: (1) token-based pre-training (S vs T→S)
helps the model to learn object information in the images.", "For example, in the left samples in Figures 3 and 4, adding token-based pre-training improves performance on the downstream tasks by making the model focus on object information such as the horses, the man and the rocks in the images; (2) phrase-based pre-training (T→S vs T→P→S) helps the model to learn information about the attributes of the objects.", "As shown in the right-hand images in Figures 3 and 4, the model pays attention to attribute information, i.e. the blanket is white, the clothes are pink, etc.", "6 Conclusion In this paper, inspired by the idea of curriculum learning, we propose an MSP method, which uses information at different granularities, from word and phrase to sentence, in both texts and images to pre-train a model in stages.", "We also design pre-training tasks suitable for each stage: the IFRS task for token-based pre-training, the TITP task for phrase-based pre-training, and the TITS task for sentence-based pre-training.", "Experimental results on several VQA datasets as well as one cross-modal retrieval dataset show that our method achieves similar or even better accuracy than a larger model on all downstream tasks, while the model parameters are reduced by 54.1% and the training dataset is reduced by 88.24%.", "In future work, we will apply the above training method to other simplified pre-trained models to further explore the effectiveness of the MSP method.", "We would like to thank the anonymous reviewers for their suggestions and comments.", "The work was supported by the National Natural Science Foundation of China (NSFC62076032), the Cooperation Project with Beijing SanKuai Technology Co., Ltd. and the National Key Research and Development Program of China (2020YFF0305302).", "We would like to thank Dr. Huixing Jiang and his colleagues." ]
[ "abstain", "abstain", "abstain", "objective", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "method", "method", "other", "other", "other", "other", "other", "objective", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "other", "other", "other" ]
[ "Language modeling is the technique to estimate the probability of a sequence of words.", "A bilingual language model is expected to model the sequential dependency for words across languages, which is difficult due to the inherent lack of suitable training data as well as diverse syntactic structure across languages.", "We propose a bilingual attention language model (BALM) that simultaneously performs language modeling objective with a quasi-translation objective to model both the monolingual as well as the cross-lingual sequential dependency.", "The attention mechanism learns the bilingual context from a parallel corpus.", "BALM achieves state-of-the-art performance on the SEAME code-switch database by reducing the perplexity of 20 .", "5% over the best-reported result.", "We also apply BALM in bilingual lexicon induction, and language normalization tasks to validate the idea.", "Monolingual language modeling has enabled many NLP tasks (Devlin et al., 2019; Dai et al., 2019; Radford et al., 2019).", "However, the bilingual language model was not well studied.", "The recent advances in cross-lingual word embedding (CLWE) (Ruder et al., 2019), which projects word of different languages into a shared embedding space for cross-lingual representations (Devlin et al., 2019; Lample and Conneau, 2019), make possible some cross-lingual applications.", "Unfortunately, they are not optimized to model the sequential dependency for word prediction in a bilingual text.", "In this paper, we would like to propose a bilingual language model that can learn word embeddings to represent the equivalent words between two languages, and more importantly, to model the sequential dependency for words across languages at the same time.", "For instance, the model should be able to predict the appropriate word to fill in the blank, given the bilingual context: movie ( ).", "1 The above sentence is an example of codeswitching or code-mixing (henceforth, CS), where a bilingual speaker alternates words of two or more languages within a single sentence.", "The switches could happen at sentence boundaries or word boundaries and for some agglutinative languages even within words.", "Code-switching is common in both spoken and, to some extent, written communication in many multilingual societies, such as Southeast Asia.", "Hence, the study of code-switch in linguistics and bilingual language modeling is becoming imperative, especially for NLP tasks such as code-switching automatic speech recognition (ASR) (Adel et al., 2013b; Li and Fung, 2013; Lee et al., 2019), cross-lingual language normalization.", "It is tempting to think that, given enough of codeswitching text data, bilingual language modeling could be approached in the same way as that for monolingual data.", "The main challenge is the lack of such CS data.", "We note that CS mainly occurs in the spoken form, and CS does not occur in every sentence.", "Therefore, collecting enough pure CS data is just not practical or even feasible (Lee et al., 2017; Pratapa et al., 2018).", "The problem is further exacerbated by the syntactic constraints of the two diverse languages, such as Chinese and English.", "Three dominant theories seek to explain the syntactic formation of CS sentences.", "They are the Matrix Language Frame theory (Myers-Scotton, 1997), which shows that individual monolingual sentences will conform to the grammar of the matrix language.", "The Equivalence Constraint theory (Poplack, 2000; Sankoff, 1998), which further constrains the intra-sentential CS 
points to the syntactic boundaries shared by both languages; and the Functional Head Constraint theory (Di Sciullo et al., 1986; Belazi et al., 1994), which imposes constraints on the functional head and its complements.", "¹ English: The movie last night ( )", "A bilingual language model should be able to predict a word, either in the matrix language or otherwise, given either a bilingual or a monolingual context.", "Therefore, it has to respect the respective monolingual word sequential dependency, the cross-lingual word correspondence, as well as the switching rules between languages.", "The contributions of this paper are summarized as follows: 1. We propose an attention-based, autoregressive model, the bilingual attention language model (BALM), that not only learns the latent alignment from a parallel corpus for cross-lingual word embedding but also captures the word sequential dependency.", "2. Adhering to the Matrix Language Frame theory (Myers-Scotton, 1997) and the Equivalence Constraint theory (Poplack, 2000; Sankoff, 1998), we implement an objective function that jointly optimizes the cross-entropy loss as the monolingual constraint and the quasi-translation loss as the cross-lingual constraint.", "3. We show that BALM can learn from bilingual parallel data without the need for CS data.", "When adapted on CS data, it outperforms the best reported result on the SEAME dataset in the perplexity test.", "We also successfully apply BALM to bilingual lexicon induction and language normalization tasks to validate the idea.", "Several prior studies related to bilingual language modeling are the inspiration for this work.", "Cross-lingual correspondence: Several studies focus on projecting words of different languages onto a common embedding space to establish cross-lingual correspondence.", "One idea is to train a model using bilingual information from corpora aligned at the sentence level (Zou et al., 2013; Hermann and Blunsom, 2014; Luong et al., 2015) or the document level (Vulic and Moens, 2016; Levy et al., 2017).", "Another is to exploit isomorphic structure (Conneau et al., 2017; Artetxe et al., 2018), dictionaries (Mikolov et al., 2013; Faruqui and Dyer, 2014; Huang et al., 2015; Zhang et al., 2016), shared cognates and vocabulary (Hauer et al., 2017; Smith et al., 2017), or numerals (Artetxe et al., 2017) through ad-hoc projection.", "As the above approaches do not explicitly consider the sequential dependency of words, the embedding does not encode word-ordering information.", "Multilingual techniques such as M-BERT (Devlin et al., 2019) and XLM (Lample and Conneau, 2019) do not explicitly model the syntactic constraints for CS as formulated in the Equivalence Constraint theory, thus not making full use of information that could potentially improve their performance.", "Code-switching modeling: Another school of thought extends monolingual language modeling techniques to accommodate code-switched content.", "Adel et al.
(2013b, 2014) use factored language models and recurrent neural network (RNN) language models to improve the bilingual language model for CS ASR rescoring.", "They include additional linguistic information such as part-of-speech tags and language identifiers to improve model generalization.", "Inversion constraints (Li and Fung, 2013) and Functional Head constraints (Li and Fung, 2014) have also been used in language models for the ASR decoding process.", "Lee and Li (2019) use cross-lingual embedding to tie the input and output layers, and incorporate classes in the RNN language model.", "While these models are effective, they rely on the availability of CS training data.", "Therefore, they are not easily scalable.", "To address this, we propose a way to make use of the existing abundant parallel corpora.", "The method is explained in Section 3.3.", "Code-switching text generation: Closer to our line of research, Pratapa et al. (2018) propose to use synthetic data following the Equivalence Constraint theory, while Lee et al. (2019) apply the Matrix Language Frame theory.", "In their works, a parser or an aligner is required to process the parallel corpus, followed by the standard monolingual language modeling process.", "Such techniques suffer from inaccurate alignment or parsing errors.", "These errors are carried forward when training the language model.", "More recently, Winata et al. (2019) propose a technique to generate neural-based synthetic data from parallel sentences, in which a Pointer-Gen network is used to synthesize CS data without an external aligner or parser.", "In this paper, we propose to learn the bilingual context and the CS language model jointly by attending to the parallel sentences directly, without the need for an external aligner or parser and without explicitly generating synthetic data.", "Next, we discuss the motivation and the theoretical formulation of the proposed Bilingual Attention Language Model (BALM).", "In a bilingual text, we may encounter a sequence of words $\mathbf{w} = w_1^{l_1}, w_2^{l_2}, \dots, w_t^{l_2}, \dots, w_T^{l_1}$, code-mixed between languages $l_1$ and $l_2$.", "However, such code-mixed training data are not easily available.", "Let us assume that only a sentence-level parallel corpus between the $l_1$ and $l_2$ languages is available to us.", "Assuming the validity of the Matrix Language Frame theory and the Equivalence Constraint theory, the above code-switch sentence $\mathbf{w}$ can be constructed from two parallel sentences, $\mathbf{w}^{l_1} = w_1^{l_1}, w_2^{l_1}, \dots, w_{T_1}^{l_1}$ and $\mathbf{w}^{l_2} = w_1^{l_2}, w_2^{l_2}, \dots, w_{T_2}^{l_2}$.", "For the monolingual case, the language model maximizes the log-likelihood of $p(w_t \mid \mathbf{w}_{<t})$, which effectively captures the monolingual word sequential dependency.", "For the CS case, we would like to maximize $p(w_t \mid \mathbf{w}_{<t})$, whereby the bilingual context $\mathbf{w}_{<t}$ is non-existent during training.", "In the subsequent section, we explain the idea of encoding the bilingual context using an attention mechanism.", "A bilingual language model has to be built on a common word representation.", "Continuous-space word embedding is an effective solution.", "We first draw some principled insights from the cross-lingual word embedding (CLWE) literature, which motivates this work.", "Building on the idea of CLWE, we refer to the general form of the loss function $J$ summarized by Ruder et al.
(2019) as follows: $J = \mathcal{L}(\mathbf{X}^{l_1}) + \mathcal{L}(\mathbf{X}^{l_2}) + \Omega(\mathbf{X}^{l_1}, \mathbf{X}^{l_2}, A)$ (1).", "The monolingual language constraint $\mathcal{L}$, which could be implemented with negative sampling, preserves the monolingual integrity.", "Importantly, there has to be a cross-lingual constraint $\Omega$, which could be the mean squared error (MSE) between the $l_2$ embedding space $\mathbf{X}^{l_2} = \{\mathbf{x}_i^{l_2}\}$ and the transformed $l_1$ embedding space $\mathbf{X}^{l_1} = \{\mathbf{x}_i^{l_1}\}$.", "We use $\mathbf{x}_i$ to denote the embedding of a word $w_i$, which is also referred to as a token.", "The vocabulary size is $v$.", "The cross-lingual constraint maps the two monolingual embeddings into a common space using the transformation matrix $A$: $\Omega_{\mathrm{MSE}} = \sum_{i=1}^{v} \lVert A \mathbf{x}_i^{l_1} - \mathbf{x}_i^{l_2} \rVert$ (2).", "The CLWE network can also be jointly learned (Luong et al., 2015), with the alignment information as the regularization loss $\Omega$.", "While CLWE lays the foundation for many cross-lingual applications, it is not designed to model word sequential dependency.", "We draw inspiration from the CLWE loss function and extend the objective to the modeling of word sequential dependency while preserving its general form.", "The monolingual objective $\mathcal{L}(\mathbf{X}^{l})$ is set to be the cross-entropy loss between the target distribution $\mathbf{y}^{l}$ and the predicted distribution $\log p(w_t^{l} \mid \mathbf{w}_{<t}^{l})$ for the respective language: $\mathcal{L}(\mathbf{X}^{l}) = -\sum_{t} \mathbf{y}_t^{l} \log p(w_t^{l} \mid \mathbf{w}_{<t}^{l})$ (3), which preserves the monolingual word sequential order.", "This allows the bilingual language model to adhere to the monolingual syntactic rules of the Matrix Language Frame and Equivalence Constraint theories during word prediction: the dominant language still abides by its own syntactic principles.", "We also define a quasi-translation loss $\Omega$ that optimizes the model to learn both the correspondence of tokens between languages and the dependency between the current token in $l_1$ and the preceding context in $l_2$.", "The quasi-translation loss can be interpreted as satisfying the code-switching principles described by the two theories.", "Equation 4 gives the quasi-translation loss for $l_1 l_2 \rightarrow l_1$, i.e. predicting a word in $l_1$ given a bilingual context: $\Omega_{l_1 l_2 \rightarrow l_1} = -\sum_{t} \mathbf{y}_t^{l_1} \log p(w_t^{l_1} \mid \mathbf{w}^{l_2}, \mathbf{w}_{<t}^{l_1})$ (4).", "Similarly, we have $l_1 l_2 \rightarrow l_2$ to predict a word in $l_2$.", "Motivated by the self-attention model (Vaswani et al., 2017), we hypothesize that an autoregressive translation-cum-language modeling objective can leverage parallel sentences to learn the bilingual context.", "To start with, let us consider a monolingual case that deals with $l_1$.", "We define a transformer language model $f$ using a causal mask (Radford et al., 2019).", "[Figure 1: Illustration of the bilingual attention in BALM; the attention weights $a_{n,m}$ connect each token to its preceding context and to the parallel sentence in the other language.]", "The model can be broken down into individual layers $n$ out of a total of $N$ layers: $f_1^n = \mathrm{Attention}(\mathbf{x}_{<t}^{l_1})$, $f_2^n = \mathrm{FeedForward}(f_1^n)$, $f^n = f_2^n \circ f_1^n$.", "The model takes in the embedding $\mathbf{x}_t^{l_1} = \mathrm{embed}(w_t^{l_1})$ of each word $w_t^{l_1}$ in $l_1$ at the first layer $f_1^1$, and the output encodes the contextual information as a weighted sum of the preceding context: $f^1 = f_2^1(\mathrm{Attention}(\mathbf{x}_{<t}^{l_1}))$.", "In this way, the output of the last layer $f_2^N$ contains the information necessary for decoding $p(w_t^{l_1} \mid \mathbf{w}_{<t}^{l_1})$.", "This process is carried out on each monolingual side of the parallel data, for $l_1$ and $l_2$ respectively, to minimize the loss function in Equation 3.",
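To make the layer-wise formulation concrete, here is a minimal sketch of the monolingual objective (Equation 3) with a causal transformer; the module names and the 384-dimensional setting (taken from Section 4.2) are our illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class CausalLM(nn.Module):
    """Autoregressive transformer LM: each position attends only to the
    preceding context, i.e. f^n = FeedForward(Attention(x_{<t}))."""
    def __init__(self, vocab_size: int, dim: int = 384,
                 layers: int = 12, heads: int = 12):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)   # no positional embedding (Section 3.5)
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)
        self.decode = nn.Linear(dim, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        T = tokens.size(1)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.encoder(self.embed(tokens), mask=causal)   # last-layer output f^N_2
        return self.decode(h)                               # logits for p(w_t | w_<t)

def monolingual_loss(model: CausalLM, sent: torch.Tensor) -> torch.Tensor:
    """Cross-entropy of Equation 3: predict token t from tokens < t."""
    logits = model(sent[:, :-1])
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), sent[:, 1:].reshape(-1))
```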
"Extending the context to include words in the other language, we enable the model to learn from a bilingual context, as shown in Figure 1a.", "The question is how to find the appropriate context in both $l_1$ and $l_2$ to predict a word in $l_2$.", "The attention mechanism with the quasi-translation loss provides a solution.", "Figure 1a is an illustration of the $l_1 l_2 \rightarrow l_2$ training case.", "At the last layer, the encoded output for time step $t$ in $l_2$ is $f_2^N(\mathrm{Attention}(\mathbf{x}^{l_1}, \mathbf{x}_{\le t}^{l_2}))$.", "It is important to note that the model architecture allows learnable alignment between the current word $\mathbf{x}_t$ and its preceding context in its own language $l_2$, as well as the whole sentence translation $\mathbf{x}^{l_1}$ in $l_1$.", "The use of the preceding context can be seen as an autoregressive process over the words in a sentence.", "As the predicted word always follows its preceding context sequentially, the word order in the matrix language matters in BALM.", "However, the attention mechanism does not attempt to distinguish word order within the encoded context, which is a weighted sum of the bilingual context (see the discussion in Section 3.5).", "This can be observed in the quasi-translation loss, as formulated in Equation 4.", "3.4 Training and Inference During training, we use the two sides of the parallel corpus independently as two monolingual corpora, and both sides together for the bilingual constraint.", "When presented with monolingual text in $l_1$ or $l_2$, the network learns to attend to the words in either $l_1$ or $l_2$ using a causal mask for monolingual word prediction.", "When presented with $l_1$-$l_2$ parallel sentences and predicting a word in $l_1$ or $l_2$, the network learns to attend to the bilingual context for word prediction.", "To summarize, given a parallel corpus, BALM is trained with four input-output pairs: $l_1 \rightarrow l_1$, $l_2 \rightarrow l_2$, $l_1 l_2 \rightarrow l_1$, and $l_1 l_2 \rightarrow l_2$.", "The bilingual attention in theory allows BALM to take any of $l_1$, $l_2$ or $l_1 l_2$ as input, and generate any of $l_1$, $l_2$ or $l_1 l_2$ as output, in six possible combinations.", "$l_1 l_2 \rightarrow l_1, l_2$ represents the code-switch language modeling task of our interest.", "For brevity, we only illustrate the case of $l_1 l_2 \rightarrow l_2$ in Figure 1a.", "At run-time inference, we do not have the two parallel sentences, but rather a code-switch sentence that consists of a mixture of words $\mathbf{w}_{<t}$ from the two languages, as in Figure 1b.", "To predict $p(w_t^{l_2} \mid \mathbf{w}_{<t})$ for a code-switch sentence at run time, we assume that the model has encountered some variants of the bilingual context through $\mathrm{Attention}(\mathbf{x}^{l_1}, \mathbf{x}_{<t}^{l_2})$.", "In this way, the model can estimate the run-time probability according to the similarity between the encoding of the code-switch sequence $\mathbf{w}_{<t}$ and the learned bilingual representation.", "The attention-based alignment is expected to find the appropriate bilingual context that was trained under the objective function to maximize $p(w_t^{l_2} \mid \mathbf{w}^{l_1}, \mathbf{w}_{<t}^{l_2})$.", "In stark contrast to the masked language model (MLM), which employs positional embedding on top of its sequence-ordering-invariant setup, BALM does not use positional embedding.", "We argue that, under the autoregressive objective, positional embedding is not necessary.", "In BALM, the amount of information in an autoregressive setup is strictly increasing.", "Taking one of its intermediate layers as an example, the hidden representation for the current token $h_t$ is the weighted sum of the previous tokens, and the weights are computed through the learned query and key matrices $A^Q$ and $A^K$.",
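The following is a minimal sketch of this argument: under a causal mask, the weights $a_{n,m}$ come from query-key similarity over a strictly growing prefix, so no positional encoding is added. The single-head, unbatched form is our simplification for illustration.

```python
import torch

def causal_attention(x: torch.Tensor, A_q: torch.Tensor,
                     A_k: torch.Tensor, A_v: torch.Tensor) -> torch.Tensor:
    """Single-head causal self-attention without positional embedding.

    x: (T, dim) token embeddings. For each position n, h_n is a weighted
    sum over positions m <= n, with a_{n,m} = softmax(q_n . k_m)."""
    q, k, v = x @ A_q, x @ A_k, x @ A_v
    scores = q @ k.t() / k.size(-1) ** 0.5                    # (T, T) similarities
    mask = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(mask, float('-inf'))          # only attend to the past
    return torch.softmax(scores, dim=-1) @ v                  # h_t: weighted sum
```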
"In comparison, in an RNN layer the hidden state is a gated sum of the previous hidden states, i.e. $h_t = \tanh(W_h h_{t-1} + W_x x_t)$; the difference is that the weight matrix $W_h$ of the RNN is applied to the gated sum $h_{t-1}$ at each time step, while the weight $a_{n,m}$ of the attention model is a similarity comparison between the current token's query and the previous tokens' keys.", "The two networks are similar in the sense that they both compute weights and incorporate past information.", "They only differ in their implementation.", "We argue that the sequential information is already included in the attention model under an autoregressive setup; thus the positional encoding is not necessary.", "This is corroborated by Irie et al. (2019), who show that the removal of positional encoding slightly improves language model performance.", "By dropping the positional embedding, we can mix the bilingual context, as discussed in Section 3.3.", "We evaluate the language models on the text transcripts of the South East Asia Mandarin-English (SEAME) corpus (LDC2015S04) (Lee et al., 2017), a well-documented database of spontaneous conversational code-switching speech between Mandarin Chinese (ZH) and English (EN).", "A large number of CS studies have been reported on SEAME.", "We adopt a slightly different setup, as we focus on how BALM is able to learn from a parallel corpus alone without the need for CS training data.", "We use the SEAME data mainly for adaptation and evaluation.", "We split the SEAME Phase II text transcripts equally into three portions, labeled as Adapt, Valid and Test respectively in Table 1.", "This split also ensures that the individual components within the Test data, e.g. Test EN, are of sufficient size.", "Additionally, we also split the dataset following approximately the same proportions as in the previous works (Winata et al., 2019; Lee et al., 2019) for fair benchmarking, labeled as Train, Dev, and Eval respectively.", "We use a random split of 1.1M/60.8K/60.3K tokens for Train/Dev/Eval, as compared to 1.2M/65K/60K in the previous works.", "We use a bilingual parallel corpus from TED and OpenSubtitles (Tiedemann, 2012; Lison and Tiedemann, 2016) for BALM training, because they are text transcripts of spontaneous speech similar to SEAME.", "The English text is tokenized using the NLTK tokenizer (Bird et al., 2009), while the Chinese text is tokenized using the Stanford Word Segmenter (Chang et al., 2008).", "We also develop a test set of 200 sentences for the language normalization experiments, labeled as SEAME Norm.", "We conduct a series of experiments, namely BALM, Synthetic CS, CS-Only, and Mono, using the same BALM network architecture to evaluate different modeling strategies.", "During training, we construct a 50K vocabulary consisting of the most frequent words in the combined SEAME and parallel dataset, of which 17.7K are unique Chinese words and 32.3K are unique English words.", "Only for the benchmarking in Table 3 do we use the SEAME vocabulary, a subset of the 50K vocabulary, for the perplexity evaluation, in order to meaningfully compare perplexity with the prior work on the SEAME corpus.", "We use the Adam optimizer (Kingma and Ba, 2014) for all the experiments.", "BALM: The attention mechanism largely follows the implementation of GPT (Radford et al., 2019), with 384-dimensional hidden states, 12 layers and 12 heads.", "While Dai et al.
(2019) report state-of-the-art results using a recurrence mechanism within the attention, we exclude this in our experiments for two reasons.", "Firstly, the context beyond the given parallel sentence is not meaningful after shuffling the sentences.", "Furthermore, attending the target sequence to context beyond the source sequence may introduce noise and depart from the theoretical motivation of the experiment.", "Secondly, for many downstream tasks like ASR, decoding remains at the utterance level.", "We first train BALM on the parallel corpus as described in Section 3.4.", "The trained network is then adapted with SEAME Adapt to bridge the domain gap, namely from $l_1 l_2 \rightarrow l_1$ and $l_1 l_2 \rightarrow l_2$ towards $l_1 l_2 \rightarrow l_1 l_2$.", "Synthetic CS: In this contrastive experiment, we remove the bilingual constraint, i.e. Equation 4, from BALM, and use offline synthetic CS text, generated as outlined in Lee et al. (2019), in the training.", "The idea of synthetic CS is motivated by the Matrix Language Frame theory.", "Phrase alignment is performed on the same parallel dataset as in Table 1, using Giza++ (Och and Ney, 2003).", "The aligned parallel sentences are then used to randomly switch phrases between the languages according to an empirical probability of 0.7.", "At the same time, the phrase table is used to inhibit switches within frequently occurring phrases.", "We train the same BALM network with both the synthetic CS data and the monolingual sides of the parallel data.", "The model is finally adapted with SEAME Adapt.", "Mono & CS-Only: In the Mono setting, we simply use the parallel corpus as two independent monolingual corpora without any form of bilingual constraint.", "The monolingual sentences are passed in, alternating between the two languages, to ensure a balanced training curriculum.", "The model is finally adapted with SEAME Adapt.", "This is similar to Multilingual BERT pre-training under causal masking with subsequent fine-tuning on the task dataset.", "The CS-Only model is trained only on the SEAME Adapt data, without involving the parallel data.", "Positional Embedding: We also implement the sinusoidal encoding matrix (Vaswani et al., 2017) and a learned weight matrix for the positional embedding, in models PE-S and PE-L respectively.", "Both models are implemented on top of the BALM model using the same training data.", "The positional embedding is an element-wise addition to the word embedding layer.", "For the learned matrix in PE-L, we treat it as another lookup table.", "We simply extend the embedding matrix with additional entries for each position $pos$.", "In the case of sinusoidal encoding, the extended matrix is fixed to be $PE_{(pos, 2i)} = \sin(pos / 10000^{2i/384})$ and $PE_{(pos, 2i+1)} = \cos(pos / 10000^{2i/384})$.", "While the perplexity test on SEAME Test CS describes the overall performance of the model on CS sentences, as shown in Table 1, CS only takes place at an average occurrence (SPF) of 23% within those sentences.", "We would like to take a closer look at how the model performs only at those CS points, which is the main focus of this work.", "A lower perplexity suggests a better word prediction ability.", "This perplexity is evaluated on SEAME Test CS, in which we only include the perplexity of words that are preceded by a word of a different language.", "While BALM is mainly optimized for word prediction, it also establishes cross-lingual word correspondence through word embedding.", "To examine the quality of the cross-lingual embedding, we conduct bilingual lexicon induction (BLI) experiments and compare with other major cross-lingual pre-training models.",
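For reference, the fixed sinusoidal positional-encoding matrix used in the PE-S contrast model above can be constructed as follows; this is a standard sketch of the Vaswani et al. (2017) formula with the 384-dimensional setting, not the authors' exact code.

```python
import torch

def sinusoidal_encoding(max_len: int, dim: int = 384) -> torch.Tensor:
    """PE[pos, 2i] = sin(pos / 10000^(2i/dim)), PE[pos, 2i+1] = cos(...).

    Returns a fixed (max_len, dim) matrix that PE-S adds element-wise
    to the word embeddings."""
    pos = torch.arange(max_len).unsqueeze(1).float()     # (max_len, 1)
    i = torch.arange(0, dim, 2).float()                  # even dimension indices
    angle = pos / torch.pow(10000.0, i / dim)            # (max_len, dim/2)
    pe = torch.zeros(max_len, dim)
    pe[:, 0::2] = torch.sin(angle)
    pe[:, 1::2] = torch.cos(angle)
    return pe
```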
"Table 2: Perplexity is reported on different test subsets and at the CS points of Test CS; WER is reported on the language normalization task.
Models       | Training Data           | PPL (SEAME Test) | PPL (Test EN/ZH) | PPL (Test CS) | PPL (CS Points) | WER
CS only      | SEAME Adapt             | 180.09 | 147.42/139.96 | 198.09 | 650.82 | 28.02%
Mono         | Monolingual+SEAME Adapt | 131.54 | 96.33/99.99   | 146.37 | 554.71 | 27.62%
Synthetic CS | Parallel+SEAME Adapt    | 124.65 | 95.13/99.91   | 139.17 | 506.81 | 26.42%
BALM         | Parallel+SEAME Adapt    | 118.25 | 91.74/94.41   | 130.49 | 477.78 | 19.73%
+ PE-S       | Parallel+SEAME Adapt    | 135.22 | 101.78/106.12 | 151.05 | 561.11 | 26.24%
+ PE-L       | Parallel+SEAME Adapt    | 143.29 | 107.34/109.54 | 161.12 | 578.02 | 27.16%", "The same parallel corpus as in Table 1 is used for training, and the same dictionary² is used for testing, for all models.", "VecMap³ (Artetxe et al., 2018) is a projection-based CLWE alignment method which gives robust results using an unsupervised strategy (Glavas et al., 2019).", "The respective monolingual embeddings are trained using fastText⁴ (Bojanowski et al., 2017) with the default setup and 384 dimensions.", "The two monolingual embedding spaces are then mapped using VecMap.", "BiSkip⁵ (Luong et al., 2015) is jointly trained with a word alignment constraint.", "We prepare the alignment using fast_align⁶ (Dyer et al., 2013), following a procedure similar to that outlined in the paper.", "For the BALM model, we use the embedding from the model without the SEAME adaptation phase, for a fair comparison.", "These three models represent three distinct categories of CLWE implementation: projection-based, jointly learned, and deep-learning-based embedding for VecMap, BiSkip and BALM, respectively.", "² https://github.com/facebookresearch/MUSE#ground-truth-bilingual-dictionaries ³ https://github.com/artetxem/vecmap ⁴ https://github.com/facebookresearch/fastText ⁵ https://github.com/lmthang/bivec ⁶ https://github.com/clab/fast_align", "4.5 Language Normalization Suppose that $l_1$ is the matrix language in a code-switch sentence $\mathbf{w}$.", "We would like to replace all $l_2$ tokens in $\mathbf{w}$ with their equivalent $l_1$ tokens, which is referred to as $l_1 l_2 \rightarrow l_1$.", "The normalized sentence $\mathbf{w}^{l_1}$ can be expressed as $\mathbf{w}^{l_1} = \arg\max_{\mathbf{w}^{l_1}} p(\mathbf{w}^{l_1} \mid \mathbf{w})$.", "In practice, when $\mathbf{w}$ is presented to BALM, as illustrated in Figure 1c, the network predicts a sequence of tokens in the matrix language one by one: $\mathbf{w}^{l_1} = \arg\max_{\{w_t^{l_1}\}} \prod_{t=1}^{T} p(w_t^{l_1} \mid \mathbf{w}, \mathbf{w}_{<t}^{l_1})$ (5).", "The generated tokens $\mathbf{w}_{<t}^{l_1}$ become the context for the next token $w_t^{l_1}$ in an autoregressive manner.", "The sequence with the highest probability is computed using beam search, which terminates when the eos token is observed.", "We conduct two perplexity (PPL) test experiments, one comparing the variations of BALM, the other benchmarking against the state-of-the-art.", "Comparing the variations of BALM, we report the overall test PPL as well as the PPL of each component, i.e. Test EN/ZH and Test CS, for each model discussed in Section 4.2.",
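A minimal sketch of the greedy variant of the normalization decoding in Equation 5 (Section 4.5) follows; a real run would use beam search as described there, and `balm_step` is an assumed wrapper around the model's $l_1 l_2 \rightarrow l_1$ prediction.

```python
import torch

def normalize_greedy(balm_step, cs_tokens: list,
                     bos: int, eos: int, max_len: int = 50) -> list:
    """Greedy decoding for language normalization (Equation 5).

    balm_step(cs_tokens, generated) -> logits over the l1 vocabulary,
    conditioned on the full code-switch sentence w and the l1 prefix."""
    generated = [bos]
    for _ in range(max_len):
        logits = balm_step(cs_tokens, generated)   # p(w_t^{l1} | w, w_{<t}^{l1})
        next_tok = int(torch.argmax(logits, dim=-1))
        generated.append(next_tok)
        if next_tok == eos:                        # decoding stops at <eos>
            break
    return generated[1:]
```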
"It is observed in Table 2 that BALM outperforms all the other variations, with a PPL of 118.25 on SEAME Test.", "Mono, Synthetic CS and BALM all benefit from the use of data beyond SEAME Adapt.", "BALM represents the most effective use of the bilingual parallel corpus.", "All results are reported with the best-performing model on the SEAME Valid dataset.", "Benchmarking against the state-of-the-art, we show in Table 3 that BALM achieves a PPL of 103.20 on SEAME Eval, which is a 20.52% reduction from the best reported result.", "Let us examine the perplexity only at the CS points.", "In Table 2, from CS-Only to Mono, we observe a 14.8% PPL reduction, from 650.82 to 554.71, as a result of the additional monolingual data.", "We have seen similar results in Lee et al. (2019) and Gonen and Goldberg (2019).", "Our observation is also very similar to M-BERT and corroborates the findings of Pires et al. (2019).", "The monolingual data contribute to a better word embedding, which is an integral part of BALM.", "As the quality of the word embedding improves, so does the word prediction at the CS points.", "We also observe that Synthetic CS shows an 8.6% PPL reduction, from 554.71 to 506.81, with the inclusion of the synthetic CS data.", "This is consistent with the observations in Lee et al. (2019) and Pratapa et al. (2018).", "We further observe that BALM, which is trained on exactly the same parallel data as Synthetic CS but with a different objective function, outperforms Synthetic CS by 5.73%.", "This suggests that the quasi-translation loss function is an effective regularizer for enforcing the linguistic constraints governing CS.", "We also confirm our aforementioned hypothesis that the self-attention mechanism is able to attend to the appropriate bilingual context for word prediction without violating the grammar of the matrix language, by qualitatively analyzing the sentences generated by the model before adaptation on SEAME Adapt.", "Both the sinusoidal encoding and the learned encoding matrix degrade the model performance, by 14.4% and 21.2% respectively.", "Table 5: BLI results. Method | EN-ZH | ZH-EN; VecMap (Artetxe et al., 2018) | 57.13 | 48.46; BALM | 56.24 | 55.87.", "This result confirms our hypothesis that the attention mechanism is able to encode the mixed context well without positional embedding.", "The improvement of BALM over BALM+PE in monolingual PPL also demonstrates that dropping the positional embedding is in fact beneficial.", "The comparable performance justifies the premise that the model is able to find word-level correspondence, which enables the subsequent bilingual context encoding.", "As shown in Table 5, when inferring ZH (Chinese) words from EN (English), BALM (56.24%) shows performance comparable to VecMap (57.13%), which reported the state-of-the-art results in CLWE.", "However, BALM significantly outperforms VecMap on the inverse pair ZH-EN, with an absolute 7.41% improvement (48.46% → 55.87%).", "Two points should be noted: firstly, Glavas et al.
(2019) point out that BLI cannot be used as the only metric to assess word embedding quality, and we do not intend to do so.", "Secondly, VecMap does not need the corpus to be parallel while ours does, so the comparison does not showcase the full ability of VecMap.", "However, the focus of this paper is not on comparing the best cross-lingual word embedding methods.", "We use the BLI performance as evidence to support our claim that BALM does not compromise on its CLWE quality while focusing on sequential modeling.", "As the code-switch sentence follows the syntactic structure of the matrix language, we assume that the matrix language is known in advance, for example, English for sentences 1-3 and Chinese for sentences 4-6 in Table 4.", "We observe that mistakes sometimes take the form of bad translation; however, the normalized sentence still maintains an appropriate structure in the matrix language.", "The 6th sentence of Table 4 is an example, which is wrongly normalized to 'do my assignment' (in the sense of a task) instead of 'hand in my assignment' (in the sense of homework).", "We report the WER on SEAME Norm between the normalized text and the reference.", "We observe in Table 2 that, with a WER of 19.73%, BALM outperforms the other models, in the same way as in the perplexity tests.", "We note that BALM is an implementation of $l_1 l_2 \rightarrow l_1 l_2$.", "The experiments show that it outperforms all state-of-the-art models in the literature on similar tasks.", "The results validate the idea of bilingual attention.", "The same BALM can be used as $l_1 l_2 \rightarrow l_1$ or $l_2$ for language normalization.", "It can be further extended to $l_1 \rightarrow l_1 l_2$ or $l_2 \rightarrow l_1 l_2$ for code-switch sentence generation, and $l_1 \rightarrow l_2$ or $l_2 \rightarrow l_1$ for machine translation.", "We thank the reviewers for their insightful feedback.", "This research is supported by the National Research Foundation Singapore under its AI Singapore Programme (Award No.
AISG-GC-2019-002); the Programmatic Grant No. A18A2b0046 from the Singapore Government's Research, Innovation and Enterprise 2020 plan (Advanced Manufacturing and Engineering domain), Project title: Human Robot Collaborative AI for AME; and the National Research Foundation, Prime Minister's Office, Singapore under the National Robotics Programme, Project title: Human-Robot Interaction Phase 1 (Grant No. 192 25 00054).", "References: Heike Adel, Dominic Telaar, Ngoc Thang Vu, Katrin Kirchhoff, and Tanja Schultz. 2014. Combining recurrent neural networks and factored language models during decoding of code-switching speech. In INTERSPEECH-2014, pages 1415-1419.", "Hedi M. Belazi, Edward J. Rubin, and Almeida Jacqueline Toribio. 1994. Code switching and X-bar theory: The functional head constraint. Linguistic Inquiry, pages 221-237.", "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python, 1st edition. O'Reilly Media, Inc.", "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.", "Pi-Chuan Chang, Michel Galley, and Christopher D. Manning. 2008. Optimizing Chinese word segmentation for machine translation performance. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 224-232, Columbus, Ohio. Association for Computational Linguistics.", "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herve Jegou. 2017. Word translation without parallel data. CoRR, abs/1710.04087.", "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978-2988, Florence, Italy." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "objective", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "abstain", "other", "other", "other", "other", "objective", "objective", "abstain", "other", "method", "other", "other", "abstain", "method", "other", "other", "objective", "abstain", "other", "other", "method", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Abstract", "We introduce a new task called Multimodal Named Entity Recognition (MNER) for noisy user-generated data such as tweets or Snapchat captions, which comprise short text with accompanying images.", "These social media posts often come in inconsistent or incomplete syntax and lexical notations with very limited surrounding textual contexts, bringing significant challenges for NER.", "To this end, we create a new dataset for MNER called SnapCaptions (Snapchat image-caption pairs submitted to public and crowd-sourced stories with fully annotated named entities).", "We then build upon the state-of-the-art Bi-LSTM word/character based NER models with 1) a deep image network which incorporates relevant visual context to augment textual information, and 2) a generic modality-attention module which learns to attenuate irrelevant modalities while amplifying the most informative ones to extract contexts from, adaptive to each sample and token.", "The proposed MNER model with modality attention significantly outperforms the state-of-the-art text-only NER models by successfully leveraging provided visual contexts, opening up potential applications of MNER on myriads of social media platforms.", "Social media with abundant user-generated posts provide a rich platform for understanding events, opinions and preferences of groups and individuals.", "These insights are primarily hidden in unstructured forms of social media posts, such as in free-form text or images without tags.", "Named entity recognition (NER), the task of recognizing named entities from free-form text, is thus a critical step for building structural information, allowing for its use in personalized assistance, recommendations, advertisement, etc.", "While many previous approaches (Lample et al., 2016; Ma and Hovy, 2016; Chiu and", "Nichols, 2015; Huang et al., 2015; Lafferty et al., 2001) on NER have shown success for well-formed text in recognizing named entities via word context resolution ( e.g. LSTM with word embeddings) combined with character-level features ( e.g. CharLSTM/CNN), several additional challenges remain for recognizing named entities from extremely short and coarse text found in social media posts.", "For instance, short social media posts often do not provide enough textual contexts to resolve polysemous entities ( e.g. monopoly is da best , where monopoly' may refer to a board game (named entity) or a term in economics).", "In addition, noisy text includes a huge number of unknown tokens due to inconsistent lexical notations and frequent mentions of various newly trending entities ( e.g. xoxo Marshmelloooo , where Marshmelloooo' is a mis-spelling of a known entity Marshmello', a 852 music producer), making word embeddings based neural networks NER models vulnerable.", "To address the challenges above for social media posts, we build upon the state-of-the-art neural architecture for NER with the following two novel approaches (Figure 1).", "First, we propose to leverage auxiliary modalities for additional context resolution of entities.", "For example, many popular social media platforms now provide ways to compose a post in multiple modalities specifically image and text ( e.g. 
Snapchat captions, Twitter posts with image URLs), from which we can obtain additional context for understanding posts.", "While monopoly in the previous example is ambiguous in its textual form, an accompanying snap image of a board game can help disambiguate among polysemous entities, thereby correctly recognizing it as a named entity.", "Second, we also propose a general modality attention module which chooses per decoding step the most informative modality among available ones (in our case, word embeddings, character embeddings, or visual features) to extract context from.", "For example, the modality attention module lets the decoder attenuate the word-level signals for unknown word tokens ( e.g . Marshmellooooo with trailing o's) and amplifies character-level features intsead ( e.g . capitalized first letter, lexical similarity to other known named entity token Marshmello', etc.), thereby suppressing noise information (UNK token embedding) in decoding steps.", "Note that most of the previous literature in NER or other NLP tasks combine word and character-level information with naive concatenation, which is vulnerable to noisy social media posts.", "When an auxiliary image is available, the modality attention module determines to amplify this visual context e.g .", "in disambiguating polysemous entities, or to attenuate visual contexts when they are irrelevant to target named entities, e.g .", "selfies, etc.", "Note that the proposed modality attention module is distinct from how attention is used in other sequence-to-sequence literature ( e.g. attending to a specific token within an input sequence).", "Section 2 provides the detailed literature review.", "Our contributions are three-fold: we propose (1) an LSTM-CNN hybrid multimodal NER network that takes as input both image and text for recognition of a named entity in text input.", "To the best of our knowledge, our approach is the first work to incorporate visual contexts for named entity recognition tasks.", "(2) We propose a general modality attention module that selectively chooses modalities to extract primary context from, maximizing information gain and suppressing irrelevant contexts from each modality (we treat words, characters, and images as separate modalities).", "(3) We show that the proposed approaches outperform the state-of-the-art NER models ( both with and without using additional visual contexts) on our new MNER dataset SnapCaptions , a large collection of informal and extremely short social media posts paired with unique images.", "Neural models for NER have been recently proposed, producing state-of-the-art performance on standard NER tasks.", "For example, some of the end-to-end NER systems (Passos et al., 2014; Chiu and Nichols, 2015; Huang et al., 2015; Lample et al., 2016; Ma and Hovy, 2016) use a recurrent neural network usually with a CRF (Laf-ferty et al., 2001; McCallum and Li, 2003) for sequence labeling, accompanied with feature extractors for words and characters (CNN, LSTMs, etc.), and achieve the state-of-the-art performance mostly without any use of gazetteers information.", "Note that most of these work aggregate textual contexts via concatenation of word embeddings and character embeddings.", "Recently, several work have addressed the NER task specifically on noisy short text segments such as Tweets, etc. (Baldwin et al., 2015; Aguilar et al., 2017).", "They report performance gains from leveraging external sources of information such as lexical information ( e.g . POS tags, etc.) 
and/or from several preprocessing steps (e.g., token substitution).", "Our model builds upon these state-of-the-art neural models for NER and improves them in two critical ways: (1) the incorporation of visual contexts to provide auxiliary information for short media posts, and (2) the addition of the modality attention module, which better combines word embeddings and character embeddings, especially when there are many missing tokens in the given word embedding matrix.", "Note that we do not explore the use of gazetteer information or other auxiliary information (POS tags, etc.) (Ratinov and Roth, 2009), as it is not the focus of our study.", "Attention modules are widely applied in several deep learning tasks (Xu et al., 2015; Chan et al., 2015; Sukhbaatar et al., 2015; Yao et al., 2015).", "For example, these works use an attention module to attend to a subset within a single input (a part/region of an image, a specific token in an input sequence of tokens, etc.) at each decoding step in an encoder-decoder framework, e.g., for image captioning. Rei et al. (2016) explore various attention mechanisms in NLP tasks, but do not incorporate visual components or investigate the impact of such models on noisy social media data.", "Moon and Carbonell (2017) propose to use attention over a subset of discrete source samples in transfer learning settings.", "Our modality attention differs from the previous approaches in that we attenuate or amplify each modality input as a whole among multiple available modalities, and in that we use the attention mechanism essentially to map heterogeneous modalities into a single joint embedding space.", "Our approach also allows the same model to be reused for predicting labels even when some of the modalities are missing from the input, as the other modalities still preserve the same semantics in the embedding space.", "Multimodal learning is studied in various domains and applications, aiming at building a joint model that extracts contextual information from multiple modalities (views) of parallel datasets.", "The task most relevant to our multimodal NER system is multimodal machine translation (Elliott et al., 2015; Specia et al., 2016), which aims at building a better machine translation system by taking as input a sentence in a source language as well as a corresponding image.", "Several standard sequence-to-sequence architectures have been explored (e.g.,
a target-language LSTM decoder that takes an image as its first input).", "Other previous literature includes studies of Canonical Correlation Analysis (CCA) (Dhillon et al., 2011) to learn feature correlations among multiple modalities, which is widely used in many applications.", "Other applications include image captioning (Xu et al., 2015), audio-visual recognition (Moon et al., 2015), visual question answering systems (Antol et al., 2015), etc.", "To the best of our knowledge, our approach is the first work to incorporate visual contexts for named entity recognition tasks.", "Figure 2 illustrates the proposed multimodal NER (MNER) model.", "[Figure 2: The main architecture for our multimodal NER (MNER) network with modality attention.]", "At each decoding step, word embeddings, character embeddings, and visual features are merged with modality attention.", "A Bi-LSTM/CRF takes each token as input and produces an entity label.", "First, we obtain word embeddings, character embeddings, and visual features (Section 3.1).", "A Bi-LSTM-CRF model then takes as input a sequence of tokens, each of which comprises a word token, a character sequence, and an image, in their respective representations (Section 3.2).", "At each decoding step, the representations from each modality are combined via the modality attention module to produce an entity label for each token (Section 3.3).", "We formulate each component of the model in the following subsections.", "Notation: let $x = \{x_t\}_{t=1}^{T}$ be a sequence of input tokens of length $T$, with a corresponding label sequence $y = \{y_t\}_{t=1}^{T}$ indicating named entities (e.g., in the standard BIO format).", "Each input token is composed of three modalities: $x_t = \{x_t^{(w)}, x_t^{(c)}, x_t^{(v)}\}$ for the word-embedding, character-embedding, and visual-embedding representations, respectively.", "Similar to the state-of-the-art NER approaches (Lample et al., 2016; Ma and Hovy, 2016; Aguilar et al., 2017; Passos et al., 2014; Chiu and Nichols, 2015; Huang et al., 2015), we use both word embeddings and character embeddings.", "Word embeddings are obtained from an unsupervised learning model that learns co-occurrence statistics of words from a large external corpus, yielding word embeddings as distributional semantics (Mikolov et al., 2013).", "Specifically, we use pre-trained embeddings from GloVe (Pennington et al., 2014).", "Character embeddings are obtained from a Bi-LSTM which takes as input the sequence of characters of each token, similarly to (Lample et al., 2016).", "An alternative approach for obtaining character embeddings is using a convolutional neural network as in (Ma and Hovy, 2016), but we find that the Bi-LSTM representation of characters yields empirically better results in our experiments.", "Visual embeddings: To extract features from an image, we take the final hidden-layer representation of a modified version of the convolutional network model called Inception (GoogLeNet) (Szegedy et al., 2014, 2015), trained on the ImageNet dataset (Russakovsky et al., 2015) to classify multiple objects in the scene.", "Our implementation of the Inception model has 22 layers; its training is made possible via network-in-network principles and several dimensionality reduction techniques that improve computing-resource utilization.", "The final-layer representation encodes discriminative information describing what objects are shown in an image, which provides auxiliary context for understanding textual tokens and entities in accompanying captions.", "Incorporating this visual
information into the traditional NER system is an open challenge, and multiple approaches can be considered.", "For instance, one may provide visual contexts only as an initial input to the decoder, as in some encoder-decoder image captioning systems (Vinyals et al., 2015).", "However, we empirically observe that an NER decoder which takes the visual embeddings as input at every decoding step (Section 3.2), combined with the modality attention module (Section 3.3), yields better results.", "Lastly, we add a transform layer for each feature, i.e., $x_t^{(w)}, x_t^{(c)}, x_t^{(v)} := w(x_t^{(w)}), c(x_t^{(c)}), v(x_t^{(v)})$, before it is fed to the entity LSTM.", "Our MNER model is built on a Bi-LSTM and CRF hybrid model.", "We use the following implementation for the entity Bi-LSTM: $i_t = \sigma(W_{xi} \bar{x}_t + W_{hi} h_{t-1} + W_{ci} c_{t-1})$, $c_t = (1 - i_t) \odot c_{t-1} + i_t \odot \tanh(W_{xc} \bar{x}_t + W_{hc} h_{t-1})$, $o_t = \sigma(W_{xo} \bar{x}_t + W_{ho} h_{t-1} + W_{co} c_t)$, $h_t = \mathrm{LSTM}(\bar{x}_t) = o_t \odot \tanh(c_t)$ (1), where $\bar{x}_t$ is a weighted average of the three modalities $\{x_t^{(w)}; x_t^{(c)}; x_t^{(v)}\}$ obtained via the modality attention module, which will be defined in Section 3.3.", "Bias terms for gates are omitted here for simplicity of notation.", "We then obtain bi-directional entity token representations $h_t = [\overrightarrow{h_t}; \overleftarrow{h_t}]$ by concatenating the left and right context representations.", "To enforce structural correlations between labels in sequence decoding, $h_t$ is then passed to a conditional random field (CRF) to produce a label for each token, maximizing the following objective: $y^* = \arg\max_y p(y \mid h; W_{\mathrm{CRF}})$ (2), with $p(y \mid h; W_{\mathrm{CRF}}) = \frac{\prod_t \psi_t(y_{t-1}, y_t; h)}{\sum_{y'} \prod_t \psi_t(y'_{t-1}, y'_t; h)}$, where $\psi_t(y', y; h)$ is a potential function and $W_{\mathrm{CRF}}$ is the set of parameters that defines the potential functions and weight vectors for label pairs $(y', y)$.", "Bias terms are omitted for brevity of formulation.", "The model can be trained via log-likelihood maximization over the training set $\{(x_i, y_i)\}$: $L(W_{\mathrm{CRF}}) = \sum_i \log p(y_i \mid h_i; W_{\mathrm{CRF}})$ (3).", "3.3 Modality Attention: The modality attention module learns a unified representation space for multiple available modalities (e.g., words, characters, images), and produces a single vector representation with aggregated knowledge among the modalities, based on their weighted importance.", "We motivate this module with the following observations.", "A majority of the previous literature combines word- and character-level contexts by simply concatenating the word and character embeddings at each decoding step, e.g.,
$h_t = \mathrm{LSTM}([x_t^{(w)}; x_t^{(c)}])$ in Eq. 1.", "However, this naive concatenation of two modalities (words and characters) results in inaccurate decoding, specifically for unknown word token embeddings (e.g., an all-zero vector $x_t^{(w)} = \mathbf{0}$ or a random vector $x_t^{(w)} = \epsilon \sim U(-\delta, +\delta)$ is assigned to any unknown token $x_t$, thus $h_t = \mathrm{LSTM}([\mathbf{0}; x_t^{(c)}])$ or $\mathrm{LSTM}([\epsilon; x_t^{(c)}])$).", "While this concatenation approach does not cause significant errors for well-formatted text, we observe that it induces performance degradation on our social media post datasets, which contain a significant number of missing tokens.", "Similarly, naive merging of textual and visual information (e.g., $h_t = \mathrm{LSTM}([x_t^{(w)}; x_t^{(c)}; x_t^{(v)}])$) yields suboptimal results, as each modality is treated as equally informative, whereas in our datasets some of the images may contain contexts irrelevant to the textual modalities.", "Hence, ideally there needs to be a mechanism by which the model can effectively switch modalities on and off adaptively for each sample.", "To this end, we propose a general modality attention module, which adaptively attenuates or emphasizes each modality as a whole at each decoding step $t$ and produces a soft-attended context vector $\bar{x}_t$ as the input token for the entity LSTM (see the code sketch after this list): $[a_t^{(w)}, a_t^{(c)}, a_t^{(v)}] = \sigma(W_m \cdot [x_t^{(w)}; x_t^{(c)}; x_t^{(v)}] + b_m)$, $\alpha_t^{(m)} = \exp(a_t^{(m)}) / \sum_{m' \in \{w,c,v\}} \exp(a_t^{(m')})$, $\bar{x}_t = \sum_{m \in \{w,c,v\}} \alpha_t^{(m)} x_t^{(m)}$ (4),", "where $\alpha_t = [\alpha_t^{(w)}; \alpha_t^{(c)}; \alpha_t^{(v)}] \in \mathbb{R}^3$ is an attention vector at each decoding step $t$, and $\bar{x}_t$ is the final context vector at $t$ that maximizes information gain for $x_t$.", "Note that the optimization of the objective function (Eq. 1) with modality attention (Eq. 4) requires each modality to have the same dimension (e.g., $x_t^{(w)}, x_t^{(c)}, x_t^{(v)} \in \mathbb{R}^p$), and that the transformation via $W_m$ essentially enforces each modality to be mapped into the same unified subspace, whose weighted average encodes discriminative features for the recognition of named entities.", "When visual context is not provided with each token (as in the traditional NER task), we can define the modality attention for word and character embeddings only in a similar way: $[a_t^{(w)}, a_t^{(c)}] = \sigma(W_m \cdot [x_t^{(w)}; x_t^{(c)}] + b_m)$ (5), $\alpha_t^{(m)} = \exp(a_t^{(m)}) / \sum_{m' \in \{w,c\}} \exp(a_t^{(m')})$ for $m \in \{w, c\}$, and $\bar{x}_t = \sum_{m \in \{w,c\}} \alpha_t^{(m)} x_t^{(m)}$. Note that while we apply this modality attention module to the Bi-LSTM+CRF architecture (Section 3.2) for its empirical superiority, the module itself is flexible and can thus work with other NER architectures or for other multimodal applications.", "The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs, where the named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC).", "These captions are collected exclusively from snaps submitted to public and crowd-sourced stories (a.k.a. Snapchat Live Stories or Our Stories).", "Examples of such public crowd-sourced stories are the New York Story or the Thanksgiving Story, which comprise snaps aggregated for various public events, venues, etc.", "All snaps were posted between 2016 and 2017, and do not contain raw images or other associated information (only textual captions and obfuscated visual descriptor features extracted from the pre-trained InceptionNet are available).", "We split the dataset into train (70%), validation (15%), and test (15%) sets.", "The caption data have an average length of 30.7 characters (5.81 words) with a vocabulary size of 15,733, of which 6,612 are considered unknown
tokens in the Stanford GloVe embeddings (Pennington et al., 2014).", "The named entities annotated in the SnapCaptions dataset include many new and emerging entities, found in various surface forms (nicknames, typos, etc.). To the best of our knowledge, SnapCaptions is the only dataset that contains natural image-caption pairs with expert-annotated named entities.", "Task: given a caption and a paired image (if used), the goal is to label every token in the caption in the BIO scheme (B: beginning, I: inside, O: outside) (Sang and Veenstra, 1999).", "We report the performance of the following state-of-the-art NER models as baselines, as well as several configurations of our proposed approach, to examine the contribution of each component (W: word, C: char, V: visual).", "Bi-LSTM/CRF (W only): only takes word token embeddings (Stanford GloVe) as input.", "The rest of the architecture is kept the same.", "Bi-LSTM/CRF + Bi-CharLSTM (C only): only takes the character sequence of each word token as input (no word embeddings).", "Bi-LSTM/CRF + Bi-CharLSTM (W+C) (Lample et al., 2016): takes as input both word embeddings and character embeddings extracted from a Bi-CharLSTM.", "The entity LSTM takes the concatenated vectors of word and character embeddings as input tokens.", "Bi-LSTM/CRF + CharCNN (W+C) (Ma and Hovy, 2016): uses character embeddings extracted from a CNN instead.", "Bi-LSTM/CRF + CharCNN (W+C) + Multitask (Aguilar et al., 2017): trains the model to perform both recognition (into multiple entity types) and segmentation (binary) tasks.", "(proposed) Bi-LSTM/CRF + Bi-CharLSTM with modality attention (W+C): uses the modality attention to merge word and character embeddings.", "(proposed) Bi-LSTM/CRF + Bi-CharLSTM + Inception (W+C+V): also takes as input visual contexts extracted from InceptionNet, concatenated with the word and character vectors.", "(proposed) Bi-LSTM/CRF + Bi-CharLSTM + Inception with modality attention (W+C+V): uses the modality attention to merge word, character, and visual embeddings as input to the entity LSTM.", "Table 1 shows the NER performance on the SnapCaptions dataset.", "We report both entity-type recognition (PER, LOC, ORG, MISC) and named entity segmentation (named entity or not) results.", "Parameters: We tune the parameters of each model over the following search space (values chosen for our final model, bold in the original table, are marked here with *): character embedding dimension: {25, 50, 100, 150*, 200, 300}, word embedding size: {25, 50, 100, 150*, 200, 300}, LSTM hidden states: {25, 50, 100*, 150, 200, 300}, and $\bar{x}$ dimension: {25, 50, 100, 150*, 200, 300}.", "We optimize the parameters with Adagrad (Duchi et al., 2011) with batch size 10, learning rate 0.02, epsilon $10^{-8}$, and decay 0.0.", "Main Results: When visual context is available (W+C+V), we see that the model performance greatly improves over the textual models (W+C), showing that visual contexts are complementary to textual information in named entity recognition tasks.", "In addition, it can be seen that the modality attention module further improves the entity-type recognition performance for (W+C+V).", "This result indicates that the modality attention is able to focus on the most effective modality (visual, words, or characters) adaptively for each sample to maximize information gain.", "Note that our text-only model (W+C) with the modality attention module also significantly outperforms the state-of-the-art baselines (Aguilar et al., 2017;
Ma and Hovy, 2016; Lample et al., 2016) that use the same textual modalities (W+C), showing the effectiveness of the modality attention module for textual models as well.", "Error Analysis: Table 2 shows example cases where the incorporation of visual contexts affects the prediction of named entities.", "For example, the token 'curry' in the caption The curry's is polysemous and may refer to either a type of food or the famous basketball player 'Stephen Curry', and the surrounding textual contexts do not provide", "enough information to disambiguate it.", "On the other hand, visual contexts (visual tags: 'parade', 'urban area', ...) provide similarities to the token's distributional semantics in other training examples (e.g., snaps from an NBA Championship Parade story), and thus the model successfully predicts the token as a named entity.", "Similarly, while the text-only model erroneously predicts 'Apple' in the caption Grandma w dat lit Apple Crisp as an organization (e.g., Apple Inc.), the visual contexts (describing objects related to food) help disambiguate the token, making the model correctly predict it as a non-named entity (a fruit).", "Trending entities (musicians or DJs such as 'CID', 'Duke Dumont', 'Marshmello', etc.) are also recognized correctly, with contexts strengthened by visual information (describing concert scenes), despite the lack of surrounding textual contexts.", "The few cases where visual contexts harm performance mostly involve visual tags that are unrelated to a token or its surrounding textual contexts.", "Visualization of Modality Attention: Figure 3 visualizes the modality attention module at each decoding step (each column), where an amplified modality is shown in a darker color and an attenuated modality in a lighter color.", "For the image-aided model (W+C+V; upper row in Figure 3), we confirm that the modality attention successfully attenuates irrelevant signals (e.g., selfies) and amplifies relevant modality-based contexts in the prediction of a given token.", "In the example disney word essential = coffee with visual tags selfie, phone, person, the modality attention successfully attenuates the distracting visual signals and focuses on the textual modalities, consequently making correct predictions.", "The named entities in the examples Beautiful night atop The Space Needle and Splash Mountain are challenging to predict because they are composed of common nouns (space, needle, splash, mountain), and thus often require additional context to predict correctly.", "In the training data, visual contexts serve as stronger indicators for these named entities (space needle, splash mountain), and the modality attention module successfully attends more to the stronger signals.", "For the text-only model (W+C), we observe that performance gains mostly come from the modality attention module better handling tokens unseen during training or unknown tokens in the pre-trained word embedding matrix.", "For example, while WaRriOoOrs and Kooler Matic are missing tokens in the word embedding matrix, the model successfully amplifies character-based contexts (e.g., capitalized first letters, similarity to the known entity 'Golden State Warriors') and suppresses word-based contexts (word embeddings for unknown tokens, e.g.,
'WaRriOoOrs'), leading to correct predictions.", "This result is significant because it shows that the performance of the model, with an almost identical architecture, can still improve without having to scale the word embedding matrix indefinitely.", "Figure 3(b) shows the cases where the modality attention led to incorrect predictions.", "For example, the model predicts the missing tokens HUUUGE and Shampooer incorrectly as named entities by amplifying misleading character-based contexts (e.g., capitalized first letters) or visual contexts (e.g., ...).", "Sensitivity to Word Embeddings Vocabulary Size: In order to isolate the effectiveness of the modality attention module on textual models in handling missing tokens, we report the performance with varying word-embedding vocabulary sizes in Table 3.", "By artificially increasing the number of missing tokens, i.e., randomly removing words from the word embedding matrix (original vocab size: 400K; see the sketch below), we observe that while the overall performance degrades, the modality attention module is able to mitigate the performance degradation.", "Note also that the performance gap generally gets bigger as we decrease the vocabulary size of the word embedding matrix.", "This result is significant in that the modality attention is able to make the model more robust to missing tokens without having to train an indefinitely large word embedding matrix for arbitrarily noisy social media text datasets.", "We proposed a new multimodal NER (MNER: image + text) task on short social media posts.", "We demonstrated for the first time an effective MNER system, where visual information is combined with textual information to outperform traditional text-based NER baselines.", "Our work can be applied to a myriad of social media posts or other articles across multiple platforms that often include both text and accompanying images.", "In addition, we proposed the modality attention module, a new neural mechanism which learns the optimal integration of different modes of correlated information.", "In essence, the modality attention learns to attenuate irrelevant or uninformative modal information while amplifying the primary modality to extract better overall representations.", "We showed that the modality-attention-based model outperforms other state-of-the-art baselines when text is the only modality available, by better combining word- and character-level information." ]
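The modality attention of Eqs. 4-5 in the record above reduces to a small gating module. Below is a minimal PyTorch sketch, not the authors' released implementation: the class and variable names are illustrative, and the tanh scoring nonlinearity is an assumption where the extracted equations were ambiguous.

```python
import torch
import torch.nn as nn


class ModalityAttention(nn.Module):
    """Soft attention over whole modalities (cf. Eqs. 4-5 above).

    Illustrative sketch; names and the tanh nonlinearity are assumptions.
    """

    def __init__(self, dim, num_modalities=3):
        super().__init__()
        # W_m, b_m: one attention score per modality from the concatenation
        self.scorer = nn.Linear(num_modalities * dim, num_modalities)

    def forward(self, modalities):
        # modalities: list of (batch, dim) tensors, e.g. [x_w, x_c, x_v],
        # already mapped to a shared dimension by per-modality transforms
        stacked = torch.stack(modalities, dim=1)               # (batch, M, dim)
        scores = torch.tanh(self.scorer(torch.cat(modalities, dim=-1)))
        alpha = torch.softmax(scores, dim=-1)                  # (batch, M)
        # weighted average -> one context vector per token for the entity LSTM
        return (alpha.unsqueeze(-1) * stacked).sum(dim=1)


# toy usage: word/char/visual vectors for 2 tokens in a shared 8-dim space
x_w, x_c, x_v = torch.randn(2, 8), torch.randn(2, 8), torch.randn(2, 8)
ctx = ModalityAttention(dim=8)([x_w, x_c, x_v])
assert ctx.shape == (2, 8)
```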
[ "abstain", "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "other", "objective", "method", "other", "abstain", "other", "other", "other", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "other", "abstain", "method", "abstain", "other", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "objective", "method", "objective", "abstain", "result" ]
[ "Document structure is critical for efficient information consumption.", "However, it is challenging to encode it efficiently into the mod-ern Transformer architecture.", "In this work, we present HIBRIDS, which injects Hierarchical Biases foR Incorporating Document Structure into the calculation of attention scores.", "We further present a new task, hierarchical question-summary generation, for summarizing salient content in the source document into a hierarchy of questions and summaries, where each follow-up question inquires about the content of its parent question-summary pair.", "We also annotate a new dataset with 6 , 153 question-summary hierarchies labeled on long government reports.", "Experiment results show that our model produces better question-summary hierarchies than comparisons on both hierarchy quality and content coverage, a finding also echoed by human judges.", "Additionally, our model improves the generation of long-form summaries from lengthy government reports and Wikipedia articles, as measured by ROUGE scores.", "Document structure facilitates information searching, reading comprehension, and knowledge acquisition by providing an informative overview of the content (Guthrie et al., 1991; Meyer et al., 1980; Taylor and Beach, 1984; Shavelson, 1974; Jonassen, 1988).", "Specifically, for summarization, its utility is twofold: (1) Source document structures, such as sections and paragraphs, can be instructive for summary generation (Cohan et al., 2018; Celikyilmaz et al., 2018; Zhang et al., 2019); (2) Structures in output summaries, e.g., time-lines (Shahaf et al., 2012; Wang et al., 2015) or aspects (Angelidis and Lapata, 2018), can also ease content understanding.", "Nonetheless, state-of-the-art abstractive summarization systems, all built on the Transformer architecture (Zhang et al., 2020; Lewis et al., 2020), use attentions to estimate relations between pairwise tokens and largely ignore document structures.", "While hierarchical encoding has been investigated (Zhang et al., 2019; Balachandran et al., 2021), its need for training large amounts of additional parameters leads to increased memory footprint and thus limits the allowed input length.", "As for the output, the structure of single document summaries remains largely flat, such as a list of aspects (Meng et al., 2021).", "We argue that it is imperative to develop systems that can output summaries with rich structures to support knowledge acquisition, which is especially critical for long documents that cover numerous subjects with varying details (Huang et al., 2021; Kryscinski et al., 2021).", "This work consists of two main objectives: (1) effectively informing summarization models of the source document's structure, and (2) presenting a new summarization task that produces hierarchically organized question-summary pairs to facilitate information consumption.", "To this end, we propose HIBRIDS (Hierarchical Biases foR Incorporating Document Structure).", "1 We design learnable hierarchical biases , as part of the Transformer attention calculation, to adjust attention weights based on tokens' relative positions with regard to the document structure, inspired by the relative position method that modifies attention calculation (Raffel et al., 2020).", "Concretely, we leverage the natural structure of a document, i.e., section levels, to construct a document structure tree (Figure 2).", "Each learnable bias corresponds to the relation between a pair of sections, based on the distance between them in the 
structure tree.", "Intuitively, hierarchical biases adjust attention weights between tokens based on how conceptually close/distant their corresponding sections are, and they also enable summarizers to capture long-range dependencies. (Our code and newly collected data can be found at https://shuyangcao.github.io/projects/structure_long_summ.)", "Furthermore, we design a new summarization task, hierarchical question-summary generation: given a document, automatically generate questions and summaries that are organized hierarchically to lay out details for topics at different levels.", "As shown in Figure 1, each question asks about salient content of the document (to be summarized) and its child questions focus on content in the corresponding summary.", "This hierarchy not only exposes salient topics and their relations, but also allows readers to quickly identify aspects of interest to focus on.", "Our task design is inspired by the top-down knowledge learning process: people start by asking broad questions to acquire general knowledge, and then dive into details (Hintikka, 1981; Stede and Schlangen, 2004).", "Notably, as there is no available dataset with such annotations, we also label a new dataset, GOVREPORT-QS, consisting of 6,153 question-summary (QS) hierarchies for summary paragraphs based on 1,714 reports from the GOVREPORT dataset (Huang et al., 2021).", "Each summary paragraph contains 4.07 QS pairs on average, with an average QS hierarchy depth of 2.26 levels.", "We first compare HIBRIDS with models that use structure-aware architectures (Rohde et al., 2021) and linear relative positions (Raffel et al., 2020).", "We conduct experiments on the hierarchical QS generation dataset using two setups: (1) generating a full hierarchy given the first question, and (2) generating follow-up questions given a QS pair.", "Automatic evaluation shows that our model produces better follow-up questions and summaries than comparisons, while also achieving better or comparable content coverage of full summaries when compared with a hierarchical model (Rohde et al., 2021) that learns 2M more parameters.", "In human evaluation, HIBRIDS is judged to build better hierarchies that require fewer manual corrections, with more relevant summaries.", "We further test on the long document summarization task to produce full summaries, using GOVREPORT and a newly collected dataset consisting of about 21K high-quality biographies with summaries from Wikipedia.", "Again, our system summaries obtain uniformly higher ROUGE scores than comparisons, demonstrating the generalizability of HIBRIDS.", "Document Structure-aware Summarization.", "Structural information has long been leveraged for identifying summary-worthy content, including discourse structures labeled by experts (Marcu, 1997) or automatic parsers (Hirao et al., 2013; Durrett et al., 2016; Xu et al., 2020), and topical structures derived from lexical chains (Barzilay and Elhadad, 1999) or probabilistic models (Barzilay and Lee, 2004; Daumé III and Marcu, 2006).", "Natural structures of documents, such as sentences, have been used for pre-training a sentence-level encoder (Zhang et al., 2019) or for inducing dependencies among sentences (Liu et al., 2019) when building extractive summarization systems.", "Based on separately encoded paragraphs, deep communication agents (Celikyilmaz et al., 2018) and inter-paragraph attentions (Liu and Lapata, 2019) are employed to build abstractive summarization models by exchanging information across different paragraphs.",
"Using section structures, Cohan et al. (2018) design a section-level encoder based on the output of a word-level encoder for long document summarization.", "Nevertheless, multi-level encoders are more expensive since they introduce a significant amount of parameters and add extra padding at multiple levels of model design.", "By contrast, HIBRIDS effectively informs models of document structure by introducing a novel bias term in attention calculation among tokens, which only introduces a small number of learnable parameters.", "Long Document Summarization also benefits from the inclusion of document structure information.", "For example, extractive summarization methods are developed to combine section-level and sentence-level information encoded by multilevel encoders (Xiao and Carenini, 2019) and include longer context via sliding encoding over sections (Cui and Hu, 2021).", "Recent work on summarizing long documents focuses on designing efficient Transformers with sparse attentions to produce abstractive summaries for long documents in an end-to-end fashion (Beltagy et al., 2020; Zaheer et al., 2020; Huang et al., 2021).", "However, they all ignore the natural structure of long documents, such as sections and subsections.", "Based on a simple design, HIBRIDS can be integrated into any efficient Transformer seamlessly for incorporating document structure information.", "Generating question-answer (QA) pairs has been studied to facilitate information seeking within documents, mainly for producing questions that can be addressed by short phrases (Du and Cardie, 2018; Liu et al., 2020).", "Prior work mostly focuses on improving QA pair relevance by leveraging additional QA systems (Sachan and Xing, 2018), measuring roundtrip consistency (Alberti et al., 2019), or refining questions iteratively (Qu et al., 2021).", "Generating a two-level hierarchy of QA pairs from a given paragraph is investigated by Krishna and Iyyer (2019).", "Our work is different in at least three aspects.", "First, our goal is to provide a structured summary that focuses on the salient content of the given document, rather than creating questions about any generic information, as done in most QA data construction (Rajpurkar et al., 2016; Choi et al., 2018).", "Second, our GOVREPORT-QS data 1.1.1 1 1.1 1.2 2 ROOT 0,0 1,-1 2,-2 1,-1 2,0 -1,1 0,0 1,-1 2,0 3,1 -2,2 -1,1 0,0 3,1 4,2 -1,1 -2,0 -3,-1 0,0 3,1 -2,0 -3,-1 -4,-2 -3,-1 0,0 1 1.1 1.1.1 1.2 2 1 1 .", "concerns richer hierarchies for presenting content in long documents, e.g., 23 .", "6% of our hierarchies contain at least three levels .", "Our parent-child pairs also cover diverse relations, e.g., adding explanations or expanding the topics, beyond asking about specific details as done in Krishna and Iyyer (2019).", "Third, our questions are designed to be open-ended and grounded in the given document, so our new task is more suitable for summarization models.", "In this section, we first introduce how relative positions are defined over the document structure tree.", "Then we present HIBRIDS, which can be included in encoder self-attentions or decoder cross-attentions to adjust the attention scores based on tokens' relative positions.", "We first construct a document structure tree (Fig-ure 2, left), by leveraging the natural structure of sections and subsections (henceforth sections) in documents, which is available in our experiment data extracted from government reports and Wikipedia articles.", "We then capture the relative position between pairwise tokens x 
and $y$ in two different sections, e.g., $S_x$ and $S_y$, with two tree-based measures:", "(1) $\mathrm{PathLen}(x, y)$: the length of the shortest path from $S_x$ to $S_y$; (2) $\mathrm{LvlDiff}(x, y)$: the level difference from $S_x$ to $S_y$.", "PathLen is designed to be asymmetric to capture content ordering, i.e., its value is positive if $S_x$ appears before $S_y$ in the document, and negative otherwise.", "Examples are displayed in Figure 2.", "The design of HIBRIDS is based on a lookup table $B[\cdot, \cdot]$: each item in it corresponds to a learnable hierarchical bias defined by path length and level difference, which is then used to bias the attention calculation for tokens in different sections.", "Each head maintains its own lookup table $B$.", "We first apply HIBRIDS to the Transformer encoder self-attention computation, which we call HIBRIDS-ENC.", "Given the $i$-th query $q_i$ and the matrix $K$ formed by the $n$ keys for all input tokens, HIBRIDS adds a bias for each key, with respect to the $i$-th query, to the attention calculation: $a_{ij} = \mathrm{softmax}(q_i K^{\top} + b_i)_j$ (1), where the vector $b_i = [b_{i1}, \ldots, b_{ij}, \ldots, b_{in}]$ contains the bias terms derived from our hierarchical biases as follows: $b_{ij} = B[\mathrm{PathLen}(i, j), \mathrm{LvlDiff}(i, j)]$ (2), where $\mathrm{PathLen}(i, j)$ and $\mathrm{LvlDiff}(i, j)$ are the path length and level difference between the sections that tokens $i$ and $j$ belong to.", "Note that $b_{ij}$ varies among different heads (a minimal code sketch of this bias lookup is given after this list).", "HIBRIDS-ENC guides tokens to attend to structurally related tokens during encoding.", "We then apply HIBRIDS to the decoder cross-attention calculation, named HIBRIDS-DEC, to encourage more coherent generation by establishing better alignment with the source document.", "At generation step $t$, the cross-attention weight to the $j$-th input token adjusted by bias $b_{tj}$ is obtained similarly to Eq. 1, with the following modification.", "We calculate $b_{tj}$ as the weighted sum of the hierarchical biases from all input tokens (indexed by $l$) to the $j$-th token.", "The weight is chosen as the decoder's second-to-last layer's cross-attention score between the $t$-th generated token and the $l$-th input token, which has been shown to better capture word alignment (Garg et al., 2019; Cao and Wang, 2021a).", "$b_{tj}$ is only applied to the decoder's last layer, with the following formulation: $b_{tj} = \sum_l a^{\mathrm{crs}}_{tl} \, B[\mathrm{PathLen}(l, j), \mathrm{LvlDiff}(l, j)]$ (3), where $a^{\mathrm{crs}}_{tl}$ is the decoder's second-to-last layer's cross-attention weight from generation step $t$ to the $l$-th input token.", "HIBRIDSS with Selected Relations.", "We further consider keeping only salient relations from the tree to reduce the number of parameters to learn, including self (same section), parent-child, ancestor-descendant, sibling, neighboring in text, and within the same top-level section (e.g., 1.1.1 and 1.2 are both in 1).", "In total, they account for 21.6% of all relation occurrences.", "The modified HIBRIDSS can also be applied to both the encoder and the decoder.", "We introduce a new summarization task in this section: given a document or several sections of a document, we aim to generate question-summary (QS) pairs that are organized hierarchically.", "As shown in Figure 1, this QS hierarchy lays out details for topics at multiple levels, with each child QS pair expanding on the content of its parent.", "Our task is motivated by how humans learn knowledge in a top-down fashion, where general knowledge is acquired first and details and in-depth content are explored later (Hintikka, 1981).", "This hierarchy proactively highlights the
document structure, to further promote content engagement and comprehension (McKeown et al., 2009).", "We first annotate a new dataset, GOVREPORT-QS, with hierarchical QS pairs, based on articles and corresponding summaries selected from the GOVREPORT dataset (Huang et al., 2021).", "As these documents and summaries have 9,409 and 553 words on average, respectively, directly annotating full documents with a QS hierarchy presents a challenge.", "To address this, we ask annotators to create hierarchical questions for a selected summary paragraph and only allow them to select complete sentences from the summary paragraph as the corresponding answers.", "Each question created should be fully addressed by its answer, and the answer should not contain information irrelevant to the question.", "For follow-up questions, annotators are encouraged to ask about specific details or issue questions that can yield summaries that elaborate on their parents.", "Annotators are also instructed to construct hierarchies with as many levels as possible.", "Figure 1 demonstrates how hierarchical questions are created and how answer sentences are selected when annotating a report on the development of renewable energy.", "To cover more documents and avoid collecting shallow hierarchies, each summary paragraph is annotated by one annotator, and we select high-quality summary paragraphs for annotation based on heuristic rules, e.g., each paragraph should have at least 3 sentences and 70 words and an adequate level of abstractiveness as measured by the normalized density of extractive fragments (Grusky et al., 2018) (with a threshold of < 0.15).", "Annotation instructions and details of paragraph selection are in Appendix A. We hired 11 college students who are native English speakers to carry out the annotation tasks in multiple rounds.", "Feedback was provided to each annotator after each round.", "A finalization stage was conducted after collecting all annotations, where 4 high-quality annotators were asked to correct typos, remove factoid questions, and make minor adjustments to the hierarchies when errors were detected.", "GOVREPORT-QS Statistics.", "In total, 6,153 summary paragraphs are annotated with 25,055 QS pairs.", "On average, 4.07 QS pairs are created per summary paragraph, spanning 2.26 levels.", "70.5% and 23.6% of paragraphs are annotated with two and three levels of questions, respectively, making our dataset a valuable benchmark for studying QS hierarchy generation, query-focused summarization, and question generation.", "The QS hierarchies then become the generation targets, and we construct inputs to our QS hierarchy generation system by mapping annotated summary paragraphs back to sections in the source documents.", "Concretely, we match each summary sentence to a document paragraph based on a combination of BERT-based, word-overlap-based, and entity-overlap-based similarities (details in Appendix A).", "All sections to which the matched paragraphs belong, along with the titles of their ancestor sections, are combined to serve as the system input for generating the corresponding QS hierarchy, as demonstrated in Figure 1.", "The paired sections have an average length of 2,029 words, longer than the documents in many standard summarization benchmarks.", "Task I: QSGen-Hier.", "Based on GOVREPORT-QS, we first experiment with a setup where, given the aligned document sections and a root question, the model is expected to produce a summary that addresses the question as well as the rest of the
hierarchy.", "To linearize a QS hierarchy for the Transformer sequential decoder, we concatenate its QS pairs following a depth-first traversal (a sketch of this linearization is given after this list).", "Special tokens are inserted before each QS pair to indicate the change of its level from the previous QS pair: [L↑], [L↓], and [L-] indicate that the level has incremented, decremented, and not changed, respectively.", "For example, the sample hierarchy in Figure 1 can be formulated as: A1 [L↑] Q1.1 A1.1 [L-] Q1.2 A1.2 [L↑] Q1.2.1 A1.2.1.", "On this task, we divide our samples into train/dev/test splits with sizes of 4,878/644/631.", "Task II: QSGen-ChildQ.", "Next, we leverage GOVREPORT-QS for follow-up question generation: given a QS pair and the aligned document sections, we aim to generate all child questions.", "With this setup, two samples can be created from the example in Figure 1.", "The first takes as input Q1 A1 and the aligned sections to generate Q1.1 Q1.2, whereas the other reads in Q1.2 A1.2 and the aligned sections to produce Q1.2.1.", "Here we construct train/dev/test splits with sizes of 7,157/958/942.", "Task III: Full Summary Generation.", "We also conduct experiments on GOVREPORT to test HIBRIDS on generating long-form summaries for long inputs.", "We use the original data splits with 17,516/974/973 samples in the train/dev/test sets.", "We further collect a new dataset from WikiProject Biography (WIKIBIOSUM) to perform biography summarization.", "After collecting all available biographies, we keep the ones with at least two levels of section hierarchy and preserve the section structures of all levels.", "For each article, the paragraph before the first section is treated as the target summary, and the rest becomes the input.", "The finalized dataset has 20,833 pairs, divided into 18,751/1,041/1,041 samples for the train/dev/test sets.", "The average lengths of the input and output for WIKIBIOSUM are 3,478 and 1,266 words.", "Details of the WIKIBIOSUM data collection and filtering procedures are in Appendix B.
We set the maximum input length to 5,120 for QSGen-Hier, QSGen-ChildQ, and full document summarization on WIKIBIOSUM.", "On GOVREPORT, the limit is set to 16,384.", "Evaluation Metrics.", "We use ROUGE (Lin, 2004) for summarization evaluation and additionally report BLEU up to 4-grams (Papineni et al., 2002) for evaluating the generated questions.", "We propose to evaluate the generated QS hierarchy against the reference hierarchy with F1 scores calculated as follows, inspired by the labeled attachment score in dependency parsing (Zeman et al., 2017): we first map each generated QS pair to a reference QS pair following the highest sum of ROUGE-1 and ROUGE-2 scores between their summaries.", "After that, we consider two QS pairs with a parent-child relation in the generated hierarchy.", "A match is established only when their mapped QS pairs have a parent-child or ancestor-descendant relation in the reference hierarchy.", "Precision can then be calculated based on the matching results.", "We further weight each match by the sum of the ROUGE-1 and ROUGE-2 scores calculated over both the parent and child summaries.", "Weighted recall and F1 are calculated similarly.", "Comparisons.", "All tasks in this work involve long inputs.", "To allow efficient encoding, we use LONGFORMER (Beltagy et al., 2020) with a window size of 1024 as the base model, and fine-tune it for all systems and comparisons.", "We first consider comparisons that add special tokens to encode document structure: (1) SECTOK inserts a special token [SEC] at the start of each section.", "(2) LVLSECTOK further differentiates sections at varying levels using different tokens (e.g., [SEC-L1] for 1, [SEC-L2] for 1.1).", "Based on LVLSECTOK, we build all HIBRIDS variants and the other comparisons listed below: HIERENC: we implement the hierarchical model by Rohde et al. (2021), where we replace its sentence encoder with a section encoder of 12 layers to maintain section structures.", "Among all models, HIERENC requires the most architectural change and adds the most parameters to learn.", "MULTITASK predicts the selected relations used by HIBRIDSS (3) in a multi-task prediction setup with a bilinear classifier operating on the representations of section tokens.", "We use equal weights for the prediction loss and the summarization loss.", "TOKBIAS uses linear relative position biases as in T5 (Raffel et al., 2020), which changes Eq. 2 to $b_{ij} = R[i - j]$, where $R[\cdot]$ is a lookup table with each item corresponding to a learnable bias for a given relative distance.", "SECBIAS replaces the token-level linear distance in TOKBIAS with section-level linear distance.", "Notably, LONGFORMER and the models using special tokens have 4.59M parameters.", "HIBRIDS and the models with linear relative position biases use about 4.60M parameters in total.", "On the other hand, HIERENC and MULTITASK modify the architecture and have 6.62M and 4.66M parameters, which is less efficient for learning compared with models that use bias terms to adjust the attention calculation.", "Results on QSGen-Hier.", "We report results on the task of generating QS hierarchies in Table 1.", "HIBRIDS-ENC uniformly outperforms other variants and all comparisons on all metrics, except
for the ROUGE-1 and ROUGE-L scores achieved by HIERENC.", "[Figure example: a report passage — U.S. Attorney's Office Actions to Enforce the LDA: the Office stated that it has sufficient authority and resources to enforce compliance with LDA requirements — paired with the generated question Q1: What authority does the Office for the District of Columbia have in regard to LDA requirements?]", "Note that HIERENC learns 2M more parameters than our models, and it produces QS hierarchies of lower quality despite its competitive ROUGE scores (Figure 3).", "This signifies the effectiveness of our design, which directly injects structural information into the word-level relation computation.", "Meanwhile, HIBRIDS on the encoder is better at hierarchy quality than its variant on the decoder, suggesting the importance of resolving section relations during encoding.", "Though not reported here, we experimented with HIBRIDS on both the encoder and the decoder, and it results in degraded performance.", "One possible cause is that HIBRIDS functions differently in these two setups (discussed in Section 7).", "We will explore better fusion techniques in future work.", "Results on QSGen-ChildQ.", "Results on generating follow-up questions further validate the usefulness of hierarchical biases, as shown in Table 2, where the questions generated by HIBRIDS-ENC have the best quality as measured by all metrics except BLEU.", "[Table 2 — Results for QSGen-ChildQ (R1 / R2 / RL / B4): LONGFORMER 26.90 / 8.69 / 25.57 / 14.44; SECTOK 26.76 / 8.82 / 25.42 / 14.51; LVLSECTOK 26.80 / 8.75 / 25.52 / 14.33; structure-aware comparisons: HIERENC 26.38 / 8.81 / 24.99 / 14.54, MULTITASK 26.84 / 8.46 / 25.41 / 14.59; models with linear bias: TOKBIAS 26.73 / 8.69 / 25.38 / 14.43, SECBIAS 27.25 / 9.07 / 25.92 / 14.76; our models: HIBRIDS-ENC 27.33 / 9.46 / 26.00 / 14.73, HIBRIDSS-ENC 26.41 / 8.74 / 24.99 / 14.44, HIBRIDS-DEC 27.17 / 8.67 / 25.71 / 14.36, HIBRIDSS-DEC 26.29 / 8.50 / 25.09 / 14.30.]", "SECBIAS, which is aware of section-level linear distance, also obtains outstanding performance, since it focuses on intra-section information and thus better determines which child questions should be asked for better relevance.", "Human evaluation is conducted on QSGen-Hier, for the five models with the highest automatic scores, to help understand how well the generated hierarchies are structured.", "We hire three judges who have extensive experience in summarization annotation and evaluation tasks to assess 50 groups of question-summary hierarchies.", "Human inspection of randomly selected outputs shows that most system generations have an appropriate coverage of the salient content in the source.", "Therefore, we focus on evaluating both the global coherence and the local coherence of the QS hierarchies based on the following two aspects.", "First, we ask evaluators to correct each generated hierarchy by rearranging the QS pairs, in steps, so that each pair is attached to the parent that forms the best follow-up relation.", "For each step, they are only allowed to attach a pair to its grandparent or a sibling (i.e., the parent or a child of its current parent).", "They then report the number of edits conducted for the rearrangement.", "Second, for each QS pair, we ask them to determine if the question can be answered by the summary.", "Details of human evaluation are in Appendix C.
As can be seen from Table 3, the QS hierarchies generated by the HIBRIDS-ENC model contain the best-structured summaries, as they require the fewest corrections, and the generated questions are also more likely to be addressed by the corresponding summaries.", "Despite being competitive on automatic metrics, SECTOK generates hierarchies that require the most corrections.", "Upon additional inspection, we find that HIBRIDS's outputs often have better local coherence than the comparisons.", "Additionally, all models struggle to generate more engaging questions, which poses another challenge for future studies.", "As demonstrated in Figure 4, HIBRIDS with full hierarchical biases outperforms all comparisons on both datasets, suggesting that our design of including structural relations in bias terms can generalize to other tasks.", "Compared to the results on QS hierarchy generation, using HIBRIDS on the decoder yields greater improvement on full summary generation, especially in the biography domain, where HIBRIDS-DEC obtains the best performance.", "It is likely that the longer summary length and higher compression ratio on WIKIBIOSUM (1,266 and 0.45) make generation coherence more important, which benefits from better alignment.", "This highlights how hierarchical biases can aid long text generation.", "Here we aim to understand what is learned by our hierarchical biases.", "For HIBRIDS-ENC and HIBRIDS-DEC trained on QSGen-Hier, we visualize the values of their learned hierarchical biases, averaged over all heads at all layers, for each (path length, level difference) pair on an example structure.", "Additional visualization is in Appendix D. From Figure 5 we see that using HIBRIDS on the encoder encourages models to encode various relations, e.g., by upweighting grandparents (1.1.1 to 1, 1.1.1.1 to 1.1) and preceding siblings (1.2 to 1.1), and downweighting children (1 to 1.1 and 1.2, 1.1 to 1.1.1).", "This highlights the need to learn heterogeneous relations among sections beyond token distances.", "By contrast, HIBRIDS on the decoder consistently biases towards parent and sibling contexts.", "This might be because the generation of fluent and coherent question-summary pairs relies on awareness of the scope of sections at the same or higher levels.", "We examine which design choices contribute the most to the performance gain of HIBRIDS by carrying out ablation studies on QSGen-Hier with HIBRIDS-ENC.", "We consider taking out (1) the level difference, (2) the path length, and (3) the asymmetry of the path length.", "As shown in Table 4, removing any component reduces the summaries' content coverage and the hierarchy quality, underscoring their contributions in more precisely representing structural relations for better document encoding.", "Level difference adds the most to hierarchy quality.", "[Table 4 — Ablation study results (Summary RL / Question B4 / Hierarchy F1): HIBRIDS-ENC 38.03 / 10.16 / 13.26; w/o Level Difference -0.50 / -0.08 / -0.51; w/o Path Length -0.43 / +0.05 / -0.18; w/o Asymmetric Path -0.15 / -0.12 / -0.18.]", "We further study whether HIBRIDS can boost the section encoder of HIERENC.", "Table 5 shows that HIERENC with HIBRIDS gains further improvements on generating QS hierarchies and on full document summarization on GOVREPORT.", "This points to promising future adoption of HIBRIDS by existing models that would benefit from encoding document structure.", "We present HIBRIDS, which effectively and efficiently injects document structure information into abstractive summarization models via hierarchical learnable biases
that adjust the attention score matrix.", "A new task, hierarchical question-summary generation, is then introduced for generating hierarchically organized question-summary pairs, to expose document structure and salient content to readers.", "We annotate a new dataset consisting of 6,153 summary paragraphs with question-summary hierarchies to facilitate our study, and it can also be used for query-focused summarization and question generation.", "Experiments on hierarchical question-summary generation and full summary generation show that HIBRIDS produces question-summary hierarchies of higher quality, as measured by both automatic metrics and human judges, and achieves higher content coverage of summaries than competitive comparisons, as reported by ROUGE.", "This work is supported in part by the National Science Foundation through grant IIS-2046016, and by Oracle Cloud credits and related resources provided by the Oracle for Research program.", "We thank the anonymous reviewers for their valuable suggestions.", "Collection of GOVREPORT-QS and WIKIBIOSUM.", "We comply with the terms of use and copyright policies of all data sources during the collection of GOVREPORT-QS and WIKIBIOSUM.", "Personal and other sensitive information is not collected, to ensure the privacy of content creators.", "Before annotating GOVREPORT-QS, we obtain consent from the annotators and inform them of their right to temporarily suspend or quit the annotation process.", "During annotation, annotators are fairly compensated ($15 per hour).", "Limitations and Potential Risks of HIBRIDS and GOVREPORT-QS.", "While our experiments focus on datasets consisting of formal long documents, we recognize that long documents could be written in informal language, where our model might not perform well and could generate degraded or even incorrect outputs.", "Despite recent advances in improving summary factuality and its evaluation (Kryscinski et al., 2020; Goyal and Durrett, 2020; Scialom et al., 2021; Cao and Wang, 2021b), the accuracy of existing factuality evaluation metrics has not been verified on long documents, which further increases the risk of incorrect outputs from our model.", "As our GOVREPORT-QS is based on reports from the United States (US) Government, the topics covered by the dataset are mostly relevant to the national interests of the US.", "Therefore, models trained on our dataset might not be suitable for producing structured summaries for documents published by other countries that focus on other topics.", "Moreover, our GOVREPORT-QS might bias the model towards a pro-US perspective, which could produce outputs that are harmful to certain populations." ]
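The bias lookup of Eqs. 1-3 in the record above amounts to indexing a learnable table with tree distances. Below is a minimal single-head PyTorch sketch; the class name, table size, distance clamping, and the 1/sqrt(d) logit scaling are illustrative assumptions, and the actual model keeps one table per attention head.

```python
import torch
import torch.nn as nn

class HibridsBias(nn.Module):
    """Adds B[PathLen(i, j), LvlDiff(i, j)] to attention logits (cf. Eqs. 1-2)."""

    def __init__(self, max_path=16, max_lvl=8):
        super().__init__()
        # learnable lookup table B; indices are shifted so that negative
        # path lengths / level differences map to valid rows and columns
        self.table = nn.Parameter(torch.zeros(2 * max_path + 1, 2 * max_lvl + 1))
        self.max_path, self.max_lvl = max_path, max_lvl

    def forward(self, q, k, path_len, lvl_diff):
        # q, k: (n, d); path_len, lvl_diff: (n, n) signed integer tensors
        # computed once per document from the section structure tree
        p = path_len.clamp(-self.max_path, self.max_path) + self.max_path
        l = lvl_diff.clamp(-self.max_lvl, self.max_lvl) + self.max_lvl
        bias = self.table[p, l]                                # b_ij, (n, n)
        logits = q @ k.transpose(-1, -2) / q.size(-1) ** 0.5 + bias
        return torch.softmax(logits, dim=-1)                   # a_ij

# toy usage: 4 tokens that all share one section (all distances are zero)
zeros = torch.zeros(4, 4, dtype=torch.long)
attn = HibridsBias()(torch.randn(4, 8), torch.randn(4, 8), zeros, zeros)
assert attn.shape == (4, 4)
```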
[ "abstain", "abstain", "method", "objective", "objective", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "other", "abstain", "method", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "objective", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method" ]
[ "Nested named entity recognition (NER) has been receiving increasing attention.", "Recently, Fu et al. (2020) adapt a span-based constituency parser to tackle nested NER.", "They treat nested entities as partially-observed constituency trees and propose the masked inside algorithm for partial marginalization.", "However, their method cannot leverage entity heads, which have been shown useful in entity mention detection and entity typing.", "In this work, we resort to more expressive structures, lexicalized constituency trees in which constituents are annotated by headwords, to model nested entities.", "We leverage the Eisner-Satta algorithm to perform partial marginalization and inference efficiently.", "In addition, we propose to use (1) a two-stage strategy, (2) a head regularization loss, and (3) a head-aware labeling loss to enhance performance.", "We conduct a thorough ablation study to investigate the functionality of each component.", "Experimentally, our method achieves state-of-the-art performance on ACE2004, ACE2005 and NNE, competitive performance on GENIA, and fast inference.", "Our code will be publicly available at: github.com/LouChao98/nner_as_parsing.", "Named Entity Recognition (NER) is a fundamental task in information extraction, playing an essential role in many downstream tasks.", "Nested NER brings more flexibility than flat NER by allowing nested structures, thereby enabling more fine-grained meaning representations and broader applications (Byrne, 2007; Dai, 2018).", "Traditional sequence-labeling-based models have achieved remarkable performance on flat NER but fail to handle nested entities.", "To resolve this problem, many layer-based methods (Ju et al., 2018; Fisher and Vlachos, 2019; Shibuya and Hovy, 2020; Wang et al., 2020, 2021) have been proposed to recognize entities layer-by-layer in bottom-up or top-down manners.", "However, they suffer from the error propagation issue due to cascade decoding.", "Recently, Fu et al. (2020) adapt a span-based constituency parser to tackle nested NER, treating annotated entity spans as a partially-observed constituency tree and marginalizing latent spans out for training.", "Their parsing-based method, namely PO-TreeCRF, admits global exact inference thanks to the CYK algorithm (Cocke, 1969; Younger, 1967; Kasami, 1965), thereby eliminating the error propagation problem.", "However, their method does not consider entity heads, which provide important clues for entity mention detection (Lin et al., 2019; Zhang et al., 2020d) and entity typing (Katiyar and Cardie, 2018; Choi et al., 2018; Chen et al., 2021).", "For example, University and California are strong clues of the existence of ORGEDU and STATE entities in Fig. 1.", "Motivated by this and inspired by head-driven phrase structures, Lin et al. (2019) propose the Anchor-Region Network (ARN), which first identifies all entity heads and then predicts the boundary and type of the entities governed by each entity head.", "However, their method is heuristic and greedy, suffering from the error propagation problem as well.", "Our main goal in this work is to obtain the best of both worlds: proposing a probabilistically principled method that enables exact global inference like Fu et al. (2020), while taking entity heads into account like Lin et al. (2019).", "To enable exact global inference, we also view observed entities as partially-observed trees.", "Since constituency trees cannot model entity heads, we resort to lexicalized trees, in which constituents are annotated with headwords.", "A lexicalized tree embeds a constituency tree and a dependency tree (Gaifman, 1965), and lexicalized constituency parsing can thus be viewed as joint dependency and constituency parsing (Eisner and Satta, 1999; Collins, 2003).", "Figure 1: An example sentence with a compatible latent lexicalized constituency tree (top) and the observed entities (bottom).", "Fig. 1 illustrates an example lexicalized tree.", "Joint dependency and constituency parsing has been shown to outperform standalone constituency parsing (Zhou and Zhao, 2019; Fernández-González and Gómez-Rodríguez, 2020), possibly because modeling dependencies between headwords helps predict constituents correctly.", "Hence, in the context of nested NER, we have reason to believe that modeling latent lexicalized constituency trees would improve entity prediction over modeling latent constituency trees, and we verify this in our experiments.", "When using a lexicalized constituency tree for nested NER, only some of the unlexicalized spans, i.e., the entities, are observed, so we need to marginalize latent spans and dependency arcs out for training.", "Inspired by the masked inside algorithm of Fu et al. (2020), we propose a masked version of the Eisner-Satta algorithm (Eisner and Satta, 1999), a fast lexicalized constituency parsing algorithm, to perform partial marginalization.", "We also adopt the Eisner-Satta algorithm for fast inference.", "Besides the difference in parsing formalism and algorithms, our work also differs from the work of Fu et al. (2020) and Lin et al. (2019) in the following three aspects.", "First, inspired by Zhang et al. (2020a), we adopt a two-stage parsing strategy, i.e., we first predict an unlabeled tree and then label the predicted constituents, instead of using the one-stage parsing strategy of PO-TreeCRF.", "We show that two-stage parsing can improve the performance of both PO-TreeCRF and our proposed method.", "Second, Lin et al. (2019) observe that each entity head governs only one entity span in most cases, so they impose this as a hard constraint during learning and inference, which is potentially harmful since the constraint is not always satisfied.", "Instead, we add a soft KL penalty term to encourage satisfaction of the constraint, which is reminiscent of posterior regularization (Ganchev et al., 2010; Zhang et al., 2017).", "Third, considering that gold entity heads are not given, Lin et al. (2019) propose a bag loss for entity boundary detection and labeling.",
"However, this loss is heuristic and introduces an additional hyperparameter, to which the final performance is sensitive.", "In contrast, entity boundary detection is learned in the first stage of our method, and in the second stage we propose a more principled labeling loss based on expectations (i.e., marginal likelihoods) over all possible entity heads within gold entity spans, which can be estimated efficiently and does not introduce new hyperparameters.", "We conduct experiments on four benchmark datasets, showing that our model achieves state-of-the-art results on ACE2004, ACE2005 and NNE, and competitive results on GENIA, validating the effectiveness of our method.", "A labeled constituency tree can be represented as a rank-3 binary tensor $T$ where $T_{ijk} = 1$ if there is a span from the $i$-th word to the $j$-th word with label $k$ in the tree, and $T_{ijk} = 0$ otherwise.", "We assume the 0-th label is reserved for $\varnothing$ (i.e., no label) without loss of generality.", "Similarly, an unlabeled constituency tree can be represented as a binary matrix $T'$.", "One-stage span-based constituency parsers decompose the score of a labeled constituency tree into the scores of constituents $s_{ijk}$: $s(T) = \sum_{ijk} T_{ijk}\, s_{ijk}$.", "They use the CYK algorithm to recover the optimal labeled tree.", "In contrast, two-stage constituency parsers score unlabeled trees and constituent labels independently.", "They decompose the score of an unlabeled constituency tree into the scores of spans $s_{ij}$: $s(T') = \sum_{ij} T'_{ij}\, s_{ij}$.", "They use the CYK algorithm to recover the optimal unlabeled tree in the first stage and then use a separate component to label spans, including with the $\varnothing$ label, in the second stage.", "Zhang et al. (2020c) show that adopting the two-stage parsing strategy leads to better results in constituency parsing.", "PO-TreeCRF (Fu et al., 2020) adapts a one-stage constituency parser to tackle nested NER.", "It views the set of entities $y := \{(i, j, k), \ldots\}$ as observed parts of a constituency tree $T$, where $(i, j)$ is the unlabeled entity span and $k$ is the entity label.", "We refer to the other constituents as latent spans.", "A labeled tree $T$ is compatible with $y$ if $T_{ijk} = 1$ for every entity $(i, j, k) \in y$ and $T_{ij0} = 1$ for all latent spans $(i, j)$ (recall that the 0-th label is $\varnothing$).", "Define $\mathcal{T}(y)$ as the set of all trees compatible with $y$.", "PO-TreeCRF maximizes the total likelihood of all compatible trees: $s(y) = \log \sum_{T \in \mathcal{T}(y)} \exp(s(T))$ and $\log p(y) = s(y) - \log Z$, where $\log Z$ is the log-partition function.", "The difficulty is how to estimate $s(y)$ efficiently.", "Fu et al. (2020) propose the masked inside algorithm to tackle this, in which they set the scores of all incompatible spans (those that overlap with, but are not nested in, any span of $y$) to negative infinity before running the inside algorithm.", "We refer readers to their paper for more details; a minimal sketch of this masking idea is given below.",
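A minimal numpy sketch of the masking idea follows. It reconstructs only the crossing-span mask described above (the full algorithm additionally forces the observed entity spans themselves to be included in the tree), and all names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the crossing-span mask plus the standard inside recursion.
# Assumed: log-space span scores s[i, j] for spans x_i..x_j (inclusive).
import numpy as np

NEG_INF = -1e9

def masked_inside(s, entities):
    n = s.shape[0]
    # Mask spans that overlap with, but are not nested in, an observed entity.
    for i in range(n):
        for j in range(i, n):
            if any((i < a <= j < b) or (a < i <= b < j) for a, b in entities):
                s[i, j] = NEG_INF
    beta = np.full((n, n), NEG_INF)
    for i in range(n):
        beta[i, i] = s[i, i]
    for width in range(1, n):
        for i in range(n - width):
            j = i + width
            splits = [beta[i, k] + beta[k + 1, j] for k in range(i, j)]
            beta[i, j] = s[i, j] + np.logaddexp.reduce(splits)
    return beta[0, n - 1]  # log-sum over binary trees avoiding crossing spans
```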
"Figure 1 shows an example lexicalized constituency tree.", "We omit all constituent labels for brevity.", "Each constituent is annotated by a headword.", "A non-leaf constituent span consists of two adjacent sub-constituents and copies the headword from one of them.", "We refer to the copied headword as the inherited head and the other headword as the non-inherited head.", "We can draw a dependency arc from the inherited head to the non-inherited head.", "A dependency tree can be obtained by reading off all headwords recursively, and hence, in this view, a lexicalized constituency tree embeds a dependency tree and a constituency tree.", "The $O(n^4)$ Eisner-Satta algorithm (Eisner and Satta, 1999) can be used to calculate the partition function or obtain the best parse if we decompose the score of a lexicalized constituency tree into scores of spans and arcs.", "We refer interested readers to Appendix A for details of the Eisner-Satta algorithm.", "Notations: Given a length-$n$ sentence $x = x_0, \ldots, x_{n-1}$ with (gold) entity set $y := \{(i, j, \ell^{*}), \ldots\}$, where $(i, j)$ is an unlabeled entity span and $\ell^{*}$ is its set of entity labels (there can be multiple labels for one entity).", "We denote by $\bar{y}$ the set of unlabeled entity spans, i.e., $\bar{y} := \{(i, j), \ldots\}$.", "The first stage always predicts $2n - 1$ spans, and most of them are not entities.", "Hence, naively adapting the two-stage parsing strategy to nested NER suffers from an imbalanced classification problem when predicting labels in the second stage, because the $\varnothing$ label would dominate all the entity labels.", "To bypass this problem, we assign 0-1 labels to unlabeled constituency trees, where 0 stands for latent spans and 1 stands for entities.", "This transfers the burden of identifying non-entities to the first stage, in which the binary classification problem is much more balanced and easier to tackle.", "The total training loss can be decomposed as $L = L_{\text{tree}} + L_{\text{label}} + L_{\text{reg}}$, where $L_{\text{tree}}$ is a 0-1 labeled constituency tree loss, $L_{\text{label}}$ is a head-aware labeling loss, and $L_{\text{reg}}$ is a regularization loss based on the KL divergence.", "Encoding and scoring: We feed the sentence into the BERT encoder (Devlin et al., 2019), apply scalar mixing (Peters et al., 2018) to the last four layers of BERT, and apply mean-pooling over all sub-word embeddings to obtain word-level contextual embeddings.", "We concatenate static word embeddings, e.g., GloVe (Pennington et al., 2014), with the contextual embeddings to obtain the word representations $a = a_0, \ldots, a_{n-1}$.", "Then we feed $a$ into a three-layer bidirectional LSTM (Hochreiter and Schmidhuber, 1997) network (BiLSTM): $\ldots, (\overrightarrow{b_i}, \overleftarrow{b_i}), \ldots = \mathrm{BiLSTM}([\ldots, a_i, \ldots])$.", "Next, we use deep biaffine scoring functions (Dozat and Manning, 2017) to calculate span scores $s^c \in \mathbb{R}^{n \times n \times 2}$ and arc scores $s^d \in \mathbb{R}^{n \times n}$: $e^{c,\mathrm{in/out}}_i = \mathrm{MLP}^{c,\mathrm{in/out}}([\overrightarrow{b_i}; \overleftarrow{b_{i+1}}])$, $e^{d,\mathrm{in/out}}_i = \mathrm{MLP}^{d,\mathrm{in/out}}([\overrightarrow{b_i}; \overleftarrow{b_i}])$, $s^c_{ij} = \mathrm{PN}([e^{c,\mathrm{in}}_i; 1]^{\top} W^c\, [e^{c,\mathrm{out}}_j; 1])$ and $s^d_{ij} = \mathrm{PN}([e^{d,\mathrm{in}}_i; 1]^{\top} W^d\, [e^{d,\mathrm{out}}_j; 1])$, where the MLPs are multi-layer perceptrons that project embeddings into $k$-dimensional spaces; $W^c \in \mathbb{R}^{(k+1) \times 2 \times (k+1)}$ and $W^d \in \mathbb{R}^{(k+1) \times (k+1)}$ are trainable parameters; and PN is Potential Normalization, which normalizes scores to follow unit Gaussian distributions and has been shown beneficial (Fu et al., 2020); a minimal sketch of the biaffine scorer is given below.",
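A minimal PyTorch sketch of a deep biaffine scorer in the sense of Dozat and Manning (2017) follows; the shapes and the bias-augmentation mirror the notation above, but the module itself is an illustrative reconstruction, not the released code.

```python
# Hedged sketch of a biaffine scorer over "in"/"out" span representations.
import torch
import torch.nn as nn

class Biaffine(nn.Module):
    def __init__(self, in_dim, out_dim=1):
        super().__init__()
        # +1 accounts for the bias dimension appended to the inputs.
        self.W = nn.Parameter(torch.randn(out_dim, in_dim + 1, in_dim + 1))

    def forward(self, x, y):
        # x: (n, in_dim) "in" vectors; y: (n, in_dim) "out" vectors.
        ones = x.new_ones(x.size(0), 1)
        x = torch.cat([x, ones], dim=-1)   # (n, in_dim + 1)
        y = torch.cat([y, ones], dim=-1)
        # scores[o, i, j] = [x_i; 1]^T W_o [y_j; 1]
        return torch.einsum('xi,oij,yj->oxy', x, self.W, y)
```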
"Scores of trees: A 0-1 labeled lexicalized constituency tree $l$ embeds an unlabeled dependency tree $d$ and a 0-1 labeled constituency tree $c$.", "The label set is $\{0, 1\}$, where 0 denotes latent spans and 1 denotes entity spans.", "We use a binary rank-3 tensor $C \in \mathbb{R}^{n \times n \times 2}$ to represent $c$, where $C_{ijk} = 1$ if and only if there is a span from $x_i$ to $x_j$ with label $k$ in $c$, and a binary matrix $D \in \mathbb{R}^{n \times n}$ to represent $d$, where $D_{ij} = 1$ if and only if there is an arc from $x_i$ to $x_j$ in $d$.", "We define the score of $l$ as $s(l) = s(c) + s(d) = \sum_{ijk} C_{ijk}\, s^c_{ijk} + \sum_{ij} D_{ij}\, s^d_{ij}$.", "Structural tree loss: We marginalize all latent spans and arcs out to define the loss: $s(y) = \log \sum_{T \in \mathcal{T}} \exp(s(T))$ and $L_{\text{tree}} = \log Z - s(y)$, where $\mathcal{T}$ is the set of all compatible lexicalized trees whose constituents contain $\bar{y}$, and $\log Z$ is the log-partition function, which can be estimated by the Eisner-Satta algorithm.", "For each compatible tree $T \in \mathcal{T}$, the 0-1 labels are assigned in accordance with the entity spans in $y$.", "We use a masked version of the Eisner-Satta algorithm (Appendix A) to estimate $s(y)$.", "Regularization loss: As previously discussed, an entity head governs only one entity in most cases.", "But imposing a hard constraint is sub-optimal because there are also cases that violate this constraint.", "Hence, we want to encourage the model to satisfy this constraint in a soft manner.", "Inspired by posterior regularization (Ganchev et al., 2010; Zhang et al., 2017), we build a constrained TreeCRF and minimize the KL divergence between the constrained and the original unconstrained TreeCRFs.", "The first problem is how to construct the constrained TreeCRF.", "We propose to hack the forward pass (i.e., inside) of the Eisner-Satta algorithm to achieve this: we decrease the arc scores by a constant value (we typically use 0.4) whenever the parent has already governed an entity when computing the inside values, which discourages a head from having several children and thus governing several spans.", "We refer readers to Appendix A for more details.", "The second problem is how to optimize the KL divergence efficiently over exponentially many trees.", "We adopt the semiring designed to calculate KL divergences between structured log-linear models (Li and Eisner, 2009) from the Torch-Struct library (Rush, 2020).", "The calculation of the KL divergence is fully differentiable and thus amenable to gradient-based optimization methods.", "It has the same time complexity as the forward pass of the Eisner-Satta algorithm.", "We denote the value of the KL divergence as $L_{\text{reg}}$.", "To incorporate entity head information when labeling entity spans, we score the assignment of label $l \in \mathcal{L}$ to a span $(i, j)$ with head $x_k$ as follows: $e^{l,\mathrm{in/out}}_i = \mathrm{MLP}^{l,\mathrm{in/out}}([\overrightarrow{b_i}; \overleftarrow{b_{i+1}}])$, $e^{l,\mathrm{head}}_i = \mathrm{MLP}^{l,\mathrm{head}}([\overrightarrow{b_i}; \overleftarrow{b_i}])$ and $s^{\mathrm{label}}_{ijkl} = \mathrm{TriAff}(e^{l,\mathrm{in}}_i, e^{l,\mathrm{out}}_j, e^{l,\mathrm{head}}_k)$, where TriAff is the triaffine scoring function (Zhang et al., 2020b) and $\mathcal{L}$ is the set of all labels; an illustrative triaffine scorer is sketched below.",
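Analogously, the triaffine scorer can be sketched as a rank-3 multilinear form over (left boundary, right boundary, head) representations; the per-label weight tensor and the bias handling below are our assumptions in the spirit of Zhang et al. (2020b), not the paper's code.

```python
# Hedged sketch of a triaffine scorer for label/span/head triples.
import torch
import torch.nn as nn

class TriAffine(nn.Module):
    def __init__(self, d, n_labels):
        super().__init__()
        # One (d+1)^3 weight tensor per label; +1 is the bias dimension.
        self.W = nn.Parameter(torch.randn(n_labels, d + 1, d + 1, d + 1))

    def forward(self, e_in, e_out, e_head):
        # Each input: (n, d); append a column of ones as the bias dimension.
        pad = lambda e: torch.cat([e, e.new_ones(e.size(0), 1)], dim=-1)
        a, b, c = pad(e_in), pad(e_out), pad(e_head)
        # scores[l, i, j, k] for label l, span (i, j), head k
        return torch.einsum('labc,ia,jb,kc->lijk', self.W, a, b, c)
```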
"We reuse the encoder (BiLSTM) from Stage I.", "Nested named entities can have multiple labels.", "For instance, 7% of entity spans in the NNE dataset (Ringland et al., 2019) have multiple labels.", "We use a multilabel loss introduced by Su (2020).", "For each $(i, j, \ell^{*}) \in y$ and a potential head $x_k$ with $i \le k \le j$, we define the loss as $l(i, j, k, \ell^{*}) = \log\big(1 + \sum_{l \in \mathcal{L} \setminus \ell^{*}} \exp(s^{\mathrm{label}}_{ijkl})\big) + \log\big(1 + \sum_{l \in \ell^{*}} \exp(-s^{\mathrm{label}}_{ijkl})\big)$.", "Since the gold entity heads are not given, we define the head-aware labeling loss based on an expectation over the headword of each entity span: $L_{\text{label}} = \sum_{(i, j, \ell^{*}) \in y} \sum_{i \le k \le j} \mu_{ijk}\, l(i, j, k, \ell^{*})$, where $\mu_{ijk}$ is the marginal likelihood of $x_k$ being the headword of span $(i, j)$ under the TreeCRF, which satisfies $\sum_{i \le k \le j} \mu_{ijk} = 1$ and can be estimated efficiently via the backward pass (i.e., back-propagation (Eisner, 2016)) of the Eisner-Satta algorithm; a small sketch of this loss is given below.",
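The head-aware labeling loss above reduces to a marginal-weighted sum of per-head losses. A schematic sketch follows, with `mu` standing for the Eisner-Satta head marginals and `span_loss` for the multilabel loss $l(i, j, k, \ell^{*})$, both assumed to be computed elsewhere.

```python
# Illustrative sketch: expected labeling loss over candidate heads of each span.
def head_aware_label_loss(gold_entities, mu, span_loss):
    # gold_entities: iterable of (i, j, labels); mu[i, j, k]: marginal prob.
    # of x_k heading span (i, j); span_loss: callable returning l(i, j, k, labels).
    total = 0.0
    for (i, j, labels) in gold_entities:
        for k in range(i, j + 1):  # candidate heads inside the span
            total += mu[i, j, k] * span_loss(i, j, k, labels)
    return total
```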
"We conduct experiments on four datasets: ACE2004 (Doddington et al., 2004), ACE2005 (Walker, Christopher et al., 2006), GENIA (Kim et al., 2003) and NNE (Ringland et al., 2019).", "For ACE2004, ACE2005 and GENIA, we use the same data splitting and preprocessing as in Shibuya and Hovy (2020).", "For NNE, we use the official preprocessing script to split the train/dev/test sets.", "We refer readers to Appendix B.1 for implementation details and to Appendix B.2 for the data statistics of each dataset.", "We report span-level labeled precision (P), labeled recall (R) and labeled F1 scores (F1).", "We select models according to their performance on the development sets.", "All results are averaged over three runs with different random seeds.", "We show the comparison of various methods on ACE2004, ACE2005 and GENIA in Table 1.", "We note that there is an inconsistency in the data preprocessing.", "For instance, the data statistics shown in Table 1 of Shibuya and Hovy (2020) and Table 5 of Shen et al. (2021) do not match.", "More seriously, we find that Shen et al. (2021) and Tan et al. (2021) use context sentences, which play a crucial role in their performance improvements but are not standard practice in other work.", "In addition, they report the best result instead of the mean result.", "Hence, we rerun the open-sourced code of Shen et al. (2021) and Tan et al. (2021) using our preprocessed data and no context sentences, and we report their mean results over three different runs.", "We also rerun the code of PO-TreeCRF for a fair comparison.", "We can see that our method outperforms PO-TreeCRF, our main baseline, by 0.30/2.42/0.64 F1 scores on the three datasets, respectively.", "Our method attains 87.90 and 86.91 F1 scores on ACE2004 and ACE2005, achieving state-of-the-art performance.", "On GENIA, our method achieves competitive performance.", "We also evaluate our method on the NNE dataset, which contains many multilabeled entities.", "Table 2 shows the result: our method outperforms Pyramid by 0.27 F1 score.", "(Pyramid-Full with BERT only was not reported; with BERT+ALBERT, Pyramid-Full outperforms Pyramid-Basic only by a small margin (< 0.1). The max and logsumexp versions are the best models for BERT only and BERT+Flair, respectively.)", "Structured vs. unstructured: We study the effect of structural training and structured decoding as a whole.", "Unstructured is a baseline that adopts the local span classification loss and local greedy decoding.", "1-stage is our re-implementation of PO-TreeCRF, which adopts the latent structural constituency tree loss and uses the CYK algorithm for decoding.", "1-stage+LEX adopts the latent structural lexicalized constituency tree loss and uses the Eisner-Satta algorithm for decoding.", "All methods use the same neural encoders.", "We can see that 1-stage outperforms the unstructured baseline by 0.33 F1 score.", "Further, 1-stage+LEX outperforms 1-stage by 0.25 F1 score, verifying the effectiveness of using latent lexicalized constituency tree structures.", "1-stage vs. 2-stage: For the unstructured model, we adopt a 0-1 local span classification loss in the first stage of the two-stage version, and we observe that the two-stage version performs similarly to the one-stage version.", "On the other hand, we observe improvements for the structured methods: 2-stage outperforms 1-stage by 0.23 F1 score and 2-stage+LEX outperforms 1-stage+LEX by 0.18 F1 score, validating the benefit of adopting the two-stage strategy.", "Moreover, \"2-stage(0/1)+LEX\" outperforms \"2-stage+LEX\" by 0.15 F1 score, suggesting the effectiveness of bypassing the imbalanced classification problem.", "We also study structural training and structural decoding in a decoupled way here.", "\"-parsing\" denotes the case in which we use the latent lexicalized constituency tree loss for training but do not use the Eisner-Satta algorithm for parsing, instead predicting spans locally whenever their score for label 1 is greater than that for label 0.", "We can see that this causes a performance drop of 0.49 F1 score, indicating the importance of structural decoding, i.e., parsing.", "It is also worth noting that -parsing outperforms the unstructured baseline by 0.42 F1 score, showing the benefit of structural training even without structural decoding.", "Effect of head regularization: We can see that using the regularization loss brings an improvement of 0.24 F1 score (86.32 -> 86.56).", "In the case study (Section 5.2), we observe that some common errors are avoided because of this regularization.", "Effect of head-aware labeling loss: We can see that using the head-aware labeling loss brings an improvement of 0.30 F1 score (86.32 -> 86.62).", "When combined with the head regularization, we achieve further improvements because of more accurate head estimation (Appendix B.3).", "Table 4 shows example predictions of our models.", "In the first pair, 2-stage predicts reasonable structures (visualized in Appendix B.5) but fails to label the entities, whereas 2-stage (0-1) further predicts correct labels.", "The second pair shows that, by constraining head sharing and using head-aware entity labeling, +both successfully detects bus as a headword and then produces correct entity boundaries and labels.", "Besides, +both can be seen to handle both fine-grained and coarse-grained entities in the last two predictions: this bus near the airport is split into two entities, while all sites and people in Iraq remains one multilabeled entity.", "Table 5 gives the most common headwords of each type predicted by our model on ACE2005.", "We find that the most frequently predicted headwords are gold headwords (ACE2005 is additionally annotated with headwords, which we use only for evaluation), except for some common function words, e.g., in and of.", "This demonstrates the ability of our model to recognize headwords.", "One concern regarding our method is that, since the Eisner-Satta algorithm has an $O(n^4)$ time complexity, it could be too slow for NER practitioners.", "Fortunately, the Eisner-Satta algorithm is amenable to a highly-parallelized implementation, so that $O(n^3)$ of the $O(n^4)$ computation can be performed in parallel (Zhang et al., 2020b; Rush, 2020), which greatly accelerates parsing.",
"We adapt the fast implementation of Yang and Tu (2022b) (https://github.com/sustcsonglin/span-based-dependency-parsing/blob/main/src/inside/eisner_satta.py).", "Empirically, we observe linear running time on GPUs in most cases.", "We show a comparison of running times (both training and decoding) in Table 6.", "We measure the time on a machine with an Intel Xeon Gold 6278C CPU and an NVIDIA V100 GPU.", "We can see that, compared with PO-TreeCRF, which also uses a highly-parallelized implementation of the $O(n^3)$ CYK algorithm, our method is around 20% slower in training and decoding, which is acceptable.", "Notably, both PO-TreeCRF and our method are much faster than Seq2Set (Tan et al., 2021) and Locate&Label (Shen et al., 2021).", "Nested NER: Nested NER has been receiving increasing attention, and many methods have been proposed to tackle it.", "We roughly categorize the methods into the following groups: (1) Span-based methods: Luan et al. (2019); Yu et al. (2020); Li et al. (2021) directly assign scores to each potential entity span.", "(2) Layered methods: Ju et al. (2018); Fisher and Vlachos (2019) dynamically merge sub-spans into larger spans, and Shibuya and Hovy (2020); Wang et al. (2021) use linear-chain CRFs and recursively find second-best paths for predicting nested entities.", "(3) Hypergraph-based methods: Lu and Roth (2015); Katiyar and Cardie (2018) propose different hypergraph structures to model nested entities but suffer from the spurious structure issue, which Wang and Lu (2018) later solve.", "(4) Object-detection-based methods: Shen et al. (2021) adapt classical two-stage object detectors to tackle nested NER, and Tan et al. (2021) borrow the idea from DETR (Carion et al., 2020).", "(5) Parsing-based methods (Finkel and Manning, 2009; Wang et al., 2018; Fu et al., 2020; Yang and Tu, 2022a).", "(6) Sequence-to-sequence methods (Yan et al., 2021).", "Our method belongs to the parsing-based methods.", "Finkel and Manning (2009) use a non-neural TreeCRF parser.", "Wang et al. (2018) adapt a shift-reduce transition-based parser.", "Fu et al. (2020) use a span-based neural TreeCRF parser.", "Recently, Yang and Tu (2022a) propose a bottom-up constituency parser with pointer networks to tackle nested NER as well.", "All of them cast nested NER as constituency parsing, while we cast nested NER as lexicalized constituency parsing.", "Table 4, example predictions (brackets mark predicted entities): 2-stage: [I] PER have never heard of [a pig like [this] WEA ] WEA before !", "2-stage (0-1): [I] PER have never heard of a pig like this before !", "2-stage (0-1): [Police] PER surrounded [this bus near [the airport] FAC ] VEH,FAC with [guns] WEA drawn .", "+both: [Police] PER surrounded [this bus] VEH near [the airport] FAC with [guns] WEA drawn .", "+both: [Blix] PER stressed that [council] ORG resolutions call for [[U.N.] ORG inspectors] PER to have access to [all sites and people in [Iraq] GPE ] FAC,PER .",
"Structured models using partial trees: Full gold parse trees are expensive to obtain, so many methods have been proposed to marginalize over the latent parts of partial trees, performing either approximate marginalization via loopy belief propagation or other approximate algorithms (Naradowsky et al., 2012; Durrett and Klein, 2014), or exact marginalization via dynamic programming algorithms (Li et al., 2016; Zhang et al., 2020b; Fu et al., 2020; Zhang et al., 2021).", "Naradowsky et al. (2012); Durrett and Klein (2014) construct factor graph representations of syntactically-coupled NLP tasks whose structures can be viewed as latent dependency or constituency trees, such as NER, semantic role labeling (SRL), and relation extraction.", "Li et al. (2016); Zhang et al. (2020b) perform partial marginalization to train (second-order) TreeCRF parsers for partially-annotated dependency parsing.", "Zhang et al. (2021) view arcs in SRL as partially-observed dependency trees; Fu et al. (2020) view entities in nested NER as partially-observed constituency trees; and we view entities in nested NER as partially-observed lexicalized constituency trees in this work.", "Lexicalized parsing: Probabilistic context-free grammars (PCFGs) have been widely used in syntactic parsing.", "Lexicalized PCFGs (L-PCFGs) leverage headword information to disambiguate parsing and are thus more expressive.", "Eisner and Satta (1999) propose an efficient $O(n^4)$ algorithm for lexicalized parsing.", "Collins (2003) conducts a thorough study of lexicalized parsing.", "Recently, neurally parameterized L-PCFGs have been used in unsupervised joint dependency and constituency parsing (Zhu et al., 2020; Yang et al., 2021).", "Our work removes the grammar components and adapts the dynamic programming algorithm of lexicalized parsing (Eisner and Satta, 1999) in the spirit of span-based constituency parsing (Stern et al., 2017).", "We have presented a parsing-based method for nested NER, viewing entities as partially-observed lexicalized constituency trees, motivated by the close relationship between entity heads and entity recognition.", "Benefiting from structural modeling, our model does not suffer from error propagation or heuristic head selection, and its predictions are easy to regularize.", "Furthermore, our highly-parallelized implementation enables fast training and inference on GPUs.", "Experiments on four benchmark datasets validate the effectiveness and efficiency of our proposed method.", "We thank the anonymous reviewers for their constructive comments.", "This work was supported by the National Natural Science Foundation of China (61976139)." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "result", "result", "abstain", "method", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "objective", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "result", "result", "abstain", "result", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "abstain", "objective", "other", "other" ]
[ "A pun is a form of wordplay for an intended humorous or rhetorical effect, where a word suggests two or more meanings by exploiting polysemy (homographic pun) or phonological similarity to another word (heterographic pun).", "This paper presents an approach that addresses pun detection and pun location jointly from a sequence labeling perspective.", "We employ a new tagging scheme that enables the model to perform this joint task while properly capturing useful structural information.", "We show that our proposed model is effective in handling both homographic and heterographic puns.", "Empirical results on the benchmark datasets demonstrate that our approach can achieve new state-of-the-art results.", "There exists a class of language constructions known as puns in natural language texts and utterances, where a certain word or other lexical item is used to exploit two or more separate meanings.", "It has been shown that understanding puns is an important research question with various real-world applications, such as human-computer interaction (Morkes et al., 1999; Hempelmann, 2008) and machine translation (Schroter, 2005).", "Recently, many researchers have shown interest in studying puns, such as detecting pun sentences (Vadehra, 2017), locating puns in text (Cai et al., 2018), interpreting pun sentences (Sevgili et al., 2017) and generating sentences containing puns (Ritchie, 2005; Hong and Ong, 2009; Yu et al., 2018).", "A pun is a wordplay in which a certain word suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect.", "Puns can be generally categorized into two groups, namely heterographic puns (where the pun and its latent target are phonologically similar) and homographic puns (where the two meanings of the pun reflect its two distinct senses) (Miller et al., 2017).", "Consider the following two examples: (1) When the church bought gas for their annual barbecue, proceeds went from the sacred to the propane; (2) Some diets cause a gut reaction.", "The first punning joke exploits the sound similarity between the word propane and the latent target profane, and thus falls into the group of heterographic puns.", "The other category of English puns is the homographic pun, exemplified by the second instance, which leverages distinct senses of the word gut.", "Pun detection is the task of detecting whether there is a pun residing in the given text.", "The goal of pun location is to find the exact word appearing in the text that implies more than one meaning.", "Most previous work addresses these two tasks separately and develops separate systems (Pramanick and Das, 2017; Sevgili et al., 2017).", "Typically, a system for pun detection is built to make a binary prediction on whether a sentence contains a pun or not, where all instances (with or without puns) are taken into account during training.", "For the task of pun location, a separate system is used to predict which word in the given sentence triggers more than one semantic interpretation of the text, where the training data involves only sentences that contain a pun.", "Therefore, if one is interested in solving both problems at the same time, a pipeline approach that performs pun detection followed by pun location can be used.", "Compared to pipeline methods, joint learning has been shown to be effective (Katiyar and Cardie, 2016; Peng et al., 2018), since it is able to reduce error propagation and allows information exchange between tasks, which is potentially beneficial to all the tasks.",
"In this work, we demonstrate that the detection and location of puns can be jointly addressed by a single model.", "The pun detection and location tasks can be combined into a sequence labeling problem, which allows us to jointly detect and locate a pun in a sentence by assigning each word a tag.", "Since each context contains a maximum of one pun (Miller et al., 2017), we design a novel tagging scheme to capture this structural constraint.", "Statistics on the corpora also show that a pun tends to appear in the second half of a context.", "To capture such a structural property, we also incorporate word position knowledge into our structured prediction model.", "Experiments on the benchmark datasets show that the detection and location tasks can reinforce each other, leading to new state-of-the-art performance on these two tasks.", "To the best of our knowledge, this is the first work that performs joint detection and location of English puns using a sequence labeling approach.", "2 Approach 2.1 Problem Definition: We first design a simple tagging scheme consisting of two tags, {N, P}.", "The N tag means the current word is not a pun.", "The P tag means the current word is a pun.", "If the tag sequence of a sentence contains a P tag, then the text contains a pun and the word corresponding to P is the pun.", "The contexts have the characteristic that each context contains a maximum of one pun (Miller et al., 2017).", "In other words, there exists only one pun if the given sentence is detected as one containing a pun.", "Otherwise, there is no pun residing in the text.", "To capture this interesting property, we propose a new tagging scheme consisting of three tags, namely {B, P, A}.", "The B tag indicates that the current word appears before the pun in the given context.", "The P tag highlights that the current word is a pun.", "The A tag indicates that the current word appears after the pun.", "We empirically show that the BPA scheme can guarantee the context property that there exists a maximum of one pun residing in the text.", "Given a context from the training set, we can generate its corresponding gold tag sequence using a deterministic procedure.", "Under the two schemes, if a sentence does not contain any puns, all words will be tagged with N or B, respectively.", "For example, in the second sentence, Some diets cause a gut reaction, the pun is gut.", "Thus, under the BPA scheme, it should be tagged with P, while the words before it are assigned the tag B and the words after it the tag A, as illustrated in Figure 1.", "Likewise, the NP scheme tags the word gut with P, while all other words are tagged with N.", "Therefore, we can combine the pun detection and location tasks into one problem that can be solved by a sequence labeling approach; a minimal sketch of the two schemes is given below.",
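To make the two schemes concrete, here is an illustrative (non-official) sketch of the deterministic gold-tag generation; `pun_index` is the position of the pun word, or None when the sentence contains no pun.

```python
# Illustrative gold-tag generation for the NP and BPA schemes.
def np_tags(n, pun_index):
    return ['P' if i == pun_index else 'N' for i in range(n)]

def bpa_tags(n, pun_index):
    if pun_index is None:          # no pun: every word is tagged B
        return ['B'] * n
    return ['B' if i < pun_index else 'P' if i == pun_index else 'A'
            for i in range(n)]

# e.g. for "Some diets cause a gut reaction" with the pun "gut" at index 4:
# bpa_tags(6, 4) -> ['B', 'B', 'B', 'B', 'P', 'A']
```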
"Neural models have shown their effectiveness on sequence labeling tasks (Chiu and Nichols, 2016; Ma and Hovy, 2016; Liu et al., 2018).", "In this work, we adopt bidirectional Long Short-Term Memory (BiLSTM) networks (Graves and Schmidhuber, 2005) topped with a Conditional Random Field (CRF) (Lafferty et al., 2001) layer to make labeling decisions, which is one of the classical models for sequence labeling.", "Our model architecture is illustrated in Figure 1 with a running example.", "Given a context/sentence $x = (x_1, x_2, \ldots, x_n)$, where $n$ is the length of the context, we generate the corresponding tag sequence $y = (y_1, y_2, \ldots, y_n)$ based on our designed tagging schemes and the original annotations for pun detection and location provided by the corpora.", "Our model is then trained on pairs of $(x, y)$.", "Input.", "The contexts in the pun corpus hold the property that each pun contains exactly one content word, which can be either a noun, a verb, an adjective, or an adverb.", "To capture this characteristic, we consider lexical features at the character level.", "Similar to the work of Liu et al. (2018), the character embeddings are trained by character-level LSTM networks on the unannotated input sequences.", "Nonlinear transformations are then applied to the character embeddings by highway networks (Srivastava et al., 2015), which map the character-level features into different semantic spaces.", "We also observe that a pun tends to appear at the end of a sentence.", "Specifically, based on the statistics, we found that sentences whose pun is located in the second half of the text account for around 88% and 92% of the homographic and heterographic datasets, respectively.", "We thus introduce a binary feature that indicates whether a word is located in the first or the second half of an input sentence, to capture such positional information.", "A binary indicator can be mapped to a vector representation using a randomly initialized embedding table (He et al., 2017; Wang and Lu, 2018).", "In this work, we directly adopt the value of the binary indicator as part of the input.", "The concatenation of the transformed character embeddings, the pre-trained word embeddings (Pennington et al., 2014), and the position indicators is taken as the input to our model (a small sketch of this input construction follows).", "(Word senses have also been shown helpful for the location of homographic puns (Cai et al., 2018); however, such information may not always be helpful for the location of heterographic puns, so we exclude such knowledge.)",
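A hedged sketch of the input construction follows: the transformed character features, the frozen pre-trained word embeddings, and the binary second-half indicator are concatenated per token. Dimensions and names are illustrative assumptions, not the paper's code.

```python
# Illustrative per-token input: [char features; word embedding; position bit].
import torch

def build_input(char_feats, word_embs):
    # char_feats: (n, d_char) from the char-LSTM + highway layers
    # word_embs:  (n, d_word) pre-trained GloVe vectors (kept frozen)
    n = char_feats.size(0)
    pos = torch.tensor([[1.0] if i >= n / 2 else [0.0] for i in range(n)])
    return torch.cat([char_feats, word_embs, pos], dim=-1)
```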
"Tagging.", "The input is then fed into a BiLSTM network, which captures contextual information.", "For a training instance $(x, y)$, we denote the output of the word-level BiLSTM by $Z = (z_1, z_2, \ldots, z_n)$.", "A CRF layer is adopted to capture label dependencies and make the final tagging decisions at each position; it has been included in many state-of-the-art sequence labeling models (Ma and Hovy, 2016; Liu et al., 2018).", "The conditional probability is defined as $P(y \mid x) = \frac{\prod_{i=1}^{n} \exp(W_{y_{i-1}, y_i} z_i + b_{y_{i-1}, y_i})}{\sum_{y' \in \mathcal{Y}} \prod_{i=1}^{n} \exp(W_{y'_{i-1}, y'_i} z_i + b_{y'_{i-1}, y'_i})}$, where $\mathcal{Y}$ is the set of all possible label sequences consisting of tags from {N, P} (or {B, P, A}), and $W_{y_{i-1}, y_i}$ and $b_{y_{i-1}, y_i}$ are the weight and bias parameters corresponding to the label pair $(y_{i-1}, y_i)$.", "During training, we minimize the negative log-likelihood summed over all training instances: $L = -\sum_i \log P(y^i \mid x^i)$, where $(x^i, y^i)$ refers to the $i$-th instance in the training set.", "During testing, we aim to find the optimal label sequence for a new input $x$: $y^{*} = \arg\max_{y \in \mathcal{Y}} P(y \mid x)$.", "This search can be done efficiently using the Viterbi algorithm; a minimal decoder sketch is given below.",
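A minimal Viterbi decoder is sketched below. For simplicity it separates per-position emission scores from a tag-transition matrix, whereas the CRF above scores label pairs jointly with $z_i$; this decomposition and all names are our assumptions, not the paper's implementation.

```python
# Illustrative Viterbi decoding over emission and transition scores.
import numpy as np

def viterbi(emissions, transitions):
    # emissions: (n, T) per-position tag scores; transitions: (T, T) pair scores.
    n, T = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((n, T), dtype=int)
    for i in range(1, n):
        cand = score[:, None] + transitions   # (T_prev, T_cur)
        back[i] = cand.argmax(axis=0)         # best previous tag per current tag
        score = cand.max(axis=0) + emissions[i]
    path = [int(score.argmax())]
    for i in range(n - 1, 0, -1):             # follow backpointers
        path.append(int(back[i][path[-1]]))
    return path[::-1]
```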
location on heterographic puns, our model's performance is slightly lower than the system of (Vechtomova, 2017), which is a rule-based locator.", "Compared to CRF, we can see that our model, either with the NP or the BPA scheme, yields significantly higher recall on both detection and location tasks, while the precisions are relatively close.", "This demonstrates the effectiveness of BiLSTM, which learns the contextual features of given texts such information appears to be helpful in recalling more puns.", "Compared to the NP scheme, the BPA tagging scheme is able to yield better performance on these two tasks.", "After studying outputs from these two approaches, we found that one leading source of error for the NP approach is that there exist more than one words in a single instance that are assigned with the P tag.", "However, according to the description of pun in (Miller et al., 2017), each context contains a maximum of one pun.", "Thus, such a useful structural constraint is not well captured by the simple approach based on the NP tagging scheme.", "On the other hand, by applying the BPA tagging scheme, such a constraint is properly captured in the model.", "As a result, the results for such a approach are significantly better than the approach based on the NP tagging scheme, as we can observe from the table.", "Under the same experimental setup, we also attempted to exclude word position features.", "Results are given by BPA p .", "It is expected that the performance of pun location drops, since such position features are able to capture the interesting property that a pun tends to appear in the second half of a sentence.", "While such knowledge is helpful for the location task, interestingly, a model without position knowledge yields improved performance on the pun detection task.", "One possible reason is that detecting whether a sentence contains a pun is not concerned with such word position information.", "Additionally, we conduct experiments over sentences containing a pun only, namely 1,607 and 1,271 instances from homographic and heterographic pun corpora separately.", "It can be regarded as a pipeline method where the classifier for pun detection is regarded as perfect.", "3 Following the prior work of (Cai et al., 2018), we apply 10-fold cross validation.", "Since we are given that all input sentences contain a pun, we only report accumulated results on pun location, denoted as Pipeline in Table 1.", "Compared with our approaches, the performance of such an approach drops significantly.", "On the other hand, such a fact demonstrates that the two task, detection and location of puns, can reinforce each other.", "These figures demonstrate the effectiveness of our sequence labeling method to detect and locate English puns in a joint manner.", "We studied the outputs from our system and make some error analysis.", "We found the errors can be broadly categorized into several types, and we elaborate them here.", "1) Low word coverage: since the corpora are relatively small, there exist many unseen words in the test set.", "Learning the representations of such unseen words is challeng-3 Under a pipeline setting, the first step is to detect if a sentence contains a pun.", "Then another algorithm is called to locate the exact pun word residing in the sentence if such a sentence is detected as the one containing a pun.", "In our setting, we assume the detection phase is perfect.", "In other words, all sentences containing a pun are exactly retrieved.", "ing, which affects the model's 
"Such errors contribute around 40% of the total errors made by our system.", "2) Detection errors: we found that many errors are due to the model's inability to make correct pun detections.", "Such inability harms both pun detection and pun location.", "Although our approach based on the BPA tagging scheme yields relatively higher scores on the detection task, we still found that 40% of the incorrectly predicted instances fall into this group.", "3) Short sentences: we found it challenging for our model to make correct predictions when the given text is short.", "Consider the example Superglue! Tom rejoined, where the word rejoined is the corresponding pun.", "However, it would be challenging to figure out the pun with such limited contextual information.", "Most existing systems address pun detection and location separately.", "Pedersen (2017) applied word sense knowledge to conduct pun detection.", "Indurthi and Oota (2017) trained a bidirectional RNN classifier for detecting homographic puns.", "Next, a knowledge-based approach is adopted to find the exact pun.", "Such a system is not applicable to heterographic puns.", "Doogan et al. (2017) applied Google n-grams and word2vec to make decisions.", "The phonetic distance via the CMU Pronouncing Dictionary is computed to detect heterographic puns.", "Pramanick and Das (2017) used a hidden Markov model and a cyclic dependency network with rich features to detect and locate puns.", "Mikhalkova and Karyakin (2017) used a supervised approach to pun detection and a weakly supervised approach to pun location, based on the position within the context and part-of-speech features.", "Vechtomova (2017) proposed a rule-based system for pun location that scores candidate words according to eleven simple heuristics.", "In the system known as UWAV (Vadehra, 2017), two separate systems are developed to conduct detection and location.", "The pun detector combines predictions from three classifiers.", "The pun locator considers the word2vec similarity between every pair of words in the context, as well as position, to pinpoint the pun.", "The state-of-the-art system for homographic pun location is a neural method (Cai et al., 2018), where word senses are incorporated into a bidirectional LSTM model.", "This method only supports the pun location task on homographic puns.", "Another line of research related to this work is sequence labeling, such as POS tagging, chunking, word segmentation and NER.", "Neural methods have shown their effectiveness on this task, such as BiLSTM-CNN (Chiu and Nichols, 2016), GRNN (Xu and Sun, 2016), LSTM-CRF (Lample et al., 2016), LSTM-CNN-CRF (Ma and Hovy, 2016), and LM-LSTM-CRF (Liu et al., 2018).", "In this work, we combine the pun detection and location tasks into a single sequence labeling problem.", "Inspired by the work of Liu et al. (2018), we also adopt an LSTM-CRF with character embeddings to make labeling decisions.", "In this paper, we propose to perform the pun detection and location tasks in a joint manner from a sequence labeling perspective.", "We observe that each text in our corpora contains a maximum of one pun.", "Hence, we design a novel tagging scheme to incorporate this constraint.", "Such a scheme guarantees that a maximum of one word will be tagged as a pun during the testing phase.", "We also found that interesting structural properties, such as the fact that most puns tend to appear in the second half of a sentence, can be helpful for this task, but they were not explored in previous works.",
"Furthermore, unlike many previous approaches, our approach, though simple, is generally applicable to both heterographic and homographic puns.", "Empirical results on the benchmark datasets demonstrate that the two tasks of pun detection and location can be effectively addressed by a single model from a sequence labeling perspective.", "Future research includes investigating how to make use of richer semantic and linguistic information for the detection and location of puns.", "Research on puns in other languages such as Chinese is still under-explored, which could also be an interesting direction for our future studies.", "We would like to thank the three anonymous reviewers for their thoughtful and constructive comments.", "This work is supported by Singapore Ministry of Education Academic Research Fund (AcRF) Tier 2 Project MOE2017-T2-1-156, and is partially supported by SUTD project PIE-SGP-AI-2018-01." ]
[ "abstain", "method", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "result", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "method", "method", "objective", "result", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "other", "other" ]
[ "The recent tremendous success of unsupervised word embeddings in a multitude of applications raises the obvious question of whether similar methods could be derived to improve embeddings (i.e., semantic representations) of word sequences as well.", "We present a simple but efficient unsupervised objective to train distributed representations of sentences.", "Our method outperforms the state-of-the-art unsupervised models on most benchmark tasks, highlighting the robustness of the produced general-purpose sentence embeddings.", "Improving unsupervised learning is of key importance for advancing machine learning methods, as it unlocks access to almost unlimited amounts of data to be used as training resources.", "The majority of recent success stories of deep learning do not fall into this category but instead rely on supervised training (in particular in the vision domain).", "A very notable exception comes from the text and natural language processing domain, in the form of semantic word embeddings trained unsupervised (Mikolov et al., 2013b,a; Pennington et al., 2014).", "Within only a few years from their invention, such word representations, which are based on a simple matrix factorization model as we formalize below, are now routinely trained on very large amounts of raw text data and have become ubiquitous building blocks of a majority of current state-of-the-art NLP applications.", "While very useful semantic representations are available for words, it remains challenging to produce and learn such semantic embeddings for longer pieces of text, such as sentences, paragraphs or entire documents.", "Even more so, it remains a key goal to learn such general-purpose representations in an unsupervised way.", "Currently, two contrary research trends have emerged in text representation learning: on one hand, a strong trend in deep learning for NLP leads towards increasingly powerful and complex models, such as recurrent neural networks (RNNs), LSTMs, attention models and even Neural Turing Machine architectures.", "While extremely strong in expressiveness, the increased model complexity makes such models much slower to train on larger datasets.", "On the other end of the spectrum, simpler shallow models such as matrix factorizations (or bilinear models) can benefit from training on much larger sets of data, which can be a key advantage, especially in the unsupervised setting.", "Surprisingly, for constructing sentence embeddings, naively using averaged word vectors was shown to outperform LSTMs (see Wieting et al. (2016b) for plain averaging, and Arora et al. (2017) for weighted averaging).",
"This example shows the potential of exploiting the trade-off between model complexity and the ability to process huge amounts of text using scalable algorithms, towards the simpler side.", "In view of this trade-off, our work here further advances unsupervised learning of sentence embeddings.", "Our proposed model can be seen as an extension of the C-BOW (Mikolov et al., 2013b,a) training objective to train sentence instead of word embeddings.", "We demonstrate that the empirical performance of our resulting general-purpose sentence embeddings very significantly exceeds the state of the art, while keeping the model simplicity as well as training and inference complexity exactly as low as in averaging methods (Wieting et al., 2016b; Arora et al., 2017), thereby also putting the work by Arora et al. (2017) in perspective.", "Contributions.", "The main contributions of this work can be summarized as follows: Model.", "We propose Sent2Vec, a simple unsupervised model allowing to compose sentence embeddings using word vectors along with n-gram embeddings, simultaneously training the composition and the embedding vectors themselves (all our code and pre-trained models will be made publicly available at http://github.com/epfml/sent2vec).", "Efficiency & Scalability.", "The computational complexity of our embeddings is only $O(1)$ vector operations per word processed, both during training and inference of the sentence embeddings.", "This strongly contrasts with all neural network based approaches, and allows our model to learn from extremely large datasets in a streaming fashion, which is a crucial advantage in the unsupervised setting.", "Fast inference is a key benefit in downstream tasks and industry applications.", "Performance.", "Our method shows significant performance improvements compared to the current state-of-the-art unsupervised and even semi-supervised models.", "The resulting general-purpose embeddings show strong robustness when transferred to a wide range of prediction benchmarks.", "Our model is inspired by simple matrix factorization models (bilinear models) such as those recently very successfully used in unsupervised learning of word embeddings (Mikolov et al., 2013b,a; Pennington et al., 2014; Bojanowski et al., 2017) as well as in supervised learning for sentence classification (Joulin et al., 2017).", "More precisely, these models can all be formalized as an optimization problem of the form $\min_{U, V} \sum_{S \in \mathcal{C}} f_S(U V \iota_S)$ (1) for two parameter matrices $U \in \mathbb{R}^{k \times h}$ and $V \in \mathbb{R}^{h \times |\mathcal{V}|}$, where $\mathcal{V}$ denotes the vocabulary; a toy sketch of this bilinear form is given below.", "Here, the columns of the matrix $V$ represent the learnt source word vectors, whereas those of $U$ represent the target word vectors.", "For a given sentence $S$, which can be of arbitrary length, the indicator vector $\iota_S \in \{0, 1\}^{|\mathcal{V}|}$ is a binary vector encoding $S$ (bag-of-words encoding).", "Fixed-length context windows $S$ running over the corpus are used in word embedding methods as in C-BOW (Mikolov et al., 2013b,a) and GloVe (Pennington et al., 2014).", "Here we have $k = |\mathcal{V}|$, and each cost function $f_S : \mathbb{R}^k \to \mathbb{R}$ only depends on a single row of its input, describing the observed target word for the given fixed-length context $S$.", "In contrast, for sentence embeddings, which are the focus of our paper here, $S$ will be entire sentences or documents (therefore of variable length).", "This property is shared with the supervised FastText classifier (Joulin et al., 2017), which however uses softmax with $k \ll |\mathcal{V}|$ being the number of class labels.",
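A toy numpy sketch of the bilinear form in Eq. (1) follows: the bag-of-words indicator selects and sums source vectors (columns of $V$), which $U$ then maps to target-word scores. Sizes are arbitrary and chosen only for illustration.

```python
# Toy illustration of the argument of f_S in Eq. (1).
import numpy as np

h, vocab = 4, 10                 # embedding dim, vocabulary size
U = np.random.randn(vocab, h)    # target embeddings (here k = |V|)
V = np.random.randn(h, vocab)    # source embeddings

iota_S = np.zeros(vocab)
iota_S[[1, 3, 7]] = 1.0          # binary bag-of-words encoding of sentence S

scores = U @ (V @ iota_S)        # U V iota_S: target-word scores for sentence S
```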
"Conceptually, the model can be interpreted as a natural extension of the word-contexts from C-BOW (Mikolov et al., 2013b,a) to a larger sentence context, with the sentence words being specifically optimized towards additive combination over the sentence, by means of the unsupervised objective function.", "Formally, we learn a source (or context) embedding $v_w$ and a target embedding $u_w$ for each word $w$ in the vocabulary, with embedding dimension $h$ and $k = |\mathcal{V}|$ as in (1).", "The sentence embedding is defined as the average of the source word embeddings of its constituent words, as in (2).", "We augment this model furthermore by also learning source embeddings for not only unigrams but also n-grams present in each sentence, and averaging the n-gram embeddings along with the words, i.e., the sentence embedding $v_S$ for $S$ is modeled as $v_S := \frac{1}{|R(S)|} V \iota_{R(S)} = \frac{1}{|R(S)|} \sum_{w \in R(S)} v_w$ (2), where $R(S)$ is the list of n-grams (including unigrams) present in sentence $S$.", "In order to predict a missing word from the context, our objective models the softmax output approximated by negative sampling following (Mikolov et al., 2013b).", "For the large number of output classes $|\mathcal{V}|$ to be predicted, negative sampling is known to significantly improve training efficiency, see also (Goldberg and Levy, 2014).", "Given the binary logistic loss function $\ell: x \mapsto \log(1 + e^{-x})$ coupled with negative sampling, our unsupervised training objective is formulated as follows: $\min_{U,V} \sum_{S \in \mathcal{C}} \sum_{w_t \in S} \big( \ell(u_{w_t}^{\top} v_{S \setminus \{w_t\}}) + \sum_{w' \in N_{w_t}} \ell(-u_{w'}^{\top} v_{S \setminus \{w_t\}}) \big)$, where $S$ corresponds to the current sentence and $N_{w_t}$ is the set of words sampled negatively for the word $w_t \in S$.", "The negatives are sampled following a multinomial distribution where each word $w$ is associated with a probability $q_n(w) := \sqrt{f_w} \,/ \sum_{w_i \in \mathcal{V}} \sqrt{f_{w_i}}$, where $f_w$ is the normalized frequency of $w$ in the corpus.", "(To sample negatives efficiently, a pre-processing table is constructed, containing the words in proportion to the square root of their corpus frequency; the negatives $N_{w_t}$ are then sampled uniformly at random from this table, excluding the target $w_t$ itself, following (Joulin et al., 2017; Bojanowski et al., 2017).)", "To select the possible target unigrams (positives), we use subsampling as in (Joulin et al., 2017; Bojanowski et al., 2017), each word $w$ being discarded with probability $1 - q_p(w)$, where $q_p(w) := \min\{1, \sqrt{t/f_w} + t/f_w\}$ and $t$ is the subsampling hyper-parameter.", "Subsampling prevents very frequent words from having too much influence in the learning, as they would introduce strong biases in the prediction task.", "With positive subsampling and respecting the negative sampling distribution, the precise training objective function becomes $\min_{U,V} \sum_{S \in \mathcal{C}} \sum_{w_t \in S} \big( q_p(w_t)\, \ell(u_{w_t}^{\top} v_{S \setminus \{w_t\}}) + |N_{w_t}| \sum_{w' \in \mathcal{V}} q_n(w')\, \ell(-u_{w'}^{\top} v_{S \setminus \{w_t\}}) \big)$ (3).", "2.2 Computational Efficiency: In contrast to more complex neural network based models, one of the core advantages of the proposed technique is the low computational cost for both inference and training.", "Given a sentence $S$ and a trained model, computing the sentence representation $v_S$ only requires $|S| \cdot h$ floating point operations (or $|R(S)| \cdot h$, to be precise, for the n-gram case, see (2)), where $h$ is the embedding dimension.", "The same holds for the cost of training with SGD on the objective (3), per sentence seen in the training corpus.", "Due to the simplicity of the model, parallel training is straightforward using parallelized or distributed SGD.", "Also, in order to store higher-order n-grams efficiently, we use the standard hashing trick, see e.g. (Weinberger et al., 2009), with the same hashing function as used in FastText (Joulin et al., 2017; Bojanowski et al., 2017).", "2.3 Comparison to C-BOW: C-BOW (Mikolov et al., 2013b,a) aims to predict a chosen target word given its fixed-size context window, the context being defined by the average of the vectors associated with the words at a distance less than the window-size hyper-parameter $ws$.", "While our system, when restricted to unigram features, can be seen as an extension of C-BOW where the context window includes the entire sentence, in practice there are a few important differences, as C-BOW uses important tricks to facilitate the learning of word embeddings.", "C-BOW first uses frequent-word subsampling on the sentences, deciding to discard each token $w$ with probability $q_p(w)$ or alike (small variations exist across implementations).", "Subsampling prevents the generation of n-gram features, and deprives the sentence of an important part of its syntactical features.", "It also shortens the distance between subsampled words, implicitly increasing the span of the context window.", "A second trick consists of using dynamic context windows: for each subsampled word $w$, the size of its associated context window is sampled uniformly between 1 and $ws$.", "Using dynamic context windows is equivalent to weighing by the distance from the focus word $w$ divided by the window size (Levy et al., 2015).", "This makes the prediction task local, and goes against our objective of creating sentence embeddings, as we want to learn how to compose all n-gram features present in a sentence.", "In the results section, we report a significant improvement of our method over C-BOW.", "2.4 Model Training: Three different datasets have been used to train our models: the Toronto book corpus (http://www.cs.toronto.edu/mbweb/), Wikipedia sentences and tweets.", "The Wikipedia and Toronto books sentences have been tokenized using the Stanford NLP library (Manning et al., 2014), while for tweets we used the NLTK tweets tokenizer (Bird et al., 2009).", "For training, we select a sentence randomly from the dataset and then proceed to select all the possible target unigrams using subsampling.", "We update the weights using SGD with a linearly decaying learning rate.", "Also, to prevent overfitting, for each sentence we use dropout on its list of n-grams $R(S) \setminus \{U(S)\}$, where $U(S)$ is the set of all unigrams contained in sentence $S$.", "After empirically trying multiple dropout schemes, we find that dropping $K$ n-grams ($n > 1$) for each sentence gives superior results compared to dropping each token with some fixed probability.", "This dropout mechanism would negatively impact shorter sentences.", "The regularization can be pushed further by applying L1 regularization to the word vectors.", "Encouraging sparsity in the embedding vectors is particularly beneficial for high dimension $h$.", "The additional soft thresholding in every SGD step adds negligible computational cost; see also Appendix B.", "We train two models on each dataset, one with unigrams only and one with unigrams and bigrams.", "All training parameters for the models are provided in Table 5 in the appendix.",
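The following sketch illustrates, under simplifying assumptions, the two computations just described: the averaged unigram-plus-bigram sentence vector of Eq. (2), with bigrams stored via the hashing trick, and one SGD step on the negative-sampling logistic loss of objective (3). It is a toy NumPy re-implementation, not the authors' C++ code; the dimensions, the uniform negative sampler and the learning rate are placeholders, and positive subsampling and n-gram dropout are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, BUCKETS, H = 1000, 5000, 50
V_src = rng.normal(scale=0.1, size=(VOCAB + BUCKETS, H))  # unigram + hashed bigram source vectors
U_tgt = np.zeros((VOCAB, H))                              # target word vectors

def context_ids(word_ids):
    ids = list(word_ids)                                  # unigrams index V_src directly
    for a, b in zip(word_ids, word_ids[1:]):              # bigrams stored via the hashing trick
        ids.append(VOCAB + hash((a, b)) % BUCKETS)
    return ids

def sentence_vector(ids):
    return V_src[ids].mean(axis=0)                        # Eq. (2): average of source embeddings

def sgd_step(word_ids, lr=0.05, n_neg=5):
    for pos, w_t in enumerate(word_ids):                  # each word in turn is the target
        ctx = context_ids(word_ids[:pos] + word_ids[pos + 1:])
        v = sentence_vector(ctx)                          # v_{S \ {w_t}}
        negs = rng.integers(0, VOCAB, size=n_neg)         # stand-in for the sqrt-frequency table
        for w, label in [(w_t, 1.0)] + [(int(n), 0.0) for n in negs]:
            u = U_tgt[w].copy()
            p = 1.0 / (1.0 + np.exp(-u @ v))              # sigmoid of the score; logistic loss
            U_tgt[w] -= lr * (p - label) * v
            V_src[ctx] -= lr * (p - label) * u / len(ctx) # gradient shared by the averaged sources

sgd_step([1, 2, 3, 4, 5])
```

Note how inference and the per-sentence training step both cost $O(|R(S)| \cdot h)$ operations, which is the efficiency claim made above.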
"Our C++ implementation builds upon the FastText library (Joulin et al., 2017; Bojanowski et al., 2017); we will make our code and pre-trained models available open-source.", "3 Related Work: We discuss existing models which have been proposed to construct sentence embeddings.", "While there is a large body of work in this direction, with several approaches using, e.g., labelled datasets of paraphrase pairs to obtain sentence embeddings in a supervised manner (Wieting et al., 2016a,b; Conneau et al., 2017), we here focus on unsupervised, task-independent models.", "While some methods require ordered raw text, i.e., a coherent corpus where the next sentence is a logical continuation of the previous sentence, others rely only on raw text, i.e., an unordered collection of sentences.", "Finally, we also discuss alternative models built from structured data sources.", "3.1 Unsupervised Models Independent of Sentence Ordering: The ParagraphVector DBOW model (Le and Mikolov, 2014) is a log-linear model which is trained to learn sentence as well as word embeddings, and then uses a softmax distribution to predict words contained in the sentence given the sentence vector representation.", "They also propose a different model, ParagraphVector DM, where they use n-grams of consecutive words along with the sentence vector representation to predict the next word.", "Lev et al. (2015) also presented an early approach to obtain compositional embeddings from word vectors.", "They use different compositional techniques, including static averaging or Fisher vectors of a multivariate Gaussian, to obtain sentence embeddings from word2vec models.", "Hill et al. (2016a) propose a Sequential (Denoising) Autoencoder, S(D)AE.", "This model first introduces noise in the input data: firstly, each word is deleted with probability $p_0$; then, for each non-overlapping bigram, words are swapped with probability $p_x$.", "The model then uses an LSTM-based architecture to retrieve the original sentence from the corrupted version.", "The model can then be used to encode new sentences into vector representations.", "In the case of $p_0 = p_x = 0$, the model simply becomes a Sequential Autoencoder.", "Hill et al. (2016a) also propose a variant (S(D)AE + embs.) in which the words are represented by fixed pre-trained word vector embeddings.", "Arora et al. (2017) propose a model in which sentences are represented as a weighted average of fixed (pre-trained) word vectors, followed by a post-processing step of subtracting the first principal component.", "Using the generative model of Arora et al. (2016), words are generated conditioned on a sentence discourse vector $c_s$: $Pr[w \mid c_s] = \alpha f_w + (1-\alpha) \frac{\exp(\tilde{c}_s^{\top} v_w)}{Z_{\tilde{c}_s}}$, where $Z_{\tilde{c}_s} := \sum_{w \in \mathcal{V}} \exp(\tilde{c}_s^{\top} v_w)$, $\tilde{c}_s := \beta c_0 + (1-\beta) c_s$, and $\alpha, \beta$ are scalars.", "$c_0$ is the common discourse vector, representing a shared component among all discourses, mainly related to syntax; it allows the model to better generate syntactical features.", "The $f_w$ term is here to enable the model to generate some frequent words even if their matching with the discourse vector $c_s$ is low.", "Therefore, this model tries to generate sentences as a mixture of three types of words: words matching the sentence discourse vector $c_s$, syntactical words matching $c_0$, and words with high $f_w$.", "Arora et al. (2017) demonstrated that, for this model, the MLE of $c_s$ can be approximated by $\sum_{w \in S} \frac{a}{f_w + a} v_w$, where $a$ is a scalar.", "The sentence discourse vector can hence be obtained by subtracting $c_0$, estimated by the first principal component of the $c_s$'s on a set of sentences.",
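As a point of reference, the weighted-average-plus-projection construction of Arora et al. (2017) described above can be sketched in a few lines; the toy vectors and frequencies below are stand-ins for pre-trained GloVe/PSL embeddings and real corpus statistics, and `sif_embeddings` is an illustrative name, not theirs.

```python
import numpy as np

def sif_embeddings(sentences, vectors, freqs, a=1e-3):
    # weighted average: each word contributes a / (f_w + a) of its vector
    emb = np.stack([
        np.mean([a / (freqs[w] + a) * vectors[w] for w in s], axis=0)
        for s in sentences
    ])
    # estimate c_0 as the first principal component and project it out
    _, _, vt = np.linalg.svd(emb, full_matrices=False)
    c0 = vt[0]
    return emb - np.outer(emb @ c0, c0)

rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=50) for w in ["a", "boy", "greets", "guy"]}
freqs = {"a": 0.05, "boy": 1e-4, "greets": 1e-5, "guy": 1e-4}
emb = sif_embeddings([["a", "boy"], ["a", "guy", "greets"]], vectors, freqs)
```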
"In other words, the sentence embeddings are obtained by a weighted average of the word vectors, stripping away the syntax by subtracting the common discourse vector and down-weighting frequent tokens.", "They generate sentence embeddings from diverse pre-trained word embeddings, among which are unsupervised word embeddings such as GloVe (Pennington et al., 2014) as well as supervised word embeddings such as paragram-SL999 (PSL) (Wieting et al., 2015) trained on the Paraphrase Database (Ganitkevitch et al., 2013).", "In a very different line of work, C-PHRASE (Pham et al., 2015) relies on additional information from the syntactic parse tree of each sentence, which is incorporated into the C-BOW training objective.", "Huang and Anandkumar (2016) show that single-layer CNNs can be modeled using a tensor decomposition approach.", "While building on an unsupervised objective, the employed dictionary-learning step for obtaining phrase templates is task-specific (for each use-case), not resulting in general-purpose embeddings.", "3.2 Unsupervised Models Depending on Sentence Ordering: The SkipThought model (Kiros et al., 2015) combines sentence-level models with recurrent neural networks.", "Given a sentence $S_i$ from an ordered corpus, the model is trained to predict $S_{i-1}$ and $S_{i+1}$.", "FastSent (Hill et al., 2016a) is a sentence-level log-linear bag-of-words model.", "Like SkipThought, it uses adjacent sentences as the prediction target and is trained in an unsupervised fashion.", "Using word sequences allows the model to improve over the earlier work of paragraph2vec (Le and Mikolov, 2014).", "Hill et al. (2016a) augment FastSent further by training it to predict the constituent words of the sentence as well.", "This model is named FastSent + AE in our comparisons.", "Compared to our approach, Siamese C-BOW (Kenter et al., 2016) shares the idea of learning to average word embeddings over a sentence.", "However, it relies on a Siamese neural network architecture to predict surrounding sentences, contrasting with our simpler unsupervised objective.", "Note that on the character-sequence level instead of word sequences, FastText (Bojanowski et al., 2017) uses the same conceptual model to obtain better word embeddings.", "This is most similar to our proposed model, with two key differences: firstly, we predict from source word sequences to target words, as opposed to character sequences to target words, and secondly, our model is averaging the source embeddings instead of summing them.", "3.3 Models Requiring Structured Data: DictRep (Hill et al., 2016b) is trained to map dictionary definitions of words to the pre-trained word embeddings of these words.", "They use two different architectures, namely BOW and RNN (LSTM), with the choice of learning the input word embeddings or using them pre-trained.", "A similar architecture is used by the CaptionRep variant, but here the task is the mapping of given image captions to a pre-trained vector representation of these images.", "4 Evaluation Tasks: We use a standard set of supervised as well as unsupervised benchmark tasks from the literature to evaluate our trained models, following (Hill et al., 2016a).", "The breadth of tasks allows us to fairly measure generalization to a wide range of different domains, testing the general-purpose quality (universality) of all competing sentence embeddings.",
"For downstream supervised evaluations, sentence embeddings are combined with logistic regression to predict target labels.", "In the unsupervised evaluation for sentence similarity, the correlation of the cosine similarity between two embeddings is compared to human annotators.", "Downstream Supervised Evaluation.", "Sentence embeddings are evaluated for various supervised classification tasks as follows.", "We evaluate paraphrase identification (MSRP) (Dolan et al., 2004), classification of movie review sentiment (MR) (Pang and Lee, 2005), product reviews (CR) (Hu and Liu, 2004), subjectivity classification (SUBJ) (Pang and Lee, 2004), opinion polarity (MPQA) (Wiebe et al., 2005) and question type classification (TREC) (Voorhees, 2002).", "To classify, we use the code provided by (Kiros et al., 2015) in the same manner as in (Hill et al., 2016a).", "For the MSRP dataset, containing pairs of sentences $(S_1, S_2)$ with an associated paraphrase label, we generate feature vectors by concatenating their Sent2Vec representations $|v_{S_1} - v_{S_2}|$ with the component-wise product $v_{S_1} \odot v_{S_2}$.", "The pre-defined training split is used to tune the L2 penalty parameter using cross-validation, and the accuracy and F1 scores are computed on the test set.", "For the remaining 5 datasets, Sent2Vec embeddings are inferred from input sentences and directly fed to a logistic regression classifier.", "Accuracy scores are obtained using 10-fold cross-validation for the MR, CR, SUBJ and MPQA datasets.", "For those datasets, nested cross-validation is used to tune the L2 penalty.", "For the TREC dataset, as for the MSRP dataset, the L2 penalty is tuned on the pre-defined train split using 10-fold cross-validation, and the accuracy is computed on the test set.", "Unsupervised Similarity Evaluation.", "We perform unsupervised evaluation of the learnt sentence embeddings using the sentence cosine similarity, on the STS 2014 (Agirre et al., 2014) and SICK 2014 (Marelli et al., 2014) datasets.",
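A hedged sketch of the MSRP pipeline just described: pairwise features are built by concatenating $|v_{S_1} - v_{S_2}|$ with the component-wise product, then fed to an L2-regularized logistic regression. The random embeddings below stand in for actual Sent2Vec outputs, and the cross-validated tuning of the penalty is left out.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_pairs, h = 200, 100
v1 = rng.normal(size=(n_pairs, h))          # Sent2Vec embeddings of first sentences (toy)
v2 = rng.normal(size=(n_pairs, h))          # ... and of second sentences (toy)
X = np.hstack([np.abs(v1 - v2), v1 * v2])   # |v_S1 - v_S2| concatenated with v_S1 * v_S2
y = rng.integers(0, 2, size=n_pairs)        # paraphrase labels (toy)

clf = LogisticRegression(C=1.0, max_iter=1000).fit(X, y)  # C controls the L2 penalty
```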
"These similarity scores are compared to the gold-standard human judgements using Pearson's $r$ (Pearson, 1895) and Spearman's $\rho$ (Spearman, 1904) correlation scores.", "The SICK dataset consists of about 10,000 sentence pairs along with relatedness scores of the pairs.", "The STS 2014 dataset contains 3,770 pairs, divided into six different categories on the basis of the origin of sentences/phrases, namely Twitter, headlines, news, forum, WordNet and images.", "In Tables 1 and 2, we compare our results with those obtained by (Hill et al., 2016a) on different models.", "Table 3 in the last column shows the dramatic improvement in training time of our models (and other C-BOW-inspired models) in contrast to neural network based models.", "All our Sent2Vec models are trained on a machine with 2x Intel Xeon E5-2680 v3 CPUs, 12 cores @ 2.5 GHz.", "Along with the models discussed in Section 3, this also includes the sentence embedding baselines obtained by simple averaging of word embeddings over the sentence, in both the C-BOW and skip-gram variants.", "TF-IDF BOW is a representation consisting of the counts of the 200,000 most common feature-words, weighed by their TF-IDF frequencies.", "To ensure coherence, we only include unsupervised models in the main paper.", "Performance of supervised and semi-supervised models on these evaluations can be observed in Tables 6 and 7 in the appendix.", "Downstream Supervised Evaluation Results.", "On running supervised evaluations and observing the results in Table 1, we find that on average our models are second only to SkipThought vectors.", "Also, both our models achieve state-of-the-art results on the CR task.", "We also observe that on half of the supervised tasks, our unigrams + bigrams model is the best model after SkipThought.", "Our models are weaker on the MSRP task (which consists of the identification of labelled paraphrases) compared to state-of-the-art methods.", "However, we observe that the models which perform very strongly on this task end up faring very poorly on the other tasks, indicating a lack of generalizability.", "On the rest of the tasks, our models perform extremely well.", "The SkipThought model is able to outperform our models on most of the tasks as it is trained to predict the previous and next sentences, and a lot of tasks are able to make use of this contextual information missing in our Sent2Vec models.", "For example, the TREC task is a poor measure of how one predicts the content of the sentence (the question) but a good measure of how the next sentence in the sequence (the answer) is predicted.", "Unsupervised Similarity Evaluation Results.", "In Table 2, we see that our Sent2Vec models are state-of-the-art on the majority of tasks when comparing to all the unsupervised models trained on the Toronto corpus, and clearly achieve the best averaged performance.", "Our Sent2Vec models also on average outperform or are at par with the C-PHRASE model, despite significantly lagging behind on the STS 2014 WordNet and News subtasks.", "This observation can be attributed to the fact that a big chunk of the data that the C-PHRASE model is trained on comes from English Wikipedia, helping it to perform well on datasets involving definitions and news items.", "Also, C-PHRASE uses data three times the size of the Toronto book corpus.", "Interestingly, our model outperforms C-PHRASE when trained on Wikipedia, as shown in Table 3, despite the fact that we use no parse tree information.",
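The unsupervised similarity protocol described at the start of this section reduces to a few lines of code; the embeddings and gold scores below are synthetic placeholders for real sentence pairs and human judgements.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
e1 = rng.normal(size=(50, 100))   # embeddings of the first sentence in each pair (toy)
e2 = rng.normal(size=(50, 100))   # embeddings of the second sentence in each pair (toy)
gold = rng.uniform(0, 5, size=50) # human similarity judgements (toy)

# cosine similarity per pair, compared to gold with Pearson's r and Spearman's rho
cos = np.sum(e1 * e2, axis=1) / (np.linalg.norm(e1, axis=1) * np.linalg.norm(e2, axis=1))
print(pearsonr(cos, gold)[0], spearmanr(cos, gold)[0])
```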
"Official STS 2017 benchmark.", "In the official results of the most recent edition of the STS 2017 benchmark (Cer et al., 2017), our model also significantly outperforms C-PHRASE, and in fact delivers the best unsupervised baseline method.", "(For the Siamese C-BOW model trained on the Toronto corpus, supervised evaluation as well as similarity evaluation results on the SICK 2014 dataset are unavailable.)", "[Table 1 fragment: columns are Data, Model, MSRP (Acc/F1), MR, CR, SUBJ, MPQA, TREC, Average; e.g., for unordered sentences (Toronto Books; 70 million sentences, 0.9 billion words), SAE scores 74.3/81.7, 62.6, 68.0, 86.1, 76.8, 80.2, average 74.7; further rows, such as SAE + embs., are not recoverable here.]", "Table 1: Comparison of the performance of different models on different supervised evaluation tasks.", "An underline indicates the best performance for the dataset.", "Top 3 performances in each data category are shown in bold.", "The average is calculated as the average of accuracy for each category (for MSRP, we take the accuracy).", "[Table 2 fragment: columns are Model; the STS 2014 subtasks News, Forum, WordNet, Twitter, Images, Headlines; SICK 2014 Test + Train; and Average; e.g., SAE scores .17/.16, .12/.12, .30/.23, .28/.22, .49/.46, .13/.11, .32/.31, average .26/.23.]", "Table 2: Unsupervised Evaluation Tasks: Comparison of the performance of different models on Spearman/Pearson correlation measures.", "An underline indicates the best performance for the dataset.", "Top 3 performances in each data category are shown in bold.", "The average is calculated as the average of entries for each correlation measure.", "Macro Average.", "To summarize our contributions on both supervised and unsupervised tasks, in Table 3 we present the results in terms of the macro average over the averages of both supervised and unsupervised tasks, along with the training times of the models.", "For unsupervised tasks, averages are taken over both Spearman and Pearson scores.", "The comparison includes the best performing unsupervised and semi-supervised methods described in Section 3.", "For models trained on the Toronto books dataset, we report a 3.8% points improvement over the state of the art.", "Considering all supervised, semi-supervised methods and all datasets compared in (Hill et al., 2016a), we report a 2.2% points improvement.", "We also see a noticeable improvement in accuracy as we use larger datasets like Twitter and Wikipedia.", "We furthermore see that the Sent2Vec models are faster to train when compared to methods like SkipThought and DictRep, owing to the SGD optimizer allowing a high degree of parallelizability.", "We can clearly see Sent2Vec outperforming other unsupervised and even semi-supervised methods.", "This can be attributed to the superior generalizability of our model across supervised and unsupervised tasks.", "Comparison with Arora et al. (2017).", "We also compare our work with Arora et al. (2017), who also use additive compositionality to obtain sentence embeddings.", "However, in contrast to our model, they use fixed, pre-trained word embeddings to build a weighted average of these embeddings using unigram probabilities.", "[Table 3 fragment: columns are Type, Training corpus, Method, Supervised average, Unsupervised average, Macro average, Training time (in hours); e.g., the first row covers the unsupervised Sent2Vec uni. model trained on Twitter (19.7B words).]", "Table 3: Best unsupervised and semi-supervised methods ranked by macro average along with their training times.", "** indicates trained on GPU.", "* indicates trained on a single node using 30 threads.", "Training times for non-Sent2Vec models are due to Hill et al. (2016a).",
"For CPU-based competing methods, we were able to reproduce all published timings (±10%) using the same hardware as for training Sent2Vec.", "Table 4: Comparison of the performance of the unsupervised and semi-supervised sentence embeddings by (Arora et al., 2017) with our models.", "Unsupervised comparisons are in terms of Pearson's correlation, while comparisons on supervised tasks state the average described in Table 1.", "While we couldn't find pre-trained state-of-the-art word embeddings trained on the Toronto books corpus, we evaluated their method using GloVe embeddings obtained from the larger Common Crawl corpus, which is 42 times larger than our Twitter corpus, greatly favoring their method over ours.", "In Table 4, we report an experimental comparison to their model on unsupervised tasks.", "In the table, the suffix W indicates that their down-weighting scheme has been used, while the suffix R indicates the removal of the first principal component.", "They report values of $a \in [10^{-4}, 10^{-3}]$ as giving the best results and used $a = 10^{-3}$ for all their experiments.", "We observe that our results are competitive with the embeddings of Arora et al. (2017) for purely unsupervised methods.", "It is important to note that the scores obtained from supervised task-specific PSL embeddings trained for the purpose of semantic similarity outperform our method on both SICK and average STS 2014, which is expected as our model is trained purely unsupervised.", "In order to facilitate a more detailed comparison, we also evaluated the unsupervised GloVe + WR embeddings on downstream supervised tasks and compared them to our Twitter models.", "To use Arora et al. (2017)'s method in a supervised setup, we precomputed and stored the common discourse vector $c_0$ using 2 million random Wikipedia sentences.", "On average, our models outperform their unsupervised models by a significant margin, despite the fact that they used GloVe embeddings trained on larger corpora than ours (42 times larger).", "Our models also outperform their semi-supervised PSL + WR model.", "This indicates our model learns a more precise weighting scheme than the static one proposed by Arora et al. (2017).", "Figure 1: Left figure: the profile of the word vector $L_2$-norms as a function of $\log(f_w)$ for each vocabulary word $w$, as learnt by our unigram model trained on Toronto books.", "Right figure: the down-weighting scheme proposed by Arora et al. (2017): weight$(w) = \frac{a}{a + f_w}$.",
"Despite being trained on three very different datasets, all of our models generalize well to sometimes very specific domains.", "Models trained on the Toronto corpus are state-of-the-art on the STS 2014 Images dataset, even beating the supervised CaptionRep model trained on images.", "We also see that the addition of bigrams to our models doesn't help much when it comes to unsupervised evaluations, but gives a significant boost in accuracy on supervised tasks.", "We attribute this phenomenon to the ability of bigram models to capture some non-compositional features missed by unigram models.", "Having a single representation for 'not good' or 'very bad' can boost the supervised model's ability to infer relevant features for the corresponding classifier.", "For semantic similarity tasks, however, the relative uniqueness of bigrams results in pushing sentence representations further apart, which can explain the average drop of scores for bigram models on those tasks.", "On learning the importance and the direction of the word vectors.", "Our model, by learning how to generate and compose word vectors, has to learn both the direction of the word embeddings as well as their norm.", "Considering the norms of the word vectors as used by our averaging over the sentence, we observe an interesting distribution of the importance of each word.", "In Figure 1 we show the profile of the $L_2$-norm as a function of $\log(f_w)$ for each $w \in \mathcal{V}$, and compare it to the static down-weighting mechanism of Arora et al. (2017).", "We can observe that our model is learning to down-weight frequent tokens by itself.", "It is also down-weighting rare tokens, and the norm profile seems to roughly follow Luhn's hypothesis (Luhn, 1958), a well-known information-retrieval paradigm stating that mid-rank terms are the most significant for discriminating content.", "In this paper, we introduce a novel, computationally efficient, unsupervised, C-BOW-inspired method to train and infer sentence embeddings.", "On supervised evaluations, our method, on average, achieves better performance than all other unsupervised competitors with the exception of SkipThought.", "However, SkipThought vectors show a very poor performance on sentence similarity tasks, while our model is state-of-the-art for these evaluations on average.", "Also, our model is generalizable, extremely fast to train, simple to understand and easily interpretable, showing the relevance of simple and well-grounded representation models in contrast to models using deep architectures.", "Future work could focus on augmenting the model to exploit data with ordered sentences.", "Furthermore, we would like to investigate the model's ability to use pre-trained embeddings for downstream transfer learning tasks.", "We are indebted to Piotr Bojanowski and Armand Joulin for helpful discussions.", "This project was supported by a Google Faculty Research Award." ]
[ "abstain", "objective", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "objective", "objective", "objective", "result", "objective", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result", "objective", "abstain", "objective", "other", "other" ]
[ "Recent work has shown that LSTMs trained on a generic language modeling objective capture syntax-sensitive generalizations such as long-distance number agreement.", "We have however no mechanistic understanding of how they accomplish this remarkable feat.", "Some have conjectured it depends on heuristics that do not truly take hierarchical structure into account.", "We present here a detailed study of the inner mechanics of number tracking in LSTMs at the single neuron level.", "We discover that long-distance number information is largely managed by two number units.", "Importantly, the behaviour of these units is partially controlled by other units independently shown to track syntactic structure.", "We conclude that LSTMs are, to some extent, implementing genuinely syntactic processing mechanisms, paving the way to a more general understanding of grammatical encoding in LSTMs.", "Schmidhu-ber, 1997), have been successfully applied to a variety of NLP tasks.", "This has spurred interest in whether these generic sequence-processing devices are discovering genuine structural properties of language in their training data, or whether their success can be explained by opportunistic surface-pattern-based heuristics.", "Until now, this debate has mostly relied on behavioural evidence: The LSTM had been treated as a black box, and its capacities had been indirectly inferred by its performance on linguistic tasks.", "In this study, we took a complementary approach inspired by neuroscience: We thoroughly investigated the inner dynamics of an LSTM language model performing a number agreement task, striving to achieve a mechanistic understanding of how it accomplishes it.", "We found that the LSTM had specialized two grandmother cells (Bowers, 2009) to carry number features from the subject to the verb across the intervening material.", "1 Interestingly, the LSTM also 1 In the neuroscientific literature, grandmother cells are (sets of) neurons coding for specific information,", "e.g., about your grandmother, in a non-distributed manner.", "possesses a more distributed mechanism to predict number when subject and verb are close, with the grandmother number cells only playing a crucial role in more difficult long-distance cases.", "Crucially, we independently identified a set of cells tracking syntactic structure, and found that one of them encodes the presence of an embedded phrase separating the main subject-verb dependency, and has strong efferent connections to the long-distance number cells, suggesting that the network relies on genuine syntactic information to regulate agreement-feature percolation.", "Our analysis thus provides direct evidence for the claim that LSTMs trained on unannotated corpus data, despite lacking significant linguistic priors, learn to perform structure-dependent linguistic operations.", "In turn, this suggests that raw linguistic input and generic memory mechanisms, such as those implemented in LSTMs, may suffice to trigger the induction of non-trivial grammatical rules.", "Starting with the seminal work of Linzen et al. (2016), a long-distance number agreement task has emerged as a standard way to probe the syntactic capabilities of neural language models.", "In the number agreement task, a model is asked to predict the verb in a sentence where the subject and main verb are separated by one or more intervening nouns (the boy near the cars greets . . . 
) and evaluated based on how often it predicts the right verb form.", "Following mixed initial results by Linzen and colleagues and Bernardy and Lappin (2017), Gulordava et al. (2018) and Kuncoro et al. (2018b) have robustly established that LSTM language models achieve near-human performance on the agreement task.", "While Gulordava and colleagues provided some evidence that the LSTMs are relying on genuine syntactic generalizations, Kuncoro et al. (2018a) and Linzen and Leonard (2018) suggested that the LSTM achievements can, at least in part, be accounted for by superficial heuristics (e.g., percolate the number of the first noun in a sentence).", "Other recent work has extended syntax probing to other phenomena such as negative polarity items and island constraints (Chowdhury and Zamparelli, 2018; Jumelet and Hupkes, 2018; Marvin and Linzen, 2018; Wilcox et al., 2018).", "While early work presented intriguing qualitative data showing cells that track grammatical number in a network directly trained on the agreement task, most of the following work focused on testing the network output behaviour, rather than on understanding how the latter follows from the inner representations of the network.", "Another research line studied linguistic processing in neural networks through 'diagnostic classifiers', that is, classifiers trained to predict a certain property from network activations (e.g., Gelderloos and Chrupała, 2016; Adi et al., 2017; Alain and Bengio, 2017; Hupkes et al., 2018).", "This approach may give insight into which information is encoded by the network in different layers or at different time points, but it only provides indirect evidence about the specific mechanics of linguistic processing in the network.", "Other studies are closer to our approach in that they attempt to attribute function to specific network cells, often by means of visualization (Karpathy et al., 2016; Li et al., 2016; Tang et al., 2017).", "Radford et al. (2017), for example, detected a sentiment grandmother cell in a language-model-trained network.", "Kementchedjhieva and Lopez (2018) recently found a character-level RNN to track morpheme boundaries in a single cell.", "We are however not aware of other studies systematically characterizing the processing of a linguistic phenomenon at the level of RNN cell dynamics, as is the attempt in the study hereby presented.", "Language Model: We study the pretrained LSTM language model made available by Gulordava et al. (2018).", "This model is composed of a 650-dimensional embedding layer, two 650-dimensional hidden layers, and an output layer with vocabulary size 50,000.", "The model was trained on Wikipedia data, without fine-tuning for number agreement, and obtained perplexity close to state of the art in the experiments of Gulordava et al.", "Number-Agreement Tasks: We complement analysis of the naturalistic, corpus-derived number-agreement test set of Linzen et al. (2016), in the version made available by Gulordava et al. (2018), with synthetically generated data-sets.",
"(Key findings reported below were also replicated with the same model trained with different initialization seeds and variations with different hyper-parameters.)", "Table 1 (NA tasks illustrated by representative singular sentences): Simple: the boy greets the guy; Adv: the boy probably greets the guy; 2Adv: the boy most probably greets the guy; CoAdv: the boy openly and deliberately greets the guy; NamePP: the boy near Pat greets the guy; NounPP: the boy near the car greets the guy; NounPPAdv: the boy near the car kindly greets the guy.", "Each synthetic number-agreement task (NA-task) instantiates a fixed syntactic structure with varied lexical material, in order to probe subject-verb number agreement in controlled and increasingly challenging setups.", "The different structures are illustrated in Table 1, where all forms are in the singular.", "Distinct sentences were randomly generated by selecting words from pools of 20 subject/object nouns, 15 verbs, 10 adverbs, 5 prepositions, 10 proper nouns and 10 location nouns.", "The items were selected so that their combination would not lead to semantic anomalies.", "For each NA-task, we generated singular and plural versions of each sentence.", "We refer to each such version as a condition.", "For NA-tasks that have other nouns occurring between subject and main verb, we also systematically vary their number, resulting in two congruent and two incongruent conditions.", "For example, the NounPP sentence in the table illustrates the congruent SS (singular-singular) condition, and the corresponding sentence in the incongruent PS (plural-singular) condition is: the boys near the car greet the guy.", "For all NA-tasks, each condition consisted of 600 sentences.", "Syntactic Depth Data-Set: We probed the implicit syntax-parsing abilities of the model by testing whether its representations predict the syntactic depth of the words they process.", "Following Nelson et al. (2017), this was operationalized as predicting the number of open syntactic nodes at each word, given the canonical syntactic parse of a sentence.",
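One plausible way to derive such per-word open-node counts is to walk a bracketed constituency parse and record the current depth at each terminal; the sketch below is an assumption about the annotation procedure for illustration, not the authors' released code, and the parse string and function name are hypothetical.

```python
def open_nodes_per_word(parse):
    # tokenize brackets, then track how many syntactic nodes are currently open
    toks = parse.replace("(", " ( ").replace(")", " ) ").split()
    depth, labels, after_open = 0, [], False
    for tok in toks:
        if tok == "(":
            depth += 1
            after_open = True
        elif tok == ")":
            depth -= 1
            after_open = False
        else:
            if after_open:
                after_open = False   # token right after "(" is a node label, not a word
            else:
                labels.append((tok, depth))
    return labels

# toy parse: every word here sits under two open nodes
print(open_nodes_per_word("(S (NP ten cousins) (VP are laughing))"))
```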
"We generated a data-set of sentences with unambiguous but varied syntactic structures and annotated them with the number of open nodes at each word.", "For example: Ten$_1$ really$_2$ ecstatic$_3$ cousins$_3$ of$_4$ four$_5$ teachers$_6$ are$_2$ quickly$_3$ laughing$_4$, where indexes show the corresponding number of open nodes.", "(We exclude, for the time being, agreement across a relative clause, as it comes with the further complication of accounting for the extra agreement process taking place inside the relative clause.)", "Since syntactic depth is naturally correlated with the position of a word in a sentence, we used a data-point sampling strategy to de-correlate these factors.", "For each length between 2 and 25 words, we randomly generated 300 sentences.", "From this set, we randomly picked examples uniformly covering all possible position-depth combinations within the 7-12 position and 3-8 depth ranges.", "The final data-set contains 4,033 positions from 1,303 sentences.", "4 Experiments: To successfully perform the NA-task, the LSTM should: (1) encode and store the grammatical number of the subject; and (2) track the main subject-verb syntactic dependency.", "The latter information is important for identifying the time period during which subject number should be stored, output and then updated by the network.", "This section describes the 'neural circuit' that encodes and processes this information in the LSTM.", "We first tested the performance of the LSTM on the Linzen data and on the NA-tasks in Table 1.", "Following Linzen et al. (2016) and later work, we computed the likelihood that the LSTM assigns to the main verb of each sentence given the preceding context and compared it to the likelihood it assigns to the wrong verb inflection.", "Accuracy in a given condition was measured as the proportion of sentences in this condition for which the model assigned a higher likelihood to the correct verb form than to the wrong one.", "Network performance is reported in Table 2 (right column, 'Full').", "We first note that our results on the Linzen NA-task confirm those reported in Gulordava et al. (2018).",
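The accuracy protocol just described amounts to a pairwise comparison of model scores; a minimal sketch follows, where `score` is a hypothetical stand-in for the language model's log-probability of a verb form given its left context, and the dummy scorer exists only to make the snippet runnable.

```python
def agreement_accuracy(examples, score):
    # count how often the correct inflection gets the higher model score
    hits = 0
    for context, correct_verb, wrong_verb in examples:
        if score(context, correct_verb) > score(context, wrong_verb):
            hits += 1
    return hits / len(examples)

# toy usage: a dummy scorer that happens to prefer "greets" after a singular subject
def score(context, verb):
    return 1.0 if ("boy " in context) == (verb == "greets") else 0.0

examples = [("the boy near the cars", "greets", "greet")]
print(agreement_accuracy(examples, score))  # 1.0 on this toy example
```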
"For the other NA-tasks, results show that some tasks and conditions are more difficult than others.", "For example, performance on the Simple (0-distance) NA-task is better than that on the CoAdv NA-task, which in turn is better than that of the nounPP tasks.", "Second, as expected, incongruent conditions (the number-mismatch conditions of namePP, nounPP and nounPPAdv) reduce network performance.", "(All our data-sets are available at: https://github.com/FAIRNS/Number_and_syntax_units_in_LSTM_LMs.)", "Third, for long-range dependencies, reliably encoding a singular subject across an interfering noun is more difficult than a plural subject: for both nounPP and nounPPAdv, PS is easier than SP.", "A possible explanation for this finding is that in English the plural form is almost always more frequent than the singular one, as the latter only marks third person singular, whereas the former is identical to the infinitive and other forms.", "Thus, if the network reverts to unigram probabilities, it will tend to prefer the plural.", "Looking for Number Units Through Ablation: Number information may be stored in the network in either a local, sparse, or a distributed way, depending on the fraction of active units that carry it.", "We hypothesized that if the network uses a local or sparse coding, meaning that there is a small set of units that encode number information, then ablating these units would lead to a drastic decrease in performance on the NA-tasks.", "To test this, we ablated each unit of the network, one at a time, by fixing its activation to zero, and tested on the NA-tasks.", "Two units were found to have an exceptional effect on network performance (Table 2, columns 776 and 988).", "Ablating them reduced network performance by more than 10% across various conditions, and, importantly, they were the only units whose ablation consistently brought network performance to around chance level in the more difficult incongruent conditions of the namePP, nounPP and nounPPAdv tasks.", "Moreover, the ablation effect depended on the grammatical number of the subject: ablating 776 significantly reduced network performance only if the subject was plural (P, PS or PP conditions) and 988 only if the subject was singular (S, SP or SS conditions).", "In what follows, we will therefore refer to these units as the 'plural' and 'singular' units, respectively, or long-range (LR) number units when referring to both.", "Finally, we note that although the Linzen NA-task contained mixed stimuli from many types of conditions, the plural unit was found to have a substantial effect on average on network performance.", "The singular unit didn't show a similar effect in this case, which highlights the importance of using carefully crafted stimuli, as in the nounPP and nounPPAdv tasks, for understanding network dynamics.", "Taken together, these results suggest a highly local coding scheme of grammatical number when processing long-range dependencies.", "Visualizing Gate and Cell-State Dynamics: To understand the functioning of the number units, we now look into their gate and state dynamics during sentence processing.", "We focus on the nounPP NA-task, which is the simplest NA-task that includes a long-range dependency with an interfering noun, in both SP and PS conditions.", "Recall the standard LSTM memory update and output rules (Hochreiter and Schmidhuber, 1997): $C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$ (1) and $h_t = o_t \odot \tanh(C_t)$ (2), where $f_t, i_t, o_t \in (0,1)$ are gating scalars computed by the network, and $\tilde{C}_t \in (-1,1)$ is an update candidate for the cell value.",
"(Units 1-650 belong to the first layer, 651-1300 to the second; all units detected by our analyses come from the latter.)", "[Figure 1 caption fragment: an idealized number unit, showing the desired gate and cell dynamics; the four conditions are represented with separate curves (pink for plural subject, light blue for singular, and dashed lines for incongruent conditions); gate and cell activity at time points unrelated to solving the NA-task are masked with white, as we do not make precise predictions for them.]", "The update rule of the LSTM cell has two terms (Eq. 1).", "(We abuse notation here, using the symbols denoting whole layers in equations (1) and (2) to denote the components of single cells.)", "In the first, $f_t C_{t-1}$, the forget gate controls whether to keep the previous cell content ($f_t = 1$: perfect remembering) or forget it ($f_t = 0$: complete forgetting).", "In the second, $i_t \tilde{C}_t$, the input gate controls whether the information currently presented to the network, as encoded by $\tilde{C}_t$, should be written onto the cell ($i_t = 1$: full access) or not ($i_t = 0$).", "The singular unit can thus use these gates to reliably store number information across long-range dependencies.", "Specifically, the unit can (enumeration follows the same order as the panels in Figure 1c): (1) encode subject number via $\tilde{C}_{t_{subject}}$, with different values for singular and plural; (2) open the input gate only when a singular subject is presented ($i_{t_{subject}} = 1$ in light-blue curves only) and protect it from interfering nouns ($i_t = 0$ for $t_{subject} < t < t_{verb}$); (3) at the same time, clear the cell from previously stored information ($f_{t_{subject}} = 0$) and then store subject number across the entire dependency ($f_t = 1$ for $t_{subject} < t < t_{verb}$); (4) this will result in stable encoding of subject number in the cell $C_t$ throughout the dependency; (5) finally, output subject number at the right moment, when predicting the verb form ($o_{t_{verb}-1} = 1$) (Eq. 2).", "Figures 1a and 1b present the actual gate and cell dynamics of the singular and plural units.", "Both units follow the general solution for reliable number storage described above.", "Note that for $\tilde{C}_t$ and $i_t$, and as a result also for $C_t$, the plural unit 'mirrors' the singular unit with respect to subject number (pink curves of PP and PS vs. light blue curves of SS and SP).",
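The storage solution enumerated above can be simulated with a single toy cell: the hand-set gate schedules below are assumptions for illustration, not learned values, but they show how holding the forget gate at 1 and the input gate at 0 keeps the subject's number in the cell state across the dependency.

```python
def run_cell(candidates, i_gate, f_gate):
    c, trace = 0.0, []
    for c_tilde, i, f in zip(candidates, i_gate, f_gate):
        c = f * c + i * c_tilde        # Eq. (1): C_t = f_t * C_{t-1} + i_t * C~_t
        trace.append(c)
    return trace

# "the boy near the car greets": write at the subject, then protect the cell
candidates = [0.0, -0.9, 0.5, 0.3, 0.7, 0.1]   # -0.9 encodes a singular subject (toy value)
i_gate     = [0.0,  1.0, 0.0, 0.0, 0.0, 0.0]   # input gate opens only at the subject
f_gate     = [0.0,  0.0, 1.0, 1.0, 1.0, 1.0]   # cleared at the subject, then remembered
print(run_cell(candidates, i_gate, f_gate))    # cell stays at -0.9 until the verb
```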
"This is in accordance with the results of the ablation experiments, which showed that ablating these units had an effect that depended on the grammatical number of the subject (Table 2).", "This provides complementary support for the identification of these units as 'singular' and 'plural'.", "A single divergence between the solution depicted in Figure 1c and the actual dynamics of the number units is that input-gate activity is small, but not zero, at the time step immediately following the subject.", "One speculative explanation is that this might be useful to process compound nouns.", "In these cases, subject number information is stored with the second noun, whereas in the case of simple nouns there is no 'risk' of encountering an interfering noun immediately after the subject, making the delay in closing the gate safe.", "The singular and plural units had emerged at the second layer of the network.", "This seems appropriate since number information needs to be directly projected to the output layer for correct verb-form prediction.", "Moreover, number-unit output should be projected differently to singular and plural verb forms in the output layer, only increasing activity in output units representing the suitable form.", "For example, for the singular unit, since singular subjects are encoded with a negative value ($C_{t_{verb}-1} < -1$ in Figure 1a), the more negative its efferent weights to singular verb forms in the output layer, the higher the probabilities of these verb forms would be.", "Figure 1d shows the efferent weights of the LR-number units to all verbs in our data-sets.", "We found that, indeed, the efferent weights to the singular and plural verb forms are segregated from each other, with weight signs that correspond to the negative encoding of subject number used by both singular and plural units.", "Two other arbitrary units, 651 and 1300, and the syntax unit 1150 to be described below (Section 4.3) do not have segregated efferent weights to verb forms, as expected.", "Performance on the easier NA-tasks (Simple, Adv, 2Adv) was not impaired by single-unit ablations.", "This suggests that number may be encoded also elsewhere in the network, perhaps via a more distributed code.", "To verify this, we tested whether subject number can be decoded from the whole pattern of activities in the network (excluding the two LR-number units) and whether this decoding is stable across time (see Giulianelli et al., 2018, for similar observations and related methods).", "We expected this distributed activity to track number in a small time window after the subject, but, unlike the LR-number units, to be affected by incongruent intervening nouns.", "We trained a linear model to predict the grammatical number of the subject from network activity in response to the presentation of the subject, and tested its prediction on test sets from all time points (King and Dehaene, 2014), in incongruent conditions only of the nounPP task.", "We used Area Under the Curve (AUC) to evaluate model performance.", "Figure 2 shows decoding across time of subject number from the cell activity of each number unit separately and from the cell activity of the entire network without these two units ('Full model minus LR-units').", "Results show that number information can be efficiently decoded from other units in the network, and that this information can be carried for several time steps (relatively high AUC up to the second determiner).",
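A sketch of this generalization-across-time analysis: a linear decoder is trained on activity at the subject time step, then tested on activity from every other step, scoring with AUC. The synthetic activations below, with one hand-planted number-carrying unit, are stand-ins for the recorded LSTM states; the array shapes and time indices are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_sent, n_steps, n_units = 400, 8, 50
acts = rng.normal(size=(n_sent, n_steps, n_units))  # [sentence, time, unit] recordings (toy)
number = rng.integers(0, 2, size=n_sent)            # subject number labels (0/1)
acts[:, 1:, 0] += 2.0 * number[:, None]             # toy unit carrying number from t=1 onward

train, test = np.arange(0, 200), np.arange(200, 400)
clf = LogisticRegression(max_iter=1000).fit(acts[train, 1], number[train])  # train at the subject (t=1)
auc_by_t = [roc_auc_score(number[test], clf.decision_function(acts[test, t]))
            for t in range(n_steps)]                # test the same decoder at every time point
```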
"However, the way in which these units encode number is sensitive to the last encountered noun, with AUC decreasing to zero around the second noun ('cars'), whereas test performance of the models trained on the cell activity of the LR-number units is consistently high.", "This confirms that number prediction is supported both by the LR-number units, and by distributed activation patterns of other short-range (SR) number units.", "The latter, however, are not syntax-sensitive, and simply encode the number of the last noun encountered.", "A full description of the SR-number units is beyond our scope.", "However, we note that 10 SR-number units in the second layer of the network were identified, which had efferent weights with a similar segregated structure as that of the LR units (Figure 1d).", "These units were indeed sensitive to the last encountered noun: subject number could be decoded from single-unit cell activity during its presentation (AUC > 0.9), but activity 'swaps' once an interfering noun appears (i.e., AUC decreases to zero in a generalization-across-time analysis).", "Finally, to validate the role of SR-number units in encoding number for easier NA-tasks, we ablated both SR and LR number units (12 in total) or SR units only (10 in total) and evaluated network performance on these NA-tasks.", "Both experiments resulted in a significant reduction in task performance compared to 1,000 random equi-size ablations ($p < 0.01$ in all 'easier' tasks).", "Intriguingly, we observed qualitatively that LR units are almost always making the right prediction, even when the network predicts the wrong number.", "The wrong outcome, in such cases, might be due to interference from the syntax-insensitive SR units.", "We leave the study of LR-SR unit interplay to future work.", "We saw how the input and forget gates of the LR-number units control the flow of subject-number information.", "It remains unclear, however, how the dynamics of these gates are controlled by the network.", "We hypothesized that other units in the network may encode information about the syntactic structure of the sentence, and thus about the subject-verb dependency.", "These units could then control and coordinate the opening and closing of the input and forget gates of the number units.", "To identify such 'syntax' units, we tested from which units syntactic information can be efficiently decoded.", "We used the depth of the syntactic tree as a proxy for syntactic structure (Nelson et al., 2017) and trained an L2-regularized regression model to predict syntactic tree-depth from the hidden-state activity of all units.", "In all experiments, we used the data presented in Section 3 above and performed a nested 5-fold cross-validation procedure.", "Word frequency, which was added as a covariate to the model, had a negligible effect on the results.", "Syntactic tree-depth was found to be efficiently decodable from network activity ($R^2$ on the test set $= 0.85 \pm 0.009$; covariate-corrected).", "A small subset of 'syntax' units had relatively high weights in the regression model (mean weight $= 7.6 \times 10^{-4}$, SD $= 7.86 \times 10^{-2}$; the cutoff for outlier weights was set to three SDs).",
"Since the interpretation of the regression weights may depend on possible correlations among the features, we also tested the causal effect of these units on NA-task performance.", "Ablating the syntax units together resulted in a significant performance reduction in NA-tasks that have an interfering noun: Linzen NA-task: $p = 0.024$, nounPPAdv-SP: $p = 0.011$, nounPPAdv-PS: $p = 0.034$, nounPP-SP: $p < 0.001$, and marginally significant in nounPP-PS: $p = 0.052$ (compared to 1000 random ablations of subsets of units of the same size).", "To gain further insight regarding the functioning of the syntax units, we next visualized their gate and cell dynamics during sentence processing.", "We found that the cell activity of unit 1150, which also had one of the highest weights in the regression model, was remarkably structured.", "The activity of this unit increases across the entire subject-verb dependency and drops abruptly right after.", "Figures 3a and 3b show the cell activity of this unit during the processing of stimuli from the 2Adv and nounPP tasks.", "We found the same dynamics in cases where another verb occurs between subject and main verb, as in subject relatives (Figure 3c), and in exceptionally long-distance dependencies with two interfering nouns and verbs (Figure 3d).", "Taken together, these results suggest that unit 1150 consistently encodes subject-verb dependencies in a syntax-sensitive manner.", "Other syntax units did not show an easily interpretable dynamics and had no clear interactions with the number units in the analysis discussed next.", "This suggests that they perform different syntactic, or possibly other, functions.", "We finally look at the connections that were learned by the LSTM between syntax unit 1150, which appears to be more closely involved in tracking subject-verb agreement, and the LR number units, as well as at the connections between the LR-number units themselves.", "For each unit pair, there are 4 connection types, one for each component of the target cell (to the 3 gates and to the update candidate).", "We focus on the input and forget gates, as they control the flow and storage of number information.", "Figures 4a and 4b show the distributions of all afferent recurrent weights to the input and forget gates of the LR-number units, scaled by the maximal activity $h_t$ of the pre-synaptic units during the nounPP task (this scaling evaluates the effective input to the units and did not change the conclusions described below).", "We found that the weights from the syntax unit to the forget gate of both 776 and 988 are exceptionally high in the positive direction compared to all other afferent connections in the network ($z$-score $= 8.1$ and $11.2$, respectively) and those to their input gates exceptionally negative ($z$-score $= -16.2$ and $-7.2$).",
"Since the cell activity of syntax unit 1150 is positive across the entire subject-verb dependency (e.g., Figure 3d), the connectivity from the syntax unit drives the number-unit forget gates towards one ($W^f_{776,1150} h_{1150} \gg 0$ and $W^f_{988,1150} h_{1150} \gg 0$ for $t_{subject} < t < t_{verb}$) and their input gates towards zero ($W^i_{776,1150} h_{1150} \ll 0$ and $W^i_{988,1150} h_{1150} \ll 0$).", "Looking at the right-hand side of Eq. (1), this means that the first term becomes dominant and the second vanishes, suggesting that, across the entire dependency, the syntax unit conveys a 'remember flag' to the number units.", "Similarly, when the activity of the syntax unit becomes negative at the end of the dependency, it conveys an 'update flag'.", "Last, we note that the reciprocal connectivity between the two LR-number units is always positive, to both input and forget gates (with $|z\text{-score}| > 3$ for the 776-to-988 direction).", "Since their activity is negative throughout the subject-verb dependency (Figures 1a and 1b), this means that they are mutually inhibiting, thus steering towards an unequivocal signal about the grammatical number of the subject to the output layer.", "We provided the first detailed description of the underlying mechanism by which an LSTM language-model performs long-distance number agreement.", "Strikingly, simply training an LSTM on a language-model objective on raw corpus data brought about single units carrying exceptionally specific linguistic information.", "Three of these units were found to form a highly interactive local network, which makes up the central part of a 'neural' circuit performing long-distance number agreement.", "One of these units encodes and stores grammatical number information when the main subject of a sentence is singular, and it successfully carries this information across long-range dependencies.", "Another unit similarly encodes plurality.", "These number units show that a highly local encoding of linguistic features can emerge in LSTMs during language-model training, as was previously suggested by theoretical studies of artificial neural networks (e.g., Bowers, 2009) and in neuroscience (e.g., Kutter et al., 2018).", "Our analysis also identified units whose activity correlates with syntactic complexity.", "These units, as a whole, affect performance on the agreement tasks.", "We further found that one of them encodes the main subject-verb dependency across various syntactic constructions.", "Moreover, the highest afferent weights to the forget and input gates of both LR-number units were from this unit.", "A natural interpretation is that this unit propagates syntax-based remember and update flags that control when the number units store and release information.", "Finally, number is also redundantly encoded in a more distributed way, but the latter mechanism is unable to carry information across embedded syntactic structures.", "The computational burden of tracking number information thus gave rise to two types of units in the network, encoding similar information with distinct properties and dynamics.", "The relationship we uncovered and characterized between syntax and number units suggests that agreement in an LSTM language-model cannot be entirely explained away by superficial heuristics, and the networks have, to some extent, learned to build and exploit structure-based syntactic representations, akin to those conjectured to support human-sentence processing.", "In future work, we intend to explore how
"In future work, we intend to explore how the encoding pattern we found varies across network architectures and hyperparameters, as well as across languages and domains.", "We would also like to investigate the time course over which the observed behaviour emerges during training.", "More generally, we hope that our study will inspire more analyses of the inner dynamics of LSTMs and other sequence-processing networks, complementing the currently popular black-box probing approach.", "Besides bringing about a mechanistic understanding of language processing in artificial models, this could inform work on human sentence processing.", "Indeed, our study yields particular testable predictions on brain dynamics, given that the computational burden of long-distance agreement remains the same for artificial and biological neural networks, despite implementation differences and the different data sizes required for language acquisition.", "We conjecture that a similar distinction between SR and LR units will be found in the human brain, as well as an interaction between syntax-processing and feature-carrying units such as the LR units, and we plan to test these predictions in future work.", "We would like to thank Kristina Gulordava, Jean-Remi King, Tal Linzen, Gabriella Vigliocco and Christophe Pallier for helpful feedback and comments on the work." ]
[ "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "In many settings it is important for one to be able to understand why a model made a particular prediction.", "In NLP this often entails extracting snippets of an input text responsible for' corresponding model output; when such a snippet comprises tokens that indeed informed the model's prediction, it is a faithful explanation.", "In some settings, faithfulness may be critical to ensure transparency.", "Lei et al. (2016) proposed a model to produce faithful rationales for neural text classification by defining independent snippet extraction and prediction modules.", "However, the discrete selection over input tokens performed by this method complicates training, leading to high variance and requiring careful hyperparameter tuning.", "We propose a simpler variant of this approach that provides faithful explanations by construction.", "In our scheme, named FRESH, arbitrary feature importance scores (e.g., gradients from a trained model) are used to induce binary labels over token inputs, which an extractor can be trained to predict.", "An independent classifier module is then trained exclusively on snippets provided by the extractor; these snippets thus constitute faithful explanations, even if the classifier is arbitrarily complex.", "In both automatic and manual evaluations we find that variants of this simple framework yield predictive performance superior to end-to-end' approaches, while being more general and easier to train.", "1 1 Introduction Neural models dominate NLP these days, but it remains difficult to know why such models make specific predictions for sequential text inputs.", "This problem has been exacerbated by the adoption of deep contextualized word representations, whose architectures permit arbitrary and interdependent 1 Code is available at https://github.com/ successar/FRESH interactions between all inputs, making it particularly difficult to know which inputs contributed to any specific prediction.", "Concretely, in a bidirectional RNN or Transformer model, the contextual embedding for a word at position j in instance x may encode information from any or all of the tokens at positions 1 to j -1 and j +1 to | x | .", "Consequently, continuous scores such as attention weights (Bah-danau et al., 2015) induced over these contextualized embeddings reflect the importance not of individual inputs, but rather of unknown interactions between all input tokens.", "This makes it misleading to present heatmaps of these scores over the original token inputs as an explanation for a prediction (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019; Serrano and Smith, 2019).", "The key missing property here is faithfulness (Lipton, 2018): An explanation provided by a model is faithful if it reflects the information actually used by said model to come to a disposition.", "In some settings the ability of a model to provide faithful explanations may be paramount.", "For example, without faithful explanations, we cannot know whether a model is exploiting sensitive features such as gender (Pruthi et al., 2020).", "We propose an approach to neural text classification that provides faithful explanations for predictions by construction.", "Following prior work in this direction (Lei et al., 2016), we decompose our model into independent extraction and prediction modules, such that the latter uses only inputs selected by the former.", "This discrete selection over inputs allows one to use an arbitrarily complex prediction network while still being able to guarantee that it uses only the 
extracted input features to inform its output.", "The main drawback to this rationalization approach has been the difficulty of training the two components jointly under only instance-level supervision (i.e., without token labels).", "[Figure 1: An example MultiRC instance (Query: What is the only difference between a reflection in a mirror and the actual image? | Answer: It is exactly the same | Label: False), with the passage about mirror reflections shown three times and rationales highlighted by a human annotator, by Lei et al., and by FRESH.]", "This has necessitated training the extraction module via reinforcement learning, namely REINFORCE (Williams, 1992), which exhibits high variance and is particularly sensitive to the choice of hyperparameters.", "Recent work (Bastings et al., 2019) has proposed a differentiable mechanism to perform binary token selection, 
but this relies on the reparameterization trick, which similarly complicates training.", "Methods using the reparameterization trick tend to zero out token embeddings, which may adversely affect training in transformer-based models, especially when one is not fine-tuning the lower layers of the model due to resource constraints, as in our experiments.", "To avoid the complexity inherent to training under a remote supervision signal, we introduce Faithful Rationale Extraction from Saliency tHresholding (FRESH), which disconnects the training regimes of the extractor and predictor networks, allowing each to be trained separately.", "We still assume only instance-level supervision; the trick is to define a method of selecting snippets from inputs, rationales (Zaidan et al., 2007), that can be used to support prediction.", "Here we propose using arbitrary feature importance scoring techniques to do so.", "Notably, these need not satisfy the 'faithfulness' criterion.", "In this paper we evaluate variants of FRESH that use attention (Bahdanau et al., 2015) and gradient methods (Li et al., 2016; Simonyan et al., 2014) as illustrative feature scoring mechanisms.", "These provide continuous scores for features; we derive discrete rationales from them using simple heuristics.", "An independent network then uses only the extracted rationales to make predictions.", "Disconnecting the training tie between the independent rationale extractor and prediction modules means that FRESH is faithful by construction: The snippet that is ultimately used to inform a prediction can be presented as a faithful explanation because this was the only text available to the predictor.", "In contrast to prior discrete rationalization methods, FRESH greatly simplifies training, and can accommodate any feature importance scoring metric.", "In our experiments, we also find that it yields superior predictive performance.", "In addition to being faithful (and affording strong predictive performance), extracted rationales would ideally be intuitive to humans, i.e., plausible.", "To evaluate this we run a small user study (Section 8) in which humans both evaluate the readability of extracted rationales and attempt to classify instances based on them, effectively serving as a prediction module in the FRESH framework.", "An example illustrating this property is presented in Figure 1.", 
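As one illustration of the gradient-based scoring that FRESH can consume, here is a generic sketch in the spirit of the cited gradient methods: the norm of the gradient of the predicted-class score with respect to each token embedding. This is our own sketch, not the authors' code; the model(embeddings) interface returning a vector of class scores is an assumption.

import torch

def gradient_saliency(model, embeddings, label_idx):
    # embeddings: (seq_len, dim) token embeddings for one instance
    embeddings = embeddings.clone().requires_grad_(True)
    score = model(embeddings)[label_idx]  # scalar score for the predicted class
    score.backward()
    return embeddings.grad.norm(dim=-1)   # one importance score per token

Any such scoring function works here precisely because FRESH does not require the scores themselves to be faithful; only the downstream predictor's input is restricted.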
"2 Related Work. Types of explainability.", "Lipton (2018), Doshi-Velez and Kim (2017), and Rudin (2019) provide overviews of definitions and characterizations of interpretability.", "Lertvittayakumjorn and Toni (2019) classify three possible uses of text explanations: (i) revealing model behavior, (ii) justifying model predictions, and (iii) helping humans investigate uncertain predictions.", "Attempting to guarantee the faithfulness of a feature selection or explanation generation method is a more challenging question than finding explanations which humans find acceptable (Rudin, 2019).", "But the benefits of developing such methods are profound: Faithful explanations provide a means to reveal model behavior.", "(Footnote: The approach proposed by Lei et al. (2016) was very recently extended in Yu et al. (2019b), in which a third component (in addition to gen and enc) was introduced to, in part, encourage comprehensiveness of extracted rationales. However, the basic model and optimization procedure remain the same as in Lei et al. (2016).)", "Another advantage to this approach is in the potential of using the lighter pred model as a replacement for supp, both in an inference scenario where it can consume fewer tokens and act faster, and in a large-scale training mode where it can consume more instances at a more efficient rate, once we have faith in ext.", 
"In a computer-aided human classification system, this difference can become vital, as humans take substantially longer to read full documents and produce predictions than if provided rationales, and their time tends to cost significantly more than that of equivalent compute.", "[Figure 2: A schematic of the FRESH approach. (1) The first model, supp, is trained end-to-end for prediction but used only to 'importance score' features. These scores can be derived via any method, e.g., gradients or attention, and are not required to faithfully explain model outputs. Scores are heuristically discretized into binary labels. (2) An extraction module ext may be a parameterized sequence tagging model trained on the pseudo-targets derived in (1), or heuristics over importance scores directly, creating a new dataset comprising pairs of extracted rationales and labels. (3) This new dataset is used to train a final classifier, pred, which only ever sees rationales.]", "Issues with current explainability methods in NLP.", "A recent line of work in NLP has begun to critically examine the use of certain methods for constructing 'heatmaps' over input tokens to explain predictions.", "In particular, existing feature attribution methods may not provide robust, faithful explanations (Feng et al., 2018; Jain and Wallace, 2019; Wiegreffe and Pinter, 2019; Serrano and Smith, 2019; Brunner et al., 2020; Zhong et al., 2019; Pruthi et al., 2020).", "Kim et al. (2016) state that a method is interpretable if a user can correctly predict the method's result; they conducted user studies to test this.", "In a similar plausibility vein, others have proposed testing whether humans like rationales (Ehsan et al., 2018, 2019).", "We follow these efforts by eliciting human judgments on rationales, although we view plausibility as a secondary aim here.", "Wiegreffe and Pinter (2019) argue for classifying model interpretability into two groups: faithfulness and plausibility.", "Lei et al. (2016) note that a desirable set of criteria for rationales is that they are sufficient, short, and coherent.", "Yu et al. 
(2019) extend these criteria by additionally arguing for comprehensiveness, which dictates that a rationale should contain all relevant and useful information.", "Prior efforts (Lei et al., 2016; Yu et al., 2019; Bastings et al., 2019) have proposed methods that produce faithful explanations via a two-model setup, defining a generator network that imposes hard attention over inputs and then passes these to a second model for prediction.", "Yu et al. (2019) extend this by adding a third adversarial model into the framework.", "These models are trained jointly, which is difficult because hard attention is discrete and necessitates recourse to reinforcement learning, i.e., REINFORCE (Williams, 1992), or the reparameterization trick (Bastings et al., 2019).", "We now propose FRESH, our framework for training explainable neural predictors.", "We begin by describing the two-model, discrete rationale selection approach introduced by Lei et al. (2016) (Section 3.1), which serves as the starting point for our framework, detailed in Section 4.", "Consider a standard text classification setup in which we have $n$ input documents $X = \{x_1, \ldots, x_n\}$, $x_i \in V^{l_i}$, where $l_i$ denotes the number of tokens in document $x_i$ and $V$ the vocabulary, and their assigned labels $y = \{y_1, \ldots, y_n\}$, $y_i \in Y$.", "Lei et al. propose a model comprising a generator (gen) and an encoder (enc).", "gen is tasked with extracting rationales from inputs $x_i$, formalized as a binary mask over tokens sampled from a Bernoulli distribution: $z_i \sim \mathrm{gen}(x_i) \in \{0, 1\}^{l_i}$.", "enc makes predictions $\hat{y} = \mathrm{enc}(x_i, z_i)$ on the basis of the unmasked tokens.", "$\min_{\mathrm{enc}, \mathrm{gen}} \sum_{i=1}^{n} \mathbb{E}_{z_i \sim \mathrm{gen}(x_i)} L(\mathrm{enc}(x_i, z_i), y_i)$ (1)", "This objective (1) is difficult to optimize as it requires marginalizing over all possible rationales $z$.", "Parameter estimation is therefore performed via an approximation approach that entails drawing samples from gen(x) and averaging their associated gradients during the learning process.", "Lei et al. (2016) found that this REINFORCE-style estimation works well for rationale extraction, but may have high variance as a result of the large state space of possible rationales under consideration, which is difficult to explore efficiently.", "The loss function $L$ used by Lei et al. (2016) is a squared $\ell_2$ loss between the prediction $\mathrm{enc}(x, z)$ and the reference label $y$, with added regularization terms placed on the binary mask $z$ to encourage rationale conciseness and contiguity.", 
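For intuition about why the REINFORCE-style estimation described above is brittle, here is a minimal sketch of one such training step; it is our own illustration with placeholder names, not the original implementation. The sampled discrete mask blocks ordinary backpropagation, so the generator is trained through a score-function surrogate whose gradients exhibit high variance.

import torch

def reinforce_step(gen_logits, enc_loss_fn, x, y):
    # gen_logits: per-token logits produced by the generator for instance x
    dist = torch.distributions.Bernoulli(logits=gen_logits)
    z = dist.sample()                     # discrete rationale mask
    loss = enc_loss_fn(x, z, y)           # scalar encoder loss on masked input
    # REINFORCE surrogate: the detached loss reweights the log-probability
    # of the sampled mask, giving a gradient estimate for the generator.
    surrogate = loss.detach() * dist.log_prob(z).sum()
    return loss + surrogate               # backprop trains enc and gen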
"We modify the conciseness term so that the model is not penalized as long as a predefined desired rationale length $d$ has not been passed: $\Omega(z) = \lambda_1 \underbrace{\max\left(0, \frac{|z|}{L} - d\right)}_{\text{conciseness}} + \lambda_2 \underbrace{\sum_t \frac{|z_t - z_{t-1}|}{L-1}}_{\text{contiguity}}$ (2).", "To avoid recourse to REINFORCE, we introduce FRESH, in which we decompose the original prediction task into three sub-components, each with its own independent model.", "These are the support model supp, the rationale extractor model ext, and the classifier pred.", "We train supp end-to-end to predict $y$, using its outputs only to extract continuous feature importance scores from instances in $X$.", "These scores are binarized by ext, either using a parameterized model trained on the output scores, or via direct discretization heuristics.", "Finally, pred is trained (and tested) only on text provided by ext.", "This is in contrast to standard end-to-end neural classifiers that induce soft importance scores over contextualized (hence entangled) representations of inputs; as discussed above, because such representations may include information about other inputs, these are not necessarily faithful.", "Figure 2 depicts this proposed framework.", "A central advantage of our decomposed setup lies in the arbitrariness of the rationale extraction mechanism.", "(Footnote 2: This is the most general framing, but in fact supp and ext may be combined by effectively defining ext as an application of heuristics to extract snippets on the basis of scores provided by supp; any means of procuring 'importance' scores for the features comprising instances and converting these to extracted snippets to pass to pred will suffice.)", "Any function over supp's predictions that assigns scores to the input tokens intended to quantify their importance can serve as an input to ext.", "Note that this means even post-hoc scoring models (applied after the model has completed training) are permissible.", "Examples of such functions include gradient-based methods and LIME (Ribeiro et al., 2016).", "Notably, the importance scoring function need not faithfully identify features that actually informed the predictions from supp.", "This means, e.g., that one is free to use token-level attention (over contextualized representations); the final rationales provided by FRESH will nonetheless remain faithful with respect to pred.", "The importance scores are used only to train ext heuristically, for example by treating the top-k tokens (with respect to importance scores) for a given example as the target rationale.", "The key design decision here is devising heuristics that map continuous importance scores to discrete rationales.", "Any strategy for this will likely involve trading conciseness (shorter rationales) against performance (greater predictive accuracy).", "For explainability, we can present users with the snippet(s) that pred used to make a prediction as an explanation (from ext), and we can be certain that the only tokens that contributed to the prediction made by pred are those included in this text.", "In addition to transparency, this framework may afford efficiency gains in settings in which humans are tasked with classifying documents; in this case we can use ext to present only the (short) relevant snippets.", "Indeed, we use exactly this approach as one means of evaluation in Section 8.", "The high-level framework described above requires making several design choices to operationalize; we propose and evaluate a set of such choices in this work, detailed below.", 
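The three-module decomposition can be summarized in a few lines of schematic pseudocode. This is our own sketch of the pipeline, not the released implementation; train_classifier, importance_scores, and discretize are placeholder names, and discretize is assumed to return a binary mask aligned with the tokens.

def fresh_train(X, y, importance_scores, discretize, train_classifier):
    # (1) Support model: trained end-to-end, used only to score features.
    supp = train_classifier(X, y)
    # (2) Extraction: continuous scores -> binary token mask (the rationale).
    masks = [discretize(importance_scores(supp, x)) for x in X]
    X_r = [[tok for tok, keep in zip(x, m) if keep]
           for x, m in zip(X, masks)]
    # (3) Classifier: trained (and tested) only on the extracted rationales.
    pred = train_classifier(X_r, y)
    return pred, masks

Because pred never observes anything outside the mask, whatever it predicts is, by construction, explained by the rationale alone.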
"Specifi-cally, we must specify a feature importance scoring mechanism for supp (Section 5.1), and a strategy for inducing discrete targets from these continuous scores (5.2).", "In addition, we need to specify a trained or heuristic extractor architecture ext .", "In this work, all instances of pred exploit BERT-based representations.", "3 3 For fair comparison, we have modified all baselines (Lei et al., 2016; Bastings et al., 2019) to similarly capitalize on BERT-based representations.", "All models considered in this work are based on Bidirectional Encoder Representations from Transformer (BERT) encoders (Devlin et al., 2019) and its variants, namely RoBERTa (Liu et al., 2019) and SciBERT (Beltagy et al., 2019); see Appendix B for more details.", "For sake of brevity, we simply refer to all of these as BERT from here on.", "We define supp as a BERT encoder that consumes either a single input (in the case of standard classification) or two inputs (e.g., in the case of question answering tasks) separated by the standard [SEP] token.", "While we emphasize that the proposed framework can accommodate arbitrary input feature scoring mechanisms, we consider only a few obvious variants here, leaving additional exploration for future work.", "Specifically, we evaluate attention scores (Bahdanau et al., 2015) and input gradients (Li et al., 2016; Simonyan et al., 2014).", "Attention scores are taken as the self-attention weights induced from the [CLS] token index to all other indices in the penultimate layer of supp ; this excludes weights associated with any special tokens added.", "BERT uses wordpiece tokenization; to compute a score for a token, we sum the self-attention weights assigned to its constituent pieces.", "BERT is also multi-headed, and so we average scores over heads to derive a final score.", "A necessary step in our framework consists of mapping from the continuous feature scores provided by supp to discrete labels, or equivalently, mapping scores to rationales which will either be consumed directly by pred or be used to train a sequence tagging model ext .", "We consider a few heuristic strategies for performing this mapping.", "Contiguous.", "Select the span of length k that corresponds to the highest total score (over all spans of length k ).", "We call these rationales contiguous .", "Topk .", "Extract as a rationale the topk tokens (with respect to importance scores) from a document, irrespective of contiguity (each word is treated independently).", "We refer to these rationales as non-contiguous .", "These strategies may be executed per-instance or globally (across an entire dataset), reflecting the flexibility of FRESH.", "Empirically, per-instance and global approaches performed about the same; Doc.", "we report results for the simpler, per-instance approaches (additional results in Appendix E).", "We experiment with two variants of ext .", "The first is simply direct use of the importance scores provided by supp and discretization heuristics over these; this does not require training an explicit ext model.", "We also consider a parameterized extractor model that independently makes token-wise predictions from BERT representations.", "Using an explicit extraction model allows us to mix in direct supervision on rationales alongside the pseudo-targets derived heuristically from supp .", "Tying the sequential token predictions made by ext via a Conditional Random Field (CRF) layer (Lafferty et al., 2001) may further improve performance, but we leave this for future work.", 
"Stanford Sentiment Treebank (SST) (Socher et al., 2013).", "Sentences labeled with binary sentiment (neutral sentences have been removed).", "Evidence Inference (Lehman et al., 2019).", "Biomedical articles describing randomized controlled trials.", "The task is to infer the reported relationship between a given intervention and comparator with respect to an outcome, and to identify a snippet within the text that supports this.", "The original dataset comprises lengthy full-text articles; we use an abstract-only subset of this data.", "reviews labeled for sentiment accompanied by rationales on dev and test sets (DeYoung et al., 2020).", "MultiRC (Khashabi et al., 2018).", "Passages and questions associated with multiple correct answers.", "Following DeYoung et al. (2020), we convert this to a binary classification task where the aim is to categorize answers as True or False based on a supporting rationale.", "For datasets where human rationale annotations are available, we set k to the average human rationale annotation length, rounded to the nearest ten percent.", "For the rest, we set k = 20% .", "For generality, all models considered may consume both queries and texts, as is required for MultiRC and Evidence Inference.", "Rationales can be extracted from only from the text; this typically dominates the query in length, and is more informative in general.", "Further implementation details (including hyperparameters) are provided in Appendix A. Hyperparameter sensitivity and variance.", "To achieve conciseness and contiguity, Lei et al. (2016) impose a regularizer on the encoder that comprises two terms (Equation 2) with associated hyperparameters ( 1 , 2 ).", "In practice, we have found that one needs to perform somewhat extensive hyperparameter search for this model to realize good performance.", "This is inefficient both in the sense of being time-consuming, and in terms of energy (Strubell et al., 2019).", "By contrast, FRESH requires specifying and training independent module components, which incurs some energy cost.", "But there are no additional hyperparameters, and so FRESH does not require extensive hyperparameter search, which is typically the most energy-intensive aspect of model training.", "We quantify this advantage by reporting the variances over different hyperparameters we observed for (Lei et al., 2016) and the compute time this required to conduct this search in Appendix B. In addition to being sensitive to hyperparameters, a drawback of REINFORCE-style training is that it can exhibit high variance within a given hyperparameter setting.", "To demonstrate this, we report the variance in performance of our proposed approach and of Lei et al. (2016) as observed over five different random seeds.", "We also find that both Lei et al. (2016) and Bastings et al. (2019) tend to degenerate and predict either complete or empty text as rationale.", "To make results comparable to FRESH, at inference time, we restrict the rationale to specified desired length k before passing it to the corresponding classifier.", "We first evaluate the performance achieved on datasets by the pred models trained on different ext -extracted rationales, compared to each other and to Lei et al. (2016)'s end-to-end rationale extraction framework.", "As an additional baseline, we also evaluate a variant of the differentiable binary variable model proposed in Bastings et al. 
2019.", "This baseline do not require any hyperparameter search.", "In general, we would expect predictive performance to positively correlate with rationale length, and so we evaluate predictive performance (accu-racy or F1-score) across methods using a fixed rationale length for each dataset.", "use the entire train sets for the respective datasets, and fix the rationale length as described in 6.2 to ensure fair comparison across methods.", "We observe that despite its simplicity, FRESH performs nearly as well as Full text while using only 10-30% of the original input text, thereby providing transparency.", "FRESH achieves better average performance than Lei et", "al.'s end-to-end method, with the exception of AGNews, in which case the models are comparable.", "It also consistently fares better than Bastings et", "al.'s system.", "Of the two feature scoring functions considered, [CLS] self-attention scores tend to yield better results, save for on the MultiRC and Movies datasets, on which gradients fare better.", "With respect to discretizing feature scores, the simple topk strategy seems to perform a bit better than the contiguous heuristic, in what we expect to be traded off against a greater coherence of the contiguous rationales.", "As seen in Table 2, FRESH exhibits lower variance across runs, and does not require hyperparameter search (further analysis in Appendix B).", "Varying rationale length.", "Figure 3 plots F1 scores across datasets and associated standard deviations achieved by the best rationale variant of Lei et al. (2016) and FRESH at two different target rationale lengths.", "These results demonstrate the effectiveness of FRESH even in constrained settings.", "Note, we had to re-perform hyperparameter search for a different rationale length in case of (Lei et al., 2016) model.", "Incorporating human rationale supervision.", "In some settings it may be feasible to elicit direct supervision on rationales, at least for a subset of 0.0 0.2 0.5 1.0 0.50 0.75 M a c r o F 1 Ev.", "training examples.", "Prior work has exploited such signal during training (Zhang et al., 2016; Strout et al., 2019; Small et al., 2011).", "One of the potential advantages of explicitly training the extraction model ext with pseudo-labels for tokens (de-rived from heuristics over importance scores) is the ability to mix in direct supervision on rationales alongside these derived targets.", "We evaluate whether direct rationale supervision improves performance on two datasets for which we have human rationale annotations (Ev-idence Inference and MultiRC).", "In both cases we provide models with varying amounts of rationale-level supervision (0%, 20%, 50% and 100%), and again compare the best variants of Lei et al. (2016) and our model.", "For the former, we introduce an additional binary cross entropy term into the objective for that explicitly penalizes the extractor for disagreeing with human token labels.", "Explicitly training a sequence tagging model as ext over heuristic targets from supp did not improve results in our experiments.", "However, as shown in Figure 4 and Figure 5, mixing in rationale-level supervision when training ext did improve performance on the Evidence Inference dataset by a small amount, although not for MultiRC.", "This suggests that explicit rationale supervision may at least sometimes improve performance, and this is not possible without a parameterized ext model.", "In Lei et al. 
"We have proposed FRESH as an architecture which, in addition to exceeding the performance of previous training regimes, guarantees that the extracted rationales are faithful.", "However, as noted in the introduction, another desirable trait of rationales is that they are judged as good by humans.", "To assess the plausibility of the resulting rationales (Herman, 2017; Wiegreffe and Pinter, 2019), we design a human user study.", "We evaluate the following attributes of plausibility: Sufficiency.", "Can a human predict the correct label given only the rationale?", "This condition aligns with Kim et al. (2016), with Lei et al. (2016), and with the confidence and adequate justification criteria of Ehsan et al. (2019).", "In our experiment, we simply substitute a human user for pred and evaluate performance.", "Readability and understandability.", "We test the user's preference for a certain style of rationale beyond their ability to predict the correct label.", "Our hypothesis is that humans will prefer contiguous to non-contiguous rationales.", "This condition aligns with coherency (Lei et al., 2016), and with human-likeness and understandability (Ehsan et al., 2019).", "We compare extracted rationales on two tasks, Movies and MultiRC, both of which include reference human rationales (DeYoung et al., 2020).", "We did not choose Evidence Inference for this set of experiments since the task requires expert knowledge.", "Recall that the rationalization task for the Movies dataset involves selecting those words or phrases associated with positive or negative sentiment.", "For MultiRC, the rationale must contain sufficient context to allow the user to discern whether the provided answer to the question is true, based on the information in the passage.", "We extract rationales, both contiguous and non-contiguous, from 100 randomly selected test set instances for the following methods: (1) human (reference label) rationales, (2) randomly selected rationales of length k, (3) rationales from the best Lei et al. (2016) models, and (4) rationales from the best FRESH models.", "We present each extracted rationale to three annotators.", "We ask them to perform the following tasks:", "1. Classify examples as either Positive or Negative (Movies), or as True or False (MultiRC);", "2. Rate their confidence on a 4-point Likert scale from not confident (1) to very confident (4);", "3. Rate how easy the text is to read and understand on a 5-point Likert scale from very difficult (1) to very easy (5).", "The first two tasks are designed to evaluate sufficiency, and the third readability and understandability.", "We provide images of the user interface in Appendix C.", 
"We validate the user interface design with gold-label human rationales.", "As expected, when using these rationales Turkers are able to perform the labelling task with high accuracy, and they do so with high confidence and readability (first rows of Tables 3 and 4).", "On average, annotators exhibit over 84% and 89% inter-annotator agreement on Movies and MultiRC, respectively.", "(Footnote 5: We use Amazon Mechanical Turk for the annotation task, and compensate Turkers at a rate of $0.24 per HIT. The pay rate is calculated based on the median HIT completion time in a preliminary experiment (2 minutes) and an hourly wage of $7.20. We require annotators to be within the U.S., but we do not explicitly test for English language proficiency.)", "(Footnote 6: We assign the majority predicted document label and the averaged Likert values for confidence and readability across the 3 annotators for each instance. We report human Accuracy as a measure of how well our annotators have done at predicting the correct document label from only the extracted rationale. All metrics are averaged over the 100 test documents.)", "[Tables 3 and 4 report, per rationale source: Human Acc., Confidence, and Readability.]", "We report results in Tables 3 and 4.", "We observe that humans perform comparably to the trained model (Table 2) at predicting document labels given only the model-extracted rationales.", "Humans perform at least as well using our extracted rationales as they do with other methods.", "They also exhibit a strong preference for contiguous rationales, supporting our hypothesis.", "Lastly, we observe that confidence and readability are high.", "Thus while our primary goal is to provide faithful rationales, these results suggest that those provided by FRESH are also reasonably plausible.", "This shows that faithfulness and plausibility are not mutually exclusive, but also not necessarily correlated.", "We have proposed Faithful Rationale Extraction from Saliency tHresholding (FRESH), a simple, flexible, and effective method to learn explainable neural models for NLP.", "Our method can be used with any feature importance metric, is very simple to implement and train, and empirically often outperforms more complex rationalized models.", "FRESH performs discrete rationale selection and ensures the faithfulness of provided explanations, regardless of the complexity of the individual components, by using independent extraction and prediction modules.", "This allows for contextualized models such as transformers to be used without sacrificing explainability (at least at the level of rationales).", "Further, we accomplish this without recourse to training techniques such as REINFORCE or the reparameterization trick; this greatly simplifies training.", "We showed empirically that FRESH outperforms existing models, recovering most of the performance of the original 'black-box' model.", "Additionally, we found FRESH rationales to be at least as plausible to human users as those from comparable end-to-end methods.", "We acknowledge some important limitations of this work.", "Here we have considered explainability as an instance-specific procedure.", "The final explanation provided by the model is limited to the tokens provided by the extraction method.", "Our framework does not currently support further pruning (or expanding) this token set once the rationale has been selected.", "In addition, while we do have a guarantee under our model about which part of the document was used to inform a given classification, this approach cannot readily say why this 
specific rationale was selected in the first place.", "Nor do we clearly understand how pred uses the extracted rationale to perform its classification.", "We view these as interesting directions for future work.", "We thank colleagues at Georgia Tech and Northeastern for their feedback, including Sandeep Soni, Diyi Yang, Ian Stewart, and Jiaao Chen.", "We also thank the anonymous reviewers for many useful comments and suggestions.", "This work was supported in part by the Army Research Office (W911NF1810328), and by the National Science Foundation (CAREER award 1750978).", "YP is a Bloomberg Data Science PhD Fellow." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "objective", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "method", "abstain", "abstain", "objective", "method", "abstain", "other", "other", "other", "other" ]
[ "Xiang Lisa Li Department of Computer Science Johns Hopkins University [email protected]", "Alexander M. Rush Department of Computer Science Cornell Tech [email protected]", "Abstract", "Text generation often requires high-precision output that obeys task-specific rules.", "This fine-grained control is difficult to enforce with off-the-shelf deep learning models.", "In this work, we consider augmenting neural generation models with discrete control states learned through a structured latent-variable approach.", "Under this formulation, task-specific knowledge can be encoded through a range of rich, posterior constraints that are effectively trained into the model.", "This approach allows users to ground internal model decisions based on prior knowledge, without sacrificing the representational power of neural generative models.", "Experiments consider applications of this approach for text generation.", "We find that this method improves over standard benchmarks, while also providing fine-grained control.", "A core challenge in using deep learning for NLP is developing methods that allow for controlled output while maintaining the broad coverage of data-driven methods.", "While this issue is less problematic in classification tasks, it has hampered the deployment of systems for conditional natural language generation (NLG), where users often need to control output through task-specific knowledge or plans.", "While there have been significant improvements in generation quality from automatic systems (Mei et al., 2016; Dusek and Jurcicek, 2016; Lebret et al., 2016b), these methods are still far from being able to produce controlled output (Wiseman et al., 2017).", "Recent state-of-the-art system have even begun to utilize manual control through rule-based planning modules (Moryossef et al., 2019; Puduppully et al., 2019).", "These models generate fluent output and provide flexible representations of their conditioning.", "Unfortunately, auto-regressive decoders are also globally dependent, which makes it challenging to incorporate domain constraints.", "Research into controllable deep models aims to circumvent the all-or-nothing dependency tradeoff of encoder-decoder systems and expose explicit higher-level decisions.", "One line of research has looked at global control states that represent sentence-level properties for the full decoder.", "For example, Hu et al. (2017) uses generative adversarial networks where the attributes of the text (e.g., sentiment, tense) are exposed.", "Another line of research exposes fine-level properties, such as phrase type, but requires factoring the decoder to expose local decisions, e.g. Wiseman et al. 
(2018).", "This work proposes a method for augmenting any neural decoder architecture to incorporate fine-grained control states.", "The approach first modi-fies training to incorporate structured latent control variables.", "Then, training constraints are added to anchor the state values to problem-specific knowledge.", "At test time, the control states can be ignored or utilized as grounding for test-time constraints.", "Technically, the approach builds on recent advances in structured amortized variational inference to enforce additional constraints on the learned distribution.", "These constraints are enforced through efficient structured posterior calculations and do not hamper modeling power.", "We demonstrate that the method can improve accuracy and control, while utilizing a range of different posterior constraints.", "In particular on two large-scale data-to-text generation datasets, E2E (Novikova et al., 2017) and WikiBio (Lebret et al., 2016a), our method increases the performance of benchmark systems while also producing outputs that respect the grounded control states.", "Our code is available at https://github.com/XiangLi1999/ PosteriorControl-NLG .", "Consider a conditional generation setting where the input consists of an arbitrary context x and the output y 1: T is a sequence of target tokens.", "We are interested in modeling latent fine-grained, discrete control states z = z 1: T each with a label in C .", "We assume that these states are weakly-supervised at training through problem-specific constraints.", "The goal is to induce a model of p ( y | x ) = (cid:80) z p ( y, z | x ) .", "Concretely, our experiments will focus on a data-to-text generation problem where x corresponds to a table of data, and y 1: T is a textual description.", "We hope to induce control states z that indicate which table fields are being described, and our weak supervision corresponds to indicators of known alignments.", "For a neural decoder, where h t ( y 1: t 1 , z 1: t 1 ) is the hidden state at time-step t , we might generate the latent class z t C and next token y t as,", "Here g is a parameterized embedding function and W, b are model parameters from .", "The log-likelihood of the model is given by L ( ) = log p ( y | x ) .", "The key latent term of interest is the posterior distribution p ( z | x, y ) , i.e. 
"The decoder parameterization makes this distribution intractable to compute in general.", "We instead use variational inference to define a parameterized variational posterior distribution, $q_\lambda(z \mid x, y)$, from a preselected family of possible distributions $Q$.", "(Footnote 1: Since our family is over a combinatorial set of $z_{1:T}$, this corresponds to a structured variational inference setting.)", "To fit the model parameters $\theta$, we utilize the evidence lower bound (for any variational parameters $\lambda$): $L(\theta) \geq \mathrm{ELBO}(\theta, \lambda) = \mathbb{E}_{z \sim q_\lambda(z \mid x, y)}[\log p_\theta(y, z \mid x)] + \mathrm{H}[q_\lambda(z \mid x, y)]$.", "Several recent works have shown methods for effectively fitting neural models with structured variational inference (Johnson et al., 2016; Krishnan et al., 2017; Kim et al., 2019).", "We therefore use these techniques as a backbone for enforcing problem-specific control.", "See Section 4 for a full description of the variational family used.", "Posterior regularization (PR) is an approach for enforcing soft constraints on the posterior distribution of generative models (Ganchev et al., 2010).", "Our goal is to utilize these soft constraints to enforce problem-specific weak supervision.", "Traditionally, PR uses linear constraints, which in the special case of expectation maximization for exponential families lead to convenient closed-form training updates.", "As this method does not apply to neural generative models, we resort to gradient-based methods.", "In this section, we develop a form of posterior regularization that accommodates the neural variational setting.", "Starting with the log-likelihood objective, $L(\theta)$, PR aims to add distributional constraints on the posterior.", "These soft constraints are expressed as a distributional penalty, $R_p(x, y) \geq 0$.", "For example, if we have partial information that a specific control state takes on label $c$, we can add a constraint $R_p(x, y) = 1 - p(z_t = c \mid x, y)$.", "We might also consider other distributional properties, for instance penalizing the entropy of a specific posterior marginal, $R_p(x, y) = \mathrm{H}_{z'}(z_t = z' \mid x, y)$.", "See Section 5 for more constraint examples.", "PR uses these soft constraints to regularize the model.", "Ideally we would penalize the posterior directly, but as noted above, computing this term in a black-box model is intractable.", "We therefore follow Ganchev et al. (2010) and use a relaxed version with a surrogate posterior $q(z \mid x, y)$: $L_{\mathrm{PR}}(\theta) = L(\theta) - \min_{q} \left[ \mathrm{KL}[q \,\|\, p_\theta(z \mid x, y)] + R_q(x, y) \right]$ (1).", "We can write this in terms of a variational lower bound on the relaxed PR objective.", "This allows us to relate the $q$ in the PRLBO to the variational posterior in the ELBO, simply by expanding the KL and rearranging terms: $\mathrm{PRLBO}(\theta, \lambda) = \mathrm{ELBO}(\theta, \lambda) - R_{q_\lambda}(x, y)$.", "To train, we jointly maximize the PRLBO over both the model parameters $\theta$ and the variational parameters $\lambda$ (which tightens the bound).", "Following standard practice, we use an amortized inference network, i.e. a variational autoencoder (Kingma and Welling, 2014; Mnih and Gregor, 2014; Rezende et al., 2014), to define $\lambda$.", 
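In schematic form, the training objective is just the ELBO minus the constraint penalty evaluated under q. The sketch below is our own, with placeholder names; q_marginal(t, c) stands in for the posterior marginal q(z_t = c | x, y) computed from the structured family described next.

def prlbo(elbo, q_marginal, constrained_positions):
    # Example penalty: for each weakly supervised position (t, c), add
    # 1 - q(z_t = c | x, y), pushing posterior mass onto the known state c.
    R = sum(1.0 - q_marginal(t, c) for t, c in constrained_positions)
    return elbo - R  # maximized jointly over theta and lambda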
"We now discuss how to efficiently compute the PRLBO under a structured variational family.", "We need a $q(z \mid x, y)$ for which we can efficiently (1) take samples, (2) compute entropy, and (3) compute the distributional penalties.", "This motivates the use of a factored conditional random field (CRF), defined by a potential function $\phi(x, y, z)$.", "At training time, $x, y$ are observed and $z$ is the latent variable that denotes the control states.", "We then specify a variational posterior distribution: $q(z \mid x, y) = \frac{\phi(x, y, z)}{\sum_{z'} \phi(x, y, z')}$.", "In this work, we focus on the semi-Markov CRF (Gales and Young, 1993; Sarawagi and Cohen, 2005), a common CRF family used in generation (Wiseman et al., 2018).", "It divides tokens into segmental spans, which are useful for generating entity mentions and commonly used phrases.", "This model divides the potential function into three parts: the emission potential for a span of tokens given a state, denoted $\phi^{(e)}$; the transition potential between states, $\phi^{(t)}$; and the length potential of a span length given a state, $\phi^{(l)}$.", "If our control states define a span from $i$ (inclusive) to $j$ (exclusive) labeled by $c$, we denote it as $z_{i:j} = c$.", "The potential function of a labeled sequence is defined as: $\phi(x, y, z) = \prod_{i<j<k} \phi^{(t)}(z_{i:j}, z_{j:k})\, \phi^{(l)}(j - i)\, \phi^{(e)}(x, y_{i:j}, z_{i:j})$ (3).", "For computational efficiency, we restrict all segment lengths to be at most $L$.", "(Footnote 2: The time complexity to compute the posterior moments of the full semi-Markov CRF is $O(|C|^2 n L)$.)", "With this model, we can use the forward-backward algorithm for all required inferences: exact sampling, computing the partition function, entropy, and the posterior marginals $q(z_{i:j} = c \mid x, y)$, useful for term (3) above.", "In Algorithm 1, we give a generic semi-Markov algorithm (Sarawagi and Cohen, 2005).", "We store two tables $\alpha$ and $\alpha'$, both of size $T \times |C|$.", "$\alpha_t(c)$ denotes the event that there is a transition at time $t$ from state $c$.", "$\alpha'_t(c)$ denotes the event that there is an emission starting from time $t$ in state $c$.", "Then we have the recursion for $\alpha'_t(c)$ by summing over different span lengths, and the recursion for $\alpha_t(c)$, which sums over all different state transitions.", "The algorithm is generic in the sense that different $(\oplus, \otimes)$ operators allow us to compute the different needed terms.", "For example, computing the partition function $Z = \sum_{z'} \phi(x, y, z')$ requires the $(+, \times)$ semiring (Goodman, 1999; Li and Eisner, 2009); other distributional terms can be computed by using the same algorithm with alternative semirings and backpropagation (see Footnote 3 below).", "To make the PR model concrete, we consider the problem of incorporating weak supervision from heuristic alignments in a data-to-text generation task.", "Assume that we are tasked with describing a table $x$ consisting of global field names $F$, each with a text value $v$, e.g. $x_f = v$.", 
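As a concrete instance of Algorithm 1 above, here is a minimal log-space sketch of the semi-Markov recursion, instantiated with the (logsumexp, +) semiring to compute the log-partition function log Z. This is our own rendering, not the paper's code: emit, trans, length, and init are assumed dictionaries of log-potentials, and the init term for the first segment is our addition, since the initialization details are elided in the text.

import math

def lse(xs):
    # logsumexp: the "addition" of the log semiring
    xs = [x for x in xs if x != -math.inf]
    if not xs:
        return -math.inf
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def log_partition(T, L, states, emit, trans, length, init):
    # alpha[j][c]: log total weight of all segmentations of tokens [0, j)
    # whose final segment is labeled c (max segment length L).
    alpha = [{c: -math.inf for c in states} for _ in range(T + 1)]
    for j in range(1, T + 1):
        for c in states:
            terms = []
            for l in range(1, min(L, j) + 1):
                i = j - l
                span = emit[(i, j, c)] + length[(l, c)]  # phi^(e), phi^(l)
                if i == 0:
                    terms.append(init[c] + span)
                else:
                    terms.extend(alpha[i][c2] + trans[(c2, c)] + span  # phi^(t)
                                 for c2 in states)
            alpha[j][c] = lse(terms)
    return lse(list(alpha[T].values()))  # log Z; O(|C|^2 T L) time

Swapping lse for max recovers the Viterbi score, matching the semiring-generic design of the algorithm.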
"To make the PR model concrete, we consider the problem of incorporating weak supervision from heuristic alignment in a data-to-text generation task.", "Assume that we are tasked with describing a table $x$ consisting of global field names $F$, each with a text value $v$, e.g. $x_f = v$.", "Not all global fields may be used in a given $x$; we use $f \in x$ to indicate a field that appears in $x$.", "We would like control states to indicate when each field is used in generation.", "Our alignment heuristic is that often these fields will be expressed using the identical text as in the table.", "While this heuristic obviously does not account for all cases, it is very common in natural language generation tasks, as evidenced by the wide use of copy-attention-based approaches (Gu et al., 2016; Gulcehre et al., 2016).", "To utilize these alignments, we use the notation $(i, j, f) \in A(x, y)$ to indicate that a span $i:j$ in the training text $y$ overlaps directly with a field $f \in x$.", "Table 2 gives an example of the notation.",
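A minimal sketch of this alignment heuristic; the matching rules here (whole-span exact match after whitespace tokenization) and the helper name are illustrative assumptions:

```python
def alignments(table, tokens):
    """Compute A(x, y): spans of the text y that exactly match the value
    of some table field f, yielding triples (i, j, f) with span y[i:j).
    """
    A = []
    for field, value in table.items():  # x maps field names to text values
        v = value.split()
        for i in range(len(tokens) - len(v) + 1):
            if tokens[i:i + len(v)] == v:
                A.append((i, i + len(v), field))
    return A

# e.g. alignments({"name": "Clowns", "eatType": "coffee shop"},
#                 "Clowns is a coffee shop near Clare Hall".split())
# -> [(0, 1, "name"), (3, 5, "eatType")]
```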
"One-to-One Constraints: We first consider one-to-one constraints, where we assume that we have a static mapping from fields to states, $\pi : F \mapsto \mathcal{C}$.", "Given this mapping, we need to add penalties to encourage the semi-Markov model to overlap with the given weak supervision.", "To enforce soft alignments, we define three posterior constraint types and their computation, as shown in Table 1 (Left).", "The three constraints are: i) Inclusion: if a span in $y$ aligns with a field value $f$, then label that span $\pi(f)$, the state allocated to that field; ii) Exclusion: a span should only have a state $\pi(f)$ if it aligns with the field value of type $f$; iii) Coverage: the usage count of state $\pi(f)$ should be 1 if $f$ is in $x$.", "One-to-Many Constraints: We also consider the case when it is infeasible to specify a hard mapping between the fields and the states.", "For example, $F$ could be unbounded or large, whereas we hope to keep the cardinality of states small for computational efficiency.", "We propose a method of inducing a dynamic soft mapping $\pi(c \mid f)$ as we train the model, and impose constraints on the mapping from table fields to the state names.", "First, we would like the distribution of state given table field to be consistent, so one table field is mapped to roughly one state.", "Second, we want to make use of the state space as much as possible by requiring a diverse usage of states.", "In order to enforce these properties, we introduce the dynamic mapping as a second amortized variational distribution $\pi(c \mid f; M) = \mathrm{softmax}(M f)$, which gives the probability that a table field $f$ takes on state $c$.", "As shown in Table 1 (Right), we define three constraints that regularize the local $q$ with respect to the global $\pi$: i) Sparsity: each vocabulary entry in $\pi$ should have low entropy; ii) Fit: the global $\pi$ should represent the class-name distribution posterior of each table field, by minimizing the cross entropy between types $\pi(c \mid f)$ and tokens $q(z_{i:j} \mid x, y)$ for all $(i, j, f) \in A(x, y)$; iii) Diversity: the aggregate class label distribution over all the tokens in a sentence should have high entropy.", "In addition to previously mentioned work, other researchers have noted the lack of control of deep neural networks and proposed methods at the sentence level, word level, and phrase level.", "For example, Peng et al. (2018) and Luo et al. (2019) control the sentiment in longer-form story generation.", "Others aim for sentence-level properties such as sentiment, style, tense, and specificity in generative neural models (Hu et al., 2017; Oraby et al., 2018; Zhang et al., 2018; Shen et al., 2017).", "Closest to this work is that of Wiseman et al. (2018), who control phrase-level content by using a neuralized hidden semi-Markov model for generation itself.", "Our work differs in that it makes no independence assumption on the decoder model, uses a faster training algorithm, and proposes a specific method for adding constraints.", "Finally, there is a line of work that manipulates the syntactic structure of generated texts, by using some labeled syntactic attribute (e.g., parses) or an exemplar (Deriu and Cieliebak, 2018; Colin and Gardent, 2018; Iyyer et al., 2018; Chen et al., 2019).", "While our work uses control states, there is no inherent assumption of compositional syntax or grammar.", "Posterior regularization (PR) is mostly used in standard EM settings to impose constraints on the posterior distribution that would otherwise be intractable (or computationally hard) in the prior.", "Ganchev et al. (2010) applies posterior regularization to word alignment, dependency parsing, and part-of-speech tagging.", "Combining powerful deep neural networks with structured knowledge has been a popular area of study: Xu et al. (2019) applies PR to multi-object generation to limit object overlap; Bilen et al. (2014) focuses on object detection, and uses PR features to exploit mutual exclusion.", "In natural language processing, Hu et al. (2016a,b) propose an iterative distillation procedure that transfers logic rules into the weights of neural networks, as a regularization to improve accuracy and interpretability.", "Finally, the core of this work is the use of amortized inference / variational autoencoders to approximate the variational posterior (Kingma and Welling, 2014; Mnih and Gregor, 2014; Rezende et al., 2014).", "We rely heavily on a structured distribution, either linear-chain or semi-Markov, which was introduced in structured VAEs (Johnson et al., 2016; Krishnan et al., 2017; Ammar et al., 2014).", "Our setting and optimization are based on Kim et al. (2019), who introduce a latent tree variable in a variational autoencoding model with a CRF as the inference network, and on Yin et al. (2018), who use an encoder-decoder model as the inference network.",
"Data and Metrics: We consider two standard neural generation benchmarks, the E2E (Novikova et al., 2017) and WikiBio (Lebret et al., 2016a) datasets, with examples shown in Figure 1.", "The E2E dataset contains approximately 50K examples with 8 distinct fields and 945 distinct word types; it contains multiple test references for one source table.", "We evaluate in terms of BLEU (Papineni et al., 2002), NIST (Belz and Reiter, 2006), ROUGE-L (Lin, 2004), CIDEr (Vedantam et al., 2015) and METEOR (Lavie and Agarwal, 2007), using the official scoring scripts (available at https://github.com/tuetschek/e2e-metrics).", "[Figure 1 example: Table (x): name[Clowns], eatType[coffee shop], food[Chinese], customer-rating[1 out of 5], area[riverside], near[Clare Hall]; Ref. 1: Clowns is a coffee shop in the riverside area near Clare Hall that has a rating 1 out of 5.]", "The WikiBio dataset contains approximately 700K examples, 6K distinct table field types, and approximately 400K word types; it contains one reference for one source table.", "We follow the metrics from Lebret et al. (2016a) and evaluate the BLEU, NIST, and ROUGE-4 scores.", "Architecture and Hyperparameters: For all tasks, we use an encoder-decoder LSTM for the generative model.", "We follow recent state-of-the-art works in parametrizing our encoder, and we use copy attention and dual attention (Gu et al., 2016; Gulcehre et al., 2016; Liu et al., 2018); full model architectures are given in the supplement.", "The inference network scores are computed using a BiLSTM.", "We compute the emission scores $\Phi^{(e)}$ using span embeddings (Wang and Chang, 2016; Kitaev and Klein, 2018; Stern et al., 2017); the transition scores $\Phi^{(t)}$ by a dot product between embedding vectors for the class labels; the length scores $\Phi^{(l)}$ are kept uniform, as in Wiseman et al. (2018).", "Additional details are in the supplement.", "At training time, we use a warm-up rate for alleviating posterior collapse in the ELBO: we warm up the ELBO objective by linearly annealing the coefficient on the terms $\sum_{t=1}^{T} \log p(z_t \mid z_{<t}, y_{<t})$ and $\mathrm{H}[q(z \mid x, y)]$ from 0 to 1, as implemented in Kim et al. (2019).",
"We use the REINFORCE algorithm to do Monte Carlo estimation of the stochastic gradient.", "We choose the control variate to be the mean of the samples (Mnih and Rezende, 2016).", "At decoding time, we only use the generative model.", "We use beam search with length normalization to jointly generate both the control states and the sentences.", "To obtain controlled generation, we observe the control states, and apply constrained beam search to $p(y \mid x, z)$.", "Baselines: For generation on E2E, we compare externally against 4 systems: E2E-BENCHMARK (Dušek and Jurčíček, 2016) is an encoder-decoder network followed by a reranker, used as the shared-task benchmark; NTEMP, a controllable neuralized hidden semi-Markov model; NTEMP+AR, the product of experts of both an NTemp model and an autoregressive LSTM network (Wiseman et al., 2018); SHEN19 (Shen et al., 2019) is a pragmatically informed model, which is the current state-of-the-art system on the E2E dataset.", "We also compare internally with ablations of our system: ENCDEC is a conditional model $p(y \mid x)$ trained without control states.", "PC$_0$ is the posterior control model with no constraints.", "It uses the structured encoder with the PR coefficient set to 0.", "PC-hard is our model with hard constraints, which assumes fully observed control states.", "These control states are obtained by mapping tokens with lexical overlap to their designated state; otherwise we map to a generic state.", "We train a seq2seq model $p(y, z \mid x)$ with full supervision of both control states and target text.", "Our main model is PC$_\lambda$, which applies PR with coefficient given by the hyperparameter $\lambda$.", "For WikiBio, we compare externally against 5 systems: NTEMP and NTEMP+AR as above; LEBRET16 (Lebret et al., 2016a), which uses copy attention and an NNLM; LIU18 (ENCDEC), which is our base encoder-decoder LSTM model; and LIU18 (FieldGating), which uses a field-gating table encoder and a decoder with dual attention (Liu et al., 2018).", "For internal comparison on WikiBio, we compare between the one-to-one and one-to-many constraints in §5.", "[Table 3: Automatic metrics for text generation. E2E (BLEU / NIST / ROUGE / CIDEr / MET), validation: E2E-BENCH* 69.25 / 8.48 / 72.6 / 2.40 / 47.0; ENCDEC* 70.81 / 8.37 / 74.1 / 2.48 / 48.0; NTEMP 64.53 / 7.66 / 68.6 / 1.82 / 42.5; NTEMP+AR 67.70 / 7.98 / 69.5 / 2.29 / 43.1; PC$_0$ 69.10 / 8.32 / 72.6 / 2.35 / 47.3; PC-hard 69.36 / 8.36 / 71.3 / 2.29 / 46.4; PC$_\lambda$ 72.93 / 8.63 / 75.5 / 2.54 / 48.4. E2E, test: E2E-BENCH* 65.93 / 8.59 / 68.5 / 2.23 / 44.8; SHEN19* 68.60 / 8.73 / 70.8 / 2.37 / 45.3; ENCDEC* 66.34 / 8.55 / 68.0 / 2.18 / 44.3; NTEMP 55.17 / 7.14 / 65.7 / 1.70 / 41.9; NTEMP+AR 59.80 / 7.56 / 65.0 / 1.95 / 38.8; PC$_\lambda$ 67.12 / 8.52 / 68.7 / 2.24 / 45.4. WikiBio (BLEU / NIST / R-4), test: LEBRET16* 34.7 / 7.98 / 25.8; LIU18(ENCDEC)* 43.7 / - / 40.3; LIU18(FieldGating)* 44.9 / - / 41.2; NTEMP 34.2 / 7.94 / 35.9; NTEMP+AR 34.8 / 7.59 / 38.6; PC one-to-one 44.7 / 9.92 / 43.3; PC one-to-many 44.2 / 9.59 / 41.5.]", "PC one-to-one applies the one-to-one posterior constraints (left of Table 1).", "PC one-to-many applies the one-to-many posterior constraints (right of Table 1).", "Table 3 shows the main results for E2E and WikiBio, comparing to both standard neural models and controllable systems.", "On E2E (left), our posterior control model outperforms the neural benchmark system on all validation metrics and most of the test metrics.", "It also achieves results comparable to or better than a specialized encoder-decoder system.", "It has significantly better performance than the controllable NTemp and NTemp+AR in all metrics on both validation and test.",
"This demonstrates that the PC model provides interpretable and controllable states without sacrificing any representation power or generation performance.", "For internal comparison, having soft constraints on the posterior outperforms the systems PC-hard (forced hard constraints) and PC$_0$ (no constraints).", "Anecdotally, we find that if two fields have the same value, then the hard-coding system is often forced into the wrong decision.", "Similarly, removing posterior regularization altogether leads to a slightly weaker performance than our controlled model.", "On the larger WikiBio dataset (right), our model also significantly outperforms both the controllable NTemp and NTemp+AR baselines in all three metrics.", "It gives improvements over Liu et al. (2018)'s strong encoder-decoder style model.", "The promising result on the WikiBio dataset suggests that the method scales to larger datasets and that the PR style works well in handling large field spaces.", "In addition, we find that dynamic constraints are feasible compared with static constraints (we believe this is because the modeling burden on PC one-to-many is heavier, since it also needs to figure out the clustering).", "Overall, the dynamic framework opens up the possibility of generalizing to work well with a wider set of constraints.", "Qualitative Analysis: Table 4 shows how control states (shown by different colors) are used in generated sentences.", "We use examples generated by the PC system on the WikiBio dataset.", "We obtain outputs by beam search over control states and words.", "The first block contains examples with relatively complete coverage by the semantically grounded control states, including name, birth date, death date, occupation and nationality.", "We note that when a control state is selected, the textual span covered by the control state tends to respect truthfulness by copying from the table.", "The second block shows a longer example that uses less of the source, but still remains truthful with respect to the table.", "Table 5 (left) qualitatively demonstrates the multi-modality of the output of the system on E2E.", "We particularly note how the final system is trained to associate control states with field types.", "Here we fix the prior on $z$ to 8 different sequences of class labels shown in different colors, and do constrained beam search on the generative model by holding $z$ fixed and decoding from the model $p(y \mid x, z)$.", "Controllability: Next we consider a quantitative experiment on model control.", "Assuming we have a mapping from control states to fields, ideally, at test time $z$ should use the right states from the source $x$.", "Let $S = \{(i, j, f) : z_{i:j} = c, f \in x, \pi(f) = c\}$ be the field states used by $z$.", "Define the field word overlap between $x$ and $y$ as $\#\mathrm{match} = \sum_{(i,j,f) \in S} \text{unigram-overlap}(y_{i:j}, x_f)$.", "We can compute precision, recall, and coverage under this metric as $P = \frac{\#\mathrm{match}}{\sum_{(i,j,f) \in S} (j - i)}$, $R = \frac{\#\mathrm{match}}{\sum_{f \in x} |x_f|}$, and $C = \frac{|S|}{|\{c : c \in x\}|}$.", "Under these metrics, we examine control on the E2E dataset (where we remove the binary table field family friendly , which is never expressed by lexical match).", "The PC model with soft posterior constraints performs better than having hard constraints on all three metrics.", "Having $P = 1$ means that the control states are a strong signal to copy from the table, and $C = 1$ means that control states learn to cover all table fields.",
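These three quantities can be computed directly from the predicted state sequence; a sketch with hypothetical helper names, where the unigram overlap counts span tokens that also occur in the field value:

```python
def control_metrics(S, table, tokens):
    """Precision / recall / coverage of control states against the table.

    S     : list of (i, j, f) triples -- spans y[i:j) assigned the state
            mapped to field f (i.e., z_{i:j} = pi(f))
    table : dict mapping each field f in x to its value string
    """
    def overlap(span_toks, field_toks):
        # unigram overlap: tokens of the span that also appear in the field
        return sum(1 for w in span_toks if w in field_toks)

    match = sum(overlap(tokens[i:j], table[f].split()) for i, j, f in S)
    precision = match / sum(j - i for i, j, f in S)
    recall = match / sum(len(v.split()) for v in table.values())
    coverage = len(S) / len(table)  # states used vs. fields present in x
    return precision, recall, coverage
```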
"On WikiBio, the model has a precision of 0 .", "83 on the, meaning that on average, when we generate a good control state, 83% of the generated tokens will match the table content.", "Since only a fraction of the source table in WikiBio is used, recall and coverage are less applicable.", "Distributional Metrics Table 5 (right) shows distributional metrics related to the optimization of the generative model and the inference network.", "The reconstruction perplexity, Rec.", "is much lower than the full perplexity, PPL and the KL divergence between the variational posterior and the conditional prior is highly non-zero.", "These observations indicate that latent variables are being used in a non-trivial way by the generative model.", "It also suggests the variational model is not experiencing posterior collapse.", "Limitations Given the promise of PR as a technique for inducing control states, it is worth noting some of the current limitations to our specific application of the method.", "Currently, we use simple rules which do not generalize well to paraphrase.", "Our weak supervision relies on direct overlap to align states and fails on aligning phrases like less then 10 dollars that are expressed as cheap .", "Additionally, while at test time, our method is comparable to a standard decoder model, it does require slightly longer to train due to both the dynamic program and the requirement to compute multiple samples.", "This work introduces a method for controlling the output of a blackbox neural decoder model to follow weak supervision.", "The methodology utilizes posterior regularization within a structured variational framework.", "We show that this approach can induce a fully autoregressive neural model that is as expressive as standard neural decoders but also utilizes meaningful discrete control states.", "We show this decoder is effective for text generation while inducing meaningful discrete representations.", "Induction of grounded control states opens up many possible future directions for this work.", "These states can be used to provide integration with external rule-based systems such as hard constraints at inference time.", "They also can be used to provide tools for human-assisted generation.", "Another direction is to improve the sources of weak supervision and such as interactive new constraints provided by users.", "One could also explore alternative posterior constraints based on pre-trained models for summarization or paraphrase tasks to induce semantically grounded latent variables.", "Finally, it would be interesting to explore alternative training methods for these models, such as reducing reliance on hard sampling through better relaxations of structured models.", "Thanks to Yoon Kim, Jambay Kinley, and Tristan Yang for ideas and discussion.", "AMR was supported by NSF CAREER 1845664 and IIS 1901030.", "XLL was supported by a Sony Research Award.", "We thank the anonymous reviewers for helpful comments." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "other", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "method", "objective", "other", "method", "other", "other", "other", "other", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "other", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "Knowledge Graph (KG) and attention mechanism have been demonstrated effective in introducing and selecting useful information for weakly supervised methods.", "However, only qualitative analysis and ablation study are provided as evidence.", "In this paper, we contribute a dataset and propose a paradigm to quantitatively evaluate the effect of attention and KG on bag-level relation extraction (RE).", "We find that (1) higher attention accuracy may lead to worse performance as it may harm the model's ability to extract entity mention features; (2) the performance of attention is largely influ-enced by various noise distribution patterns, which is closely related to real-world datasets; (3) KG-enhanced attention indeed improves RE performance, while not through enhanced attention but by incorporating entity prior; and (4) attention mechanism may exacerbate the issue of insufficient training data.", "Based on these findings, we show that a straightforward variant of RE model can achieve sig-nificant improvements (6% AUC on average) on two real-world datasets as compared with three state-of-the-art baselines.", "Our codes and datasets are available at https://github.com/zig-kwin-hu/how-KG-ATT-help.", "Relation Extraction (RE) is crucial for Knowledge Graph (KG) construction and population.", "Most recent efforts rely on neural networks to learn effi-cient features from large-scale annotated data, thus correctly extract the relationship between entities.", "To save the manual annotation cost and alleviate the issue of data scarcity, distant supervision relation extraction (DSRE) (Mintz et al., 2009) is proposed and becomes increasingly popular as it can automatically generate large-scale labeled data.", "DSRE is based on a simple yet effective principle: if there is a relation between two entities in KG, then all sentences containing mentions of both entities are assumed to express this relation and will form a sentence bag as its annotations.", "Although effective, distant supervision may introduce noise to a sentence bag when the assump-tion fails some sentences are not describing the target relation (Zeng et al., 2015) (a.k.a. 
"To alleviate the negative impacts of noise, recent studies (Lin et al., 2016; Ji et al., 2017; Du et al., 2018; Li et al., 2020) leveraged attention to select informative instances from a bag.", "Furthermore, researchers introduced KG embeddings to enhance the attention mechanism (Hu et al., 2019; Han et al., 2018a).", "The basic idea is to utilize entity embeddings as the query to compute attention scores, so that the sentences with high attention weights are more likely to be valid annotations (Zhang et al., 2019).", "Previous studies have shown performance gains on DSRE with the attention module and KG embeddings; however, it is still not clear how these mechanisms work and whether there are any limitations in applying them.", "In this paper, we aim to provide a thorough and quantitative analysis of the impact of both the attention mechanism and KG on DSRE.", "By analyzing several public benchmarks, including NYT-FB60K (Han et al., 2018a), we observe many disturbing bags, in which all of the bag's sentences are valid annotations or all are noisy, which leads to the failure of attention.", "As shown in Figure 1, all of the annotations in the first disturbing bag are valid, while the learned attention assigns the second annotation a very low weight, which suggests an inefficient utilization of annotations and exacerbates the data sparsity issue.", "Or, in the second bag, all sentences are noisy; can attention and KG still improve the performance?", "If so, how do they work, and to what extent can they tolerate these disturbing bags?", "Answering these questions is crucial, since this type of noise is common in practice.", "Unveiling their working mechanism shall shed light on future research directions, not limited to DSRE.", "To achieve this, we propose a paradigm based on a newly curated DSRE benchmark, BagRel-Wiki73K, extracted from FewRel (Han et al., 2018b) and Wikidata, for quantitative analysis of attention and KG.", "With extensive experiments, we conclude the following innovative and inspiring findings: (1) the accuracy of attention is inversely proportional to the total noise ratio and disturbing bag ratio of the training data; (2) attention effectively selects valid annotations by comparing their contexts with the semantics of relations, and thus tends to rely more on the context to make predictions.", "However, it somewhat lowers the model's robustness to noisy sentences that do not express the relation; (3) KG-enhanced attention indeed improves RE performance, surprisingly not via enhanced attention accuracy, but by incorporating entity features to reduce the demand for contexts when facing noise; (4) attention could hurt the performance, especially when there is insufficient training data.", "Based on the above observations, we propose a new straightforward yet effective model based on pre-trained BERT (Devlin et al., 2018) for RE with Concatenated KG Embedding, namely BRE+CE.", "Instead of in-bag attention, it breaks the bag and ensembles the results of all sentences belonging to the bag.", "For each sentence, we directly incorporate entity embeddings into BERT, rather than using them to enhance attention, to improve the robustness of extracting both context and mention features.", "BRE+CE significantly outperforms existing state-of-the-art methods on two publicly available datasets, NYT-FB60K (Han et al., 2018a) and GIDS-FB8K (Jat et al., 2018), by 6% AUC on average.", "We summarize our contributions as follows: to the best of our knowledge, our proposed framework is the first work to quantitatively analyze the working mechanism of Knowledge Graph and attention for bag-level RE.",
"We have conducted extensive experiments to inspire and support the above findings.", "We demonstrate that a straightforward method based on the findings can achieve improvements on public datasets.", "To address the issue of insufficient annotations, Mintz et al. (2009) proposed distant supervision to generate training data automatically, which also introduces much noise.", "Since then, DSRE has become a standard solution that relies on multi-instance learning from a bag of sentences instead of a single sentence (Riedel et al., 2010; Hoffmann et al., 2011).", "The attention mechanism (Lin et al., 2016) accelerated this trend via its strong ability to handle noisy instances within a bag (Liu et al., 2017; Du et al., 2018).", "Aside from intra-bag attention, Ye and Ling (2019) also designed inter-bag attention, simultaneously handling bags with the same relation.", "To deal with only-one-instance bags, Li et al. (2020) utilized a new selective gate (SeG) framework to independently assign weights to each sentence.", "External KG is also incorporated to enhance the attention module (Han et al., 2018a; Hu et al., 2019).", "However, due to the lack of sentence-level ground truth, it is difficult to quantitatively evaluate the performance of the attention module.", "Previous researchers tend to provide examples as a case study.", "Therefore, we aim to fill in this research gap by constructing a dataset and providing a framework for thorough analysis.", "A Knowledge Graph (KG) is a directed graph $G = \{E, R, T\}$, where $E$ denotes the set of entities, $R$ denotes the set of relation types in $G$, and $T = \{(h, r, t)\} \subseteq E \times R \times E$ denotes the set of triples.", "KG embedding models, e.g., RotatE (Sun et al., 2019), can preserve the structural information in the learned vectors $e_h$, $e_t$ and $e_r$.", "We adopt TransE (Bordes et al., 2013) in our experiments.", "Bag-level relation extraction (RE) takes a bag of sentences $B = \{s_1, s_2, \ldots, s_m\}$ as input.", "Each sentence $s_i$ in the bag contains the same entity pair $(h, t)$, where $h, t \in E$.", "The goal is to predict a relation $y \in R$ between $(h, t)$.", "Attention-based bag-level RE uses attention to assign a weight to each sentence within a bag.", "Given a bag $B$ from the dataset $D$, an encoder is first used to encode all sentences from $B$ into vectors $\{s'_1, s'_2, \ldots, s'_m\}$ separately.",
"Then, an attention module computes an attention weight $\alpha_i$ for each sentence and outputs the weighted sum of $\{s'_i\}$ as $s$ to denote $B$: $e_i = v_y \cdot s'_i$ (Eq. 1), $\alpha_i = \frac{\exp(e_i)}{\sum_{j=1}^{m} \exp(e_j)}$ (Eq. 2), $s = \sum_{i=1}^{m} \alpha_i s'_i$ (Eq. 3), where $v_y$ is the label embedding of relation $y$ in the classification layer; we denote this attention module as ATT in the rest of the paper.", "KG-enhanced attention aims to improve $v_y$ with the entities $e_h$ and $e_t$ (Han et al., 2018a): $r_{ht} = e_h - e_t$ (Eq. 4), $e_i = r_{ht} \cdot \tanh(W_s s'_i + b_s)$ (Eq. 5), where $r_{ht}$ is regarded as a latent relation embedding.", "We mark this way of computing $\alpha_i$ as KA.", "$W_s$ and $b_s$ are learnable parameters.", "Given a bag representation $s$, the classification layer further predicts a confidence for each relation: $o = W_b s + b_b$ (Eq. 6), $P(y \mid B) = \mathrm{Softmax}(o)$ (Eq. 7), where $o$ is a logit vector.", "$W_b$ and $b_b$ are learnable parameters.", "During training, the loss is computed by $L = -\sum_{i=1}^{n} \log(P(y_i \mid B_i))$ (Eq. 8), where $n$ is the number of training bags in $D$.", "Since the classification layer is linear, we can rewrite the bag's logit vector $o$ as a weighted sum of each sentence's logit vector $o_i$: $o_i = W_b s'_i + b_b$ (Eq. 9), $o = \sum_{i=1}^{m} \alpha_i o_i$ (Eq. 10).", "From Equation 10, we can see that the model's output on the whole bag depends on three aspects: (1) the model's output on valid sentences within the bag; (2) the model's output on noisy sentences within the bag; and (3) the attention weights assigned to valid sentences and noisy ones.",
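For concreteness, a minimal PyTorch sketch of the ATT module (Eqs. 1-3 and 9-10); shapes and names are illustrative assumptions. By linearity of the classification layer, pooling representations (Eqs. 3 and 6) and pooling per-sentence logits (Eqs. 9-10) give the same bag logits:

```python
import torch
import torch.nn.functional as F

def att_bag_logits(S, v_y, W_b, b_b):
    """Selective attention over one bag.

    S   : (m, d) encoded sentences s'_1..s'_m of the bag
    v_y : (d,)   label embedding of the queried relation y
    Returns the bag logit vector o and the attention weights alpha.
    """
    e = S @ v_y                   # Eq. 1: per-sentence attention logits
    alpha = F.softmax(e, dim=0)   # Eq. 2: normalize over the bag
    o_i = S @ W_b.T + b_b         # Eq. 9: per-sentence logit vectors
    o = alpha @ o_i               # Eq. 10: bag logits as a weighted sum
    return o, alpha
```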
"To quantitatively evaluate the effect of attention and KG on bag-level RE, we first define two metrics to measure the noise pattern (Section 4.1).", "Then, we construct a KG and a bag-level RE dataset (Section 4.2).", "Finally, we introduce a general evaluation framework to assess attention, KG and the entire RE model (Section 4.3).", "To analyze how the attention module functions on different noise patterns, we first design two metrics to describe the noise pattern: Noise Ratio (NR) and Disturbing Bag Ratio (DR).", "Noise Ratio (NR) represents the proportion of noisy sentences in the dataset.", "Given a bag $B_i$ and its relation label $y_i$, a sentence $s_{ij} \in B_i$ is noisy if its context does not express $y_i$.", "Suppose $Isn(s_{ij}, y_i)$ is an indicator function that tells whether $s_{ij}$ is noise.", "Then NR is defined as $NR = \frac{\sum_{i=1}^{n} \sum_{j=1}^{|B_i|} Isn(s_{ij}, y_i)}{\sum_{i=1}^{n} |B_i|}$ (Eq. 11), where $|B_i|$ is the size of $B_i$ and $n$ is the total number of bags.", "Disturbing Bag Ratio (DR) means the proportion of disturbing bags in the dataset.", "A bag is disturbing if all sentences in it are valid or all sentences are noisy.", "Formally, we use the function $Isd(B_i)$ to indicate whether a bag is disturbing or not: $Isd(B_i) = \prod_{j=1}^{|B_i|} Isn(s_{ij}, y_i) + \prod_{j=1}^{|B_i|} (1 - Isn(s_{ij}, y_i))$ (Eq. 12).", "Then we define DR as $DR = \frac{\sum_{i=1}^{n} Isd(B_i)}{n}$ (Eq. 13).", "Dataset Construction: Based on FewRel and Wikidata, we construct a bag-level RE dataset containing multiple training sets with different noise patterns, a test set and a development set.", "For each sentence in the bags, there is a ground-truth attention label indicating whether it is a valid sentence or noise.", "We also construct a KG containing all entities in the RE dataset by retrieving one-hop triples from Wikidata.", "Synthesize Sentence: FewRel is a sentence-level RE dataset, including 80 relations.", "For each relation, there are 700 valid sentences.", "Each sentence has a unique entity pair.", "Every sentence, along with its entities and relation label, forms a tuple $(s, h, t, y)$.", "We thus synthesize valid and noisy sentences for the same entity pair for data augmentation.", "The first step is to divide the sentences of each relation into 3 sets: train_FewRel, test_FewRel and dev_FewRel, where each set has 500, 100 and 100 sentences.", "Then, for each tuple $(s, h, t, y)$ in the set, we aim to augment it to a bag $B$, where all of its sentences contain $(h, t)$.", "Besides, the sentences in $B$ are either the original $s$, a synthesized valid sentence, or a synthesized noisy sentence.", "We synthesize sentences in the form of $(s', h, t, y, z)$, where $z$ denotes the attention label (1 for valid, 0 for noisy).", "Specifically, to synthesize a sentence, we randomly replace the source pair of entity mentions with another target entity pair while keeping the context unchanged.", "Thus, if the context expresses the same relation type held by the entity pair, we can automatically assign an attention label.", "We illustrate the synthesizing process in Figure 2.", "$(s_2, h_2, t_2, crosses)$ is a sentence from train_FewRel.", "To generate a valid sentence, we randomly select another sentence $(s_1, h_1, t_1, crosses)$, which is labeled with the same relation as $s_2$, from train_FewRel.", "Then we replace its entity mentions $h_1$ and $t_1$ with $h_2$ and $t_2$.", "The output is $(s_4, h_2, t_2, crosses, 1)$.", "Since its context correctly describes crosses , we regard $s_4$ as valid.", "For the noisy sentence, we randomly select a sentence $(s_3, h_3, t_3, isA)$ under another relation.", "With a similar process as for $s_4$, we get a synthesized sentence $(s_5, h_2, t_2, crosses, 0)$.", "Because the context of $s_5$ does not express the target relation, we label it as noise.", "Training Sets with Different Noise Patterns: As defined in Section 4.1, we use NR and DR to measure the noise pattern of a bag-level RE dataset.", "By controlling the number of synthesized noisy sentences in each bag and the total ratio of noise among all sentences, we can construct several training sets with different patterns.", "In the following sections, we denote a training set whose NR is $x$ and DR is $y$ as train$_{x,y}$.", "Higher $x$ and $y$ indicate that noisy sentences and disturbing bags account for a larger proportion.", "For example, in Figure 2, assuming there are 4 sentences in train_FewRel, for each sentence we synthesize two noisy sentences that form a bag together with the original sentence.", "Thus each bag contains 3 sentences (1 valid and 2 noisy), and its NR is 2/3 and DR is 0.", "For the other 3 sets, the number of synthesized noisy sentences equals the sum of the original valid sentences and synthesized valid sentences.", "Thus they all have an NR of 1/2.", "Since we define bags containing no valid sentences or no noisy sentences as disturbing bags, the third set and fourth set have 2 and 4 disturbing bags, with a DR of 1/2 and 1, respectively.", "Test Set and Development Set: We also construct a test and a development set.", "Similar to the second set in Figure 2, each bag in the test/dev sets contains two sentences; the NR of both sets is 1/2 while the DR is 0.",
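Both noise-pattern statistics are directly computable from the gold attention labels $z$ carried by the synthesized sentences; a small sketch, with each bag given as its list of $z$ labels (1 = valid, 0 = noisy):

```python
def noise_ratio(bags):
    """NR (Eq. 11): fraction of noisy sentences over all sentences."""
    noisy = sum(1 for bag in bags for z in bag if z == 0)
    total = sum(len(bag) for bag in bags)
    return noisy / total

def disturbing_bag_ratio(bags):
    """DR (Eq. 13): fraction of bags that are all-valid or all-noisy."""
    disturbing = sum(1 for bag in bags if all(bag) or not any(bag))
    return disturbing / len(bags)

# e.g. bags = [[1, 0, 0], [1, 1], [0, 0]] -> NR = 4/7, DR = 2/3
```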
"I.e., in every bag of the test/dev sets, there is one valid sentence and one noisy sentence.", "Instead of multiple test sets with different noise patterns, we only have one test set, so that the evaluation of different models is consistent.", "To avoid information leaks, when constructing train$_{x,y}$, the test set and the development set, the contexts of synthesized sentences only come from train_FewRel, test_FewRel and dev_FewRel, respectively.", "The final BagRel contains 9 training sets, 1 test and 1 development set, as listed in Table 1.", "The NR of the training sets has three options: 1/3, 1/2 or 2/3, and similarly, DR can be 0, 1/2 or 1.", "The NR of both the test and development sets is 1/2, while their DR is 0.", "All data sets contain 80 relations.", "For training sets whose NR is 1/3, 1/2 and 2/3, every bag in these sets contains 3, 2 and 3 sentences, respectively.", "KG Construction: To evaluate the impact of KG on the attention mechanism, we also construct a KG based on Wikidata.", "Denoting the set of entities appearing in FewRel as $E$, we link each entity in $E$ to Wikidata by its Freebase ID, and then extract all triples $T = (h, r, t)$ in Wikidata where $h, t \in E$.", "To evaluate the effect of structural information from the KG, we also construct a random KG whose triple set is $T'$.", "Specifically, for each triple $(h, r, t)$ in $T$, we corrupt it into $(h, r', t)$ by replacing $r$ with a random relation $r' \neq r$.", "Thus the prior knowledge within the KG is destroyed.", "KG-73K and KG-73K-random have the same scale: 72,954 entities, 552 relations and 407,821 triples.", "Finally, we obtain BagRel-Wiki73K, including the bag-level RE sets and KG-73K.", "We first define several measurements to evaluate the effect of the attention mechanism and KG: Attention Accuracy (AAcc), Area Under the precision-recall Curve (AUC), AUC on Valid sentences (AUCV) and AUC on Noisy sentences (AUCN).", "AAcc measures the attention module's ability to assign higher weights to valid sentences than to noisy sentences.", "Given a non-disturbing bag (a bag containing both valid and noisy sentences) $B_i = \{(s_j, h_i, t_i, y_i, z_j)\}$ and the predicted probability distribution $p_i$, the AAcc of this bag is calculated by $AAcc_i = \frac{\sum_{j=1}^{m} \sum_{k=1}^{m} I(z_j) \, I(1 - z_k) \, I(p_{ij} > p_{ik})}{\sum_{j=1}^{m} I(z_j) \sum_{j=1}^{m} I(1 - z_j)}$ (Eq. 14), where $m = |B_i|$ is the size of $B_i$ and $I(\cdot)$ is an indicator function which returns 1 or 0 if the input is true or false.", "By $\sum_{j=1}^{m} I(z_j) \sum_{j=1}^{m} I(1 - z_j)$, we count how many valid-noisy sentence pairs are contained in $B_i$.", "With $\sum_{j=1}^{m} \sum_{k=1}^{m} I(z_j) I(1 - z_k) I(p_{ij} > p_{ik})$, we count how many pairs show a higher weight on the valid sentence.", "Then the AAcc of the whole data set is computed as $AAcc = (\sum_{i=1}^{n} AAcc_i) / n$, where $n$ is the number of bags in the data set.", "AAcc is designed specifically for non-disturbing bags.", "On disturbing bags, with all sentences noisy or all valid, it is meaningless to evaluate the attention module's performance.", "So in the test/dev sets of our BagRel-Wiki73K, all bags are non-disturbing bags.", "Then, without distraction, the evaluation results can better present how the attention module works.",
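A direct transcription of Eq. 14, with each bag given as (z, p) pairs of gold attention labels and predicted weights (illustrative helper names):

```python
def attention_accuracy(bag):
    """AAcc_i (Eq. 14) for one non-disturbing bag.

    bag: list of (z, p) pairs -- gold attention label z (1 valid, 0 noisy)
         and predicted weight p for each sentence.
    """
    valid = [p for z, p in bag if z == 1]
    noisy = [p for z, p in bag if z == 0]
    correct = sum(1 for pv in valid for pn in noisy if pv > pn)
    return correct / (len(valid) * len(noisy))

def dataset_aacc(bags):
    """Mean AAcc over all (non-disturbing) bags."""
    return sum(attention_accuracy(b) for b in bags) / len(bags)
```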
"AUC is a standard metric to evaluate a DSRE model's performance on a bag-level test set.", "As mentioned in Section 3, an attention-based model's performance on non-disturbing bags relies on three aspects: (1) AAcc, (2) the model's performance on valid sentences, and (3) the model's performance on noisy sentences.", "So we use AUCV and AUCN to measure the second and the third aspects, respectively.", "The difference between AUC and AUCV is that AUC is computed on the original test set $D = \{B_i\}$, while AUCV is the AUC computed on the Valid-only test set $D_v = \{B_i^v\}$.", "Compared with $B_i$, $B_i^v$ has the same label but removes all noisy sentences within it.", "Thus there are no noisy context features in $D_v$, so models can utilize both entity mentions and contexts to achieve a high AUCV.", "On the opposite, AUCN is the AUC computed on the Noise-only test set $D_n = \{B_i^n\}$, where $B_i^n$ removes all valid sentences in $B_i$.", "Since all context features in $D_n$ are noisy, to achieve a high AUCN, models have to ignore context and rely more on mention features to make predictions.", "AUC, AUCV and AUCN range from 0 to 1, and a higher value of the 3 metrics indicates that a model makes better predictions on the whole bag, valid sentences and noisy sentences, respectively.", "To evaluate the effects of attention and KG, we design two straightforward bag-level RE models without the attention module, BRE and BRE+CE.", "By comparing their performance with BRE+ATT (BRE with an attention module) and BRE+KA (BRE with a KG-enhanced attention module), we can gain a better understanding of the roles of ATT and knowledge-enhanced ATT.", "BRE uses BERT (Devlin et al., 2018) as the encoder.", "Specifically, we follow the way described in (Peng et al., 2020; Soares et al., 2019): entity mentions in sentences are highlighted with special markers before and after the mentions.", "Then the concatenation of the head and tail entity representations is used as the representation $s'$.", "Since BRE does not have an attention mechanism, it breaks the bags and computes the loss on each sentence: $L = -\sum_{i=1}^{n} \sum_{j=1}^{|B_i|} \log(P(y_i \mid s_{ij}))$ (Eq. 15), $P(y_i \mid s_{ij}) = \mathrm{softmax}(W_b s'_{ij} + b_b)$ (Eq. 16).", "BRE can be viewed as a special case of BRE+ATT.", "Its attention module assigns all sentences in all bags the same attention weight 1.",
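A sketch of this no-attention training scheme, together with the mean-ensemble inference used at test time (Eq. 17, given next); encode() stands in for the BERT encoder with entity markers, and all names are our own:

```python
import torch
import torch.nn.functional as F

def bre_loss(bags, labels, encode, W_b, b_b):
    """Eqs. 15-16: break bags and sum cross-entropy over all sentences."""
    loss = 0.0
    for bag, y in zip(bags, labels):
        for sent in bag:
            logits = W_b @ encode(sent) + b_b   # Eq. 16 before the softmax
            loss = loss - F.log_softmax(logits, dim=0)[y]
    return loss

def bre_predict(bag, encode, W_b, b_b):
    """Eq. 17: bag prediction = mean of per-sentence distributions."""
    probs = [F.softmax(W_b @ encode(s) + b_b, dim=0) for s in bag]
    return torch.stack(probs).mean(dim=0)
```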
"During inference, given a bag, BRE uses the mean of each sentence's prediction as the whole bag's prediction: $P(y_i \mid B_i) = (\sum_{j=1}^{|B_i|} P(y_i \mid s_{ij})) / |B_i|$ (Eq. 17).", "BRE+CE concatenates an additional feature vector $r_{ht}$ with the BERT output, where $r_{ht}$ is defined based on the entity embeddings of $h$ and $t$.", "The concatenated vector is used as the representation of the sentence and fed into the classification layer.", "We investigate: whether the attention mechanism promotes the RE model's performance; how KG affects the attention mechanism; and whether attention aggravates data sparsity.", "For a fair comparison, all baselines share the same encoding structure as BRE.", "The attention-based models include BRE+ATT, BRE+KA and BRE+SeG, where SeG (Li et al., 2020) is an advanced attention mechanism which achieves state-of-the-art performance on NYT-FB60K.", "Briefly, SeG uses sigmoid instead of softmax to compute the attention weight of each instance in a bag.", "The models without attention are BRE and BRE+CE.", "To check the effect of the noise pattern, we train models on different training sets.", "As a reminder, train$_{x,y}$ is a training set whose NR and DR are $x$ and $y$, respectively.", "We train BRE+ATT on 9 different training sets with different noise patterns.", "As shown in Figure 3, we can see that: (1) a higher noise ratio (NR) makes it harder for the model to highlight valid sentences, leading to a lower attention accuracy (AAcc); (2) a higher disturbing bag ratio (DR) results in lower AAcc, indicating that disturbing bags challenge the attention module.", "Based on these results, we claim that the noise pattern within the training set largely affects the attention module's effectiveness.", "To quantitatively analyze the effect of the attention mechanism, we compare the performance of BRE and BRE+ATT in Table 2, keeping other variables of the model unchanged.", "[Table 2: Test results of models trained on different training sets (Model: AUC / AAcc / AUCV / AUCN). BRE-train$_{1/2,0}$: .910 / NA / .932 / .850; BRE+ATT-train$_{1/2,0}$: .878 / .881 / .941 / .434; BRE+ATT-train$_{1/2,1/2}$: .897 / .751 / .932 / .711; BRE+ATT-train$_{1/2,1}$: .896 / .713 / .925 / .759.]", "Particularly, a higher AUCV indicates a stronger ability of the model itself in an ideal setting without any noise, and a higher AUCN indicates higher robustness of the model to noise.", "Surprisingly, when using the same training set train$_{1/2,0}$, the AUC of the attention-enhanced model is lower than the AUC of the model without attention (0.878 vs. 0.910).", "In addition, BRE+ATT has its lowest AUC using train$_{1/2,0}$, which has no disturbing bags.", "The highest AAcc (0.881) also suggests that the attention module does effectively select valid sentences.",
"Why does the most effective attention module lead to the worst performance?", "The reason is that BRE+ATT-train$_{1/2,0}$ has a much lower AUCN, which indicates that it is less robust to noisy sentences.", "Is it true that an effective attention module hurts the model's robustness to noise?", "This is actually against our intuition.", "To answer it, we draw Figure 4 by assigning fixed attention weights to sentences during training.", "Specifically, each bag in train$_{1/2,0}$ has a valid sentence and a noisy sentence, and we assign a fixed attention weight $\alpha$ to the valid one and $1 - \alpha$ to the noisy one, instead of computing the weights with an attention module.", "Then we test the resulting model's AUCN and AUCV performance.", "We can see that when the valid sentences receive higher attention weights, the AUCV curve rises slightly, indicating that the model's performance indeed gets enhanced.", "Meanwhile, the AUCN curve goes down sharply.", "This demonstrates that effective attention weakens the model's robustness to noise.", "The reason is that a model with a high-performance attention module prefers to utilize context information instead of entity mention features.", "Thus, it usually fails if most contexts are noisy.", "Thus we can explain the results in Table 2: train$_{1/2,0}$ has the highest AAcc, indicating that it assigns very low weights to noisy sentences.", "Thus the gain in AUCV cannot make up for the loss in AUCN, resulting in a worse AUC.", "Although attention is effective at selecting valid sentences, it has an underlying drawback: it might hurt the model's ability to predict based on entity mention features, which are important in RE tasks (Li et al., 2020; Peng et al., 2020), leading to worse overall performance.", "To measure KG's effect when combined with the attention mechanism, we compare the results of KA with ATT, while keeping other parts of the model unchanged, as shown in Table 3.", "When trained on train$_{1/2,0}$, the KG-enhanced model (KA-train$_{1/2,0}$) has a lower AAcc than the model without KG (ATT-train$_{1/2,0}$) (0.857 vs. 0.881), while its AUC is higher (0.932 vs. 0.878).", "This is because the KA version has a higher AUCN (0.560) and comparable AUCV and AAcc.",
"Thus, the KG-enhanced model achieves better performance on noisy bags, leading to better RE performance.", "In addition, comparing Table 2 and Table 3, KA shows lower AAcc and higher AUCN than ATT on all three training sets.", "This also demonstrates that KG does not promote the model's performance by improving the attention module's accuracy, but by enhancing the encoder and classification layer's robustness to noisy sentences.", "This makes sense because the information from the KG focuses on entities instead of contexts.", "By incorporating the KG, the model relies more on entity mention features instead of noisy context features, and thus becomes better at classifying noisy sentences.", "Moreover, comparing BRE+KA$_{rand}$'s performance with BRE+KA on train$_{1/2,0}$, we can observe that after incorporating entity embeddings learned from a random KG, BRE+KA$_{rand}$ has a much lower attention accuracy.", "This indicates that misleading knowledge would hurt the attention mechanism.", "The attention module assigns low weights to part of the training sentences.", "When training data is insufficient, not making full use of all training examples could aggravate the data sparsity issue.", "Thus we compare the performance of models trained on subsets of train$_{1/2,1/2}$.", "From Figure 5, we can see that along with the decreasing size of the training data, the performance gap between BRE+ATT and BRE+CE becomes larger.", "This is because the latter fully utilizes every example by assigning the same weight 1 to all sentences.", "We also check each model's attention weights.", "BRE+SeG assigns all sentences weights > 0.9, so its performance drop is similar to that of the model without attention.", "Thus, we claim that the traditional attention mechanism could exacerbate the data sparsity issue when training data is insufficient.", "This motivates a better attention mechanism for few-shot settings.", "We leave it for future work.", "From the results in Tables 2 and 3, we can see that the performance of BRE+CE is stable when the ratio of disturbing bags changes.", "In comparison, BRE+ATT and BRE+KA show varying results across different training sets.", "On train$_{1/2,1}$, which has the most disturbing bags, BRE+CE outperforms BRE+ATT and BRE+KA, demonstrating that BRE+CE could be a competitive method for bag-level DSRE.", "Furthermore, the model's improvement on NYT-FB60K is promising (around 13% AUC).", "This is due to two reasons: (1) NYT-FB60K is a noisy dataset containing prevalent disturbing bags, which is similar to our synthesized datasets; (2) NYT-FB60K is highly imbalanced and most relation types only have limited training data, while all relation types in our balanced datasets have the same number of training examples; thus BRE+CE and BRE achieve much higher improvements on NYT-FB60K compared with the synthesized datasets.", "In conclusion, the high performance not only validates our claim that the attention module may not perform well on noisy and insufficient training data, but also verifies that our thorough analysis of attention and KG has practical significance.", "From the results in Table 5, we provide a direct comparison between models with KG (BRE+KA, BRE+CE) and models without KG (BRE+ATT, BRE).", "Apparently, both methods of utilizing KG (combined with attention, and concatenated as additional features) outperform the methods not using KG.", "This demonstrates that the prior knowledge from a KG is beneficial for the relation extraction task.", "Beyond our naive BRE+CE, we expect that a carefully designed mechanism incorporating KG can lead to higher improvement.",
"We leave this for future work.", "In conclusion, we construct a set of datasets and propose a framework to quantitatively evaluate how the attention module and KG work in bag-level RE.", "Based on the findings, we demonstrate the effectiveness of a straightforward solution to this task.", "Experimental results well support our claim that the accuracy of the attention mechanism depends on the noise pattern of the training set.", "In addition, although it effectively selects valid sentences, the attention mechanism could harm the model's robustness to noisy sentences and aggravate the data sparsity issue.", "As for KG's effects on attention, we observe that it promotes the model's performance by enhancing its robustness with external entity information, instead of improving attention accuracy.", "In the future, we are interested in developing a more general evaluation framework for other tasks, such as question answering, improving the attention mechanism to be robust to noise and insufficient data, and designing an effective approach to incorporate KG knowledge to guide model training.", "This research/project is supported by the NExT Research Centre.", "This research was also conducted in collaboration with SenseTime.", "This work is partially supported by A*STAR through the Industry Alignment Fund Industry Collaboration Projects Grant, by NTU (NTUACE2020-01) and Ministry of Education (RG96/20), and by the National Research Foundation, Prime Minister's Office, Singapore under its Energy Programme (EP Award No. NRF2017EWT-EP003-023) administrated by the Energy Market Authority of Singapore." ]
[ "abstain", "abstain", "objective", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "objective", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "result", "objective", "other", "other", "other" ]
[ "Aspect terms extraction and opinion terms extraction are two key problems of fine-grained Aspect Based Sentiment Analysis (ABSA).", "The aspect-opinion pairs can provide a global profile about a product or service for consumers and opinion mining systems.", "However, traditional methods can not directly output aspect-opinion pairs without given aspect terms or opinion terms.", "Although some recent co-extraction methods have been proposed to extract both terms jointly, they fail to extract them as pairs.", "To this end, this paper proposes an end-to-end method to solve the task of Pair-wise Aspect and Opinion Terms Extraction (PAOTE).", "Furthermore, this paper treats the problem from a perspective of joint term and relation extraction rather than under the sequence tagging formulation performed in most prior works.", "We propose a multi-task learning framework based on shared spans, where the terms are extracted under the supervision of span boundaries.", "Meanwhile, the pair-wise relations are jointly identified using the span representations.", "Extensive experiments show that our model consistently outperforms state-of-the-art methods.", "Fine-grained aspect-based sentiment analysis (ABSA) or opinion mining is a field of study that analyzes people's detailed insights towards a product or service.", "Aspect terms (AT) extraction and opinion terms (OT) extraction are two fundamental subtasks in ABSA (Pang and Lee., 2008; Liu, 2012).", "Aspect terms, also named as opinion targets, are the word sequences in the sentence describing attributes or features of the targets.", "Opinion terms, sometimes called opinion words, are those expressions carrying subjective attitudes.", "For example, Both authors contributed equally to this research.", "co-extraction and pair extraction of AT and OT.", "in the sentence Otherwise, this place has great service and prices and a nice friendly atmosphere , the aspect terms are service , prices and atmosphere , and the opinion terms are great and nice friendly .", "Recently, a new research focus, which aims at co-extracting the aspect and opinion terms (Wang et al., 2016, 2017; Li and Lam, 2017; Wang and Pan, 2018; Yu et al., 2019), has drawn increasing attention in both academia and industry.", "Such methods use joint models and have achieved great progress on both subtasks.", "However, the extracted AT and OT are not in pairs, and the corresponding relations between them are not well extracted.", "As the example sentence shown in Figure 1, ( service, great ), ( prices, great ) and ( atmosphere, nice friendly ) are three aspect-opinion pairs.", "In contrast, the co-extraction methods can only output the AT set { service, prices, atmosphere } and the OT set { great, nice friendly } jointly.", "The aspect-opinion pairs can deploy more fine-grained sentiment analysis for review text and will benefit many downstream applications, such as opinion summarization and product profiling.", "By referring to the aspect-opinion pairs in a review sentence, customers can get a glimpse of the pros and cons of a product or service in a short time.", "Based on the promising results in previous AT and OT extraction, one possible solution for aspect-opinion pair extraction is to decouple the whole task into two subtasks.", "Firstly, all aspect terms need to be extracted from the sentences.", "Then, the OT corresponding to each AT can be extracted using a Target-oriented Opinion Words Extraction (TOWE) method (Fan et al., 2019).", "Though this two-stage pipeline approach can 
extract aspect-opinion pairs, it will suffer from error propagation and the pairs extracting performance will rely heavily on the accuracy of AT extraction.", "To this end, an end-to-end method that can automatically extract AT and OT as pairs is essential for fine-grained sentiment analysis and opinion mining.", "Considering the significance of the aspect-opinion pairs in review sentences, this paper targets at a new subtask for fine-grained ABSA, named PAOTE (Pair-wise Aspect and Opinion Terms Ex-traction).", "Given a review sentence, the objective of PAOTE is to extract all the (AT, OT) pairs.", "Different from the traditional co-extraction task of AT and OT, PAOTE outputs AT and OT in pairs while the co-extraction task only outputs them in separate sets as shown in Figure 1. Most of the previous AT and OT extraction methods formulate the task as a sequence tagging problem (Wang et al., 2016, 2017; Wang and Pan, 2018; Yu et al., 2019), specifically using a 5-class tag set: { BA (beginning of aspect), IA (inside of aspect), BP (beginning of opinion), IP (inside of opinion), O (others) } .", "However, the sequence tagging methods suffer from a huge search space due to the com-positionality of labels for extractive ABSA tasks, which has been proven in (Lee et al., 2017b; Hu et al., 2019).", "And as the example in Figure 1, the sequence tagging methods get into trouble when there exist one-to-many or many-to-one relations between AT and OT in the sentence.", "In this paper, we propose a span-based multi-task framework to jointly extract both the AT/OT and the pair-wise relations.", "Motivated by prior works (Lee et al., 2017a; Luan et al., 2018), the proposed framework firstly learns word-level representations using a base encoder and then enumerates all possible spans on the input sentence.", "By sharing the generated span representations, the AT/OT can be extracted under the supervision of span boundaries and class labels.", "Meanwhile, the pair-wise relations can be identified by computing the span-span correspondence.", "We further design different encoder structures for the framework.", "To validate the effectiveness of our method, we conduct a serial of experiments based on public datasets.", "The comparison results show that the proposed framework can efficiently avoid the cascading errors between tasks and outperforms the state-of-the-art pipeline and joint methods.", "1) We propose an end-to-end model for a new task PAOTE.", "To the best of our knowledge, it is the first end-to-end model that can jointly extract the AT/OT and the pair-wise relations between them.", "2) We design a novel span-based multi-task neural network for PAOTE.", "It can overcome the drawbacks of sequence tagging methods by taking advantage of the span-level information.", "And the mutual impact between AT/OT and their pair-wise relations can be identified in this model.", "3) We conduct extensive experiments and the results show that our proposed model outperforms the state-of-the-art methods.", "For fine-grained ABSA, the aspect terms extraction and opinion terms extraction are two basic subtasks, which has been studied in numerous prior works (Hu and Liu, 2004; Popescu and Etzioni, 2005; Wu et al., 2009; Li et al., 2010; Qiu et al., 2011; Liu et al., 2012, 2013, 2015; Yin et al., 2016; Xu et al., 2019; Devlin et al., 2019).", "More recently, many works concentrate on co-extracting AT and OT using joint models.", "Most of the works treat the task as a sequence tagging problem.", "Wang et al. 
proposed a joint Recursive Neural Conditional Random Fields (RNCRF) model by using the dependency parse tree to capture dual-propagation among AT and OT (Wang et al., 2016).", "Then they extended their research and constructed a Recursive Neural Structural Correspondence Network (RN-SCN) for cross-domain aspect and opinion terms co-extraction (Wang and Pan, 2018).", "Another outstanding work, Coupled Multi-Layer Attentions (CMLA) network, learns attentions for AT and OT (Wang et al., 2017).", "However, all these co-extraction methods do not consider the AT and OT as pairs.", "For the pair-wise aspect and opinion terms extraction, an obvious solution is a two-stage pipeline strategy.", "The first stage is to extract aspect terms.", "Li et al. proposed a state-of-the-art model that can extract aspect terms by using the truncated history attention and the selective transformation network (Li et al., 2018).", "Then in the second stage, the target-oriented opinion terms can be extracted with the given aspect terms.", "This subtask has been proposed in a recent work (Fan et al., 2019), where they develop a target-fused sequence tagging method.", "However, the opinion detection heavily depends on the extracted aspect accuracy, which suffers from error propagation.", "Our framework is the first to joint perform the two subtasks into an end-to-end model.", "Moreover, our method does not need any external lexicons or parsers and can effectively deal with multiple relations.", "Joint Entity and Relation Extraction (JERE), which aims to detect entity mentions and their semantic relations simultaneously in text, is an important task in information extraction.", "The earliest works mostly depend on feature engineering approaches (Kate and Mooney, 2010; Hoffmann et al., 2011; Li and Ji, 2014; Miwa and Sasaki, 2014).", "In recent studies, neural models for JERE have shown supe-rior performance (Katiyar and Cardie, 2016; Zhang et al., 2017; Miwa and Bansal, 2016; Zheng et al., 2017).", "Moreover, neural multi-task learning has been shown effective in enhancing the interaction between entities and relations.", "In this paper, we adopt a JERE paradigm to solve the PAOTE task and develop a multi-task framework by extending previous unified setups (Luan et al., 2018) and end-to-end span-based models (Lee et al., 2017a, 2018).", "Given an input sentence S = { w 1 , w 2 , ..., w N } of N words, the PAOTE task is to extract a set of all the aspect terms AT = { at 1 , at 2 ,", ".., at i } , a set of all the opinion terms OT = { ot 1 , ot 2 , ..., ot j } and a set of all the (AT, OT) pairs P = { ( at m , ot n ) , ... 
} from the sentence.", "Note that the at m AT and the ot n OT could be a single word or a phrase.", "Inspired by JERE methods, we process the task in a span-based term-relation joint extraction scheme rather than as a sequence tagging problem.", "Firstly, all possible spans SP = { s 1 , s 2 , ..., s K } are enumerated from the given sentence, where each span is a slice (up to a reasonable length l s ) of the input sentence.", "Based on the candidate spans, the outputs are two folds:", "1) the term types T for all spans SP , aiming at the AT/OT recognition;", "2) the pair-wise relation R for all pair of spans SP SP , aiming at the (AT, OT) pair identification.", "Formally, the two subtasks are defined as follows: Term Recognition is to assign a unique term label T { A, O, null } to each candidate span s c , where A denotes s c AT , O denotes s c OT and null denotes that the span does not belong to AT or OT .", "Pair-wise Relation Identification is to assign a binary label R { T rue, F alse } to each ordered span pair ( s c 1 , s c 2 ) .", "Note that the pair-wise relation is defined as a directed relation which always starts from an aspect term and points to an opinion term.", "So in this formulation, s c 1 acts as AT and s c 2 acts as OT.", "T rue denotes that s c 1 and s c 2 are correctly associated.", "The overall architecture of our span-based multitask framework ( SpanMlt ) is shown in Figure 2. Given an input sentence, a base encoder is adopted to learn contextualized word representations.", "Then, a span generator is deployed to enumerate all possible spans, which are represented based on the hidden outputs of the base encoder.", "For the multitask learning setup, the span representations are shared for two output scorers.", "The term scorer is to assign the term label with the highest score to each span.", "And the relation scorer is to evaluate the pair-wise correspondence between every two spans and assign a binary label to each span pair.", "Given an input sentence { w 1 , w 2 , ..., w N } , a span s i = { w START( i ) , ..., w END( i ) } is a single word or phrase with a starting index START( i ) and an ending index END( i ) .", "And the maximum length of s i is l s : 1 START( i ) END( i ) N (1) END( i ) START( i ) < l s (2) The span generator is a component enumerating all possible spans to generate the candidates for aspect or opinion terms.", "Then each span will be represented by using the contextualized word representations learned from various base encoders.", "Noting that SpanMlt is a general framework, we can potentially leverage any network as the encoder to learn word-level representations, which would be shared by higher-level modules.", "In this paper, we implement two different encoders.", "One is the Figure 2: The overall architecture of the span-based multi-task framework, which alternatively takes a BERT structure or a BiLSTM structure as the base encoder to learn representations for input words and candidate spans.", "BiLSTM with pre-trained word embeddings, which has been widely used in numerous neural-based models for NLP tasks.", "The other is BERT (Devlin et al., 2018), a pre-trained bidirectional transformer encoder which has achieved state-of-the-art performances across a variety of NLP tasks.", "For the BiLSTM encoder, the input vectors { x 1 , x 2 , ..., x N } are generated for the word sequence firstly.", "Motivated by (Lee et al., 2017a; Luan et al., 2018), two strategies are involved in building the vector representations:", "1) pre-trained word 
embeddings and 1-dimension CNN over characters;", "2) fixed ELMo embeddings.", "Then, a bidirectional LSTM network is used to encode each word x t : h t = [ LSTM( x t ); LSTM( x t )] , t [1 , N ] (3) where h t is the concatenated hidden output of BiLSTM.", "To better learn vector representations combined with the syntactic head information for each candidate span, we further employ a self-attention layer over the word vectors in the span.", "Following previous works (Yang et al., 2016; Zhou et al., 2016), the attention is implemented with a feed forward neural network (FFNN): u t = FFNN ( h t , ) (4) i,t = exp ( u t ) END( i ) (cid:80) k =START( i ) exp ( u k ) (5) h i = END( i ) (cid:88) k =START( i ) i,t u t (6) where is the parameters for FFNN, and h i is a weighted sum of word vectors in span s i .", "Therefore, based on the BiLSTM encoder, the final representation p i for span s i can be concatenated as: p i = [ h START ( i ); h END ( i ); h i ; ( i )] (7) where ( i ) is the feature vector encoding the size of the span s i .", "For the BERT encoder, the input sequence is generated by concatenating a [CLS] token, the original word sequence, and a [SEP] token.", "Each token is converted into an input vector x t by summing the token, segment, and position embeddings.", "Assume BERT( ) is the base (or fine-tuned) BERT model.", "The hidden representation for each token can be obtained: h t = BERT( x t ) (8) Then the span vector representation p i is directly generated by h START( i ) and h END( i ) : p i = [ h START ( i ); h END ( i )] (9) Unlike the BiLSTM encoder, we do not use the self-attention or the feature vector for the BERT encoder.", "Since the transformer of BERT has already utilized the attention mechanism and can learn suf-ficient contextualized information.", "And from our preliminary investigations and experiments, most complicated structures may damage the availability of BERT architecture and increase the training difficulty, which will be discussed in Section 4. 
3.5 Objective To construct the loss function for joint training, we use FFNNs over shared span representations to compute the scores of how likely a span s i has a term label y T i , and how likely a span pair ( s i , s j ) has a relation label y R i,j , respectively.", "For the term score, each span representation p i is fed into an FFNN, and then is normalized with the softmax function to output the probability of the term label:", "Thus, the loss function for the term extraction subtask can be formulated using the span-level cross-entropy error between the predicted distribution P ( y T i | s i ) and the gold distribution P ( y T i | s i ) :", "For the pair-wise relation score between two spans ( s i , s j ) , we first compute the probability that a span is in a relation: f R s = FFNNR ( p i , R ) (13)", "In order to reduce the number of generated pairs, we sort the spans according to their scorers f R s i and only the topk spans are selected to be paired.", "Then, to measure the correspondence between two spans, the representation p i for span s i , the representation p j for span s j , and an element-wise multiplication p i (cid:12) p j are concatenated as the input of FFNN: f R i,j = FFNNR ([ p i ; p j ; p i (cid:12) p j ] , R ) (14) The span scores and the correspondence score are summed and fed into the output softmax function: P ( y R i,j | ( s i , s j )) = Softmax( f R s i + f R s j + f R i,j ) (15) Thus, the loss function for the pair-wise relation extraction subtask can be formulated using the pair-level cross-entropy error between the predicted distribution P ( y R i,j | ( s i , s j )) and the gold distribution P ( y R i,j | ( s i , s j )) : Loss ( R ) = k (cid:88) i =1 k (cid:88) j =1 P ( y R i,j | ( s i , s j )) log ( P ( y R i,j | ( s i , s j ))) (16) Finally, losses from the term scorer and the relation scorer are combined as the training objective of the SpanMlt framework: J ( ) = T Loss ( T ) + R Loss ( R ) (17) where T and R are two hyper-parameters to balance the two tasks.", "We evaluate our framework on two sets of public datasets, which are both in LAPTOP and RESTAURANT domains from Semeval 2014 Task 4, Semeval 2015 Task 12 and Semeval 2016 Task 5. 
One is provided by (Fan et al., 2019), where the AT and OT pairs are labeled.", "The other is provided by (Wang et al., 2017, 2016), where only the aspect terms and opinion terms are labeled.", "Since we are the first to study the joint extraction task of pair-wise AT and OT, there is no end-to-end model available in the literature for comparison.", "To better evaluate our method, we first compare the AT/OT extraction performance with several widely used sequence tagging models constructed with different encoder structures.", "Then we compare with three joint models, which have achieved state-of-the-art results in AT&OT co-extraction.", "To evaluate the extraction of (AT, OT) pairs, we further implement a pipeline approach, HAST+TOWE.", "Moreover, since we formulate our problem as a joint term and relation extraction task, we also compare with a joint entity and relation extraction method, JERE-MHS.", "These baselines are introduced as follows: BiLSTM+CRF: a sequence tagging method with a BiLSTM network built on top of pre-trained word embeddings, followed by a CRF output layer to perform BIO classification.", "BERT+CRF: a sequence tagging method based on a BERT encoder.", "The output hidden states of the input words are taken as the features for the CRF.", "BERT+BiLSTM+CRF: a sequence tagging method based on a BERT encoder.", "The output hidden states of the input words are fed into a BiLSTM structure, followed by an output CRF layer.", "RNCRF: a joint model of recursive neural network and CRF, proposed by (Wang et al., 2016) for single-domain AT and OT extraction.", "CMLA: a joint model of multi-layer attentions, proposed by (Wang et al., 2017).", "GMTCMLA: a global inference model based on CMLA, proposed by (Yu et al., 2019).", "RNSCN: a joint model proposed by (Wang and Pan, 2018) for cross-domain aspect and opinion terms extraction.", "HAST+TOWE (pipeline): a pipeline approach where the AT are first detected using a model proposed by (Li et al., 2018).", "Then, given the predicted AT, the OT are extracted using a recent TOWE method (Fan et al., 2019).", "In this way, the pair-wise relation between AT and OT can be established.", "JERE-MHS: a model for joint entity-relation extraction, proposed by (Bekoulis et al., 2018).", "Although there are a number of complicated models for JERE, few works can simultaneously classify the entity types and the relation types.", "This method stands out as one that is appropriate for solving our PAOTE task.", "For the BiLSTM encoder, we use the 300d GloVe word embeddings pre-trained on 840 billion tokens of unlabeled data.", "We use a 3-layer BiLSTM with 100-dimensional hidden states.", "The 8-dimensional char embeddings are randomly initialized.", "For the character CNN, the filter size is 50 with window sizes of 3, 4 and 5. 
The ELMo embeddings, pre-trained by a 3-layer BiLSTM with 1024 hidden states, are fixed and not fine-tuned during the training stage.", "We use 0.4 dropout for the BiLSTMs and 0.5 dropout for the embeddings.", "The FFNNs are 50-dimensional with 2 hidden layers.", "The learning rate is set to 0.005 for the Adam optimizer.", "For the BERT encoder, we fine-tune the pre-trained model on the corresponding train sets (including 15res and 16res) to get the domain-specific BERT finetune models, for LAPTOP and RESTAURANT respectively.", "The maximum sequence length is 512 with a batch size of 8.", "The FFNNs are 512-dimensional with a single hidden layer.", "The learning rate is set to 2e-5 for the Adam optimizer.", "The maximum length of generated spans is set to 8, and the top 40% of spans are candidates for pairing.", "λ_T and λ_R are both set to 1.0.", "We randomly split 10% of the train sets as dev sets for tuning the hyper-parameters.", "Note that all the baseline methods are implemented using their publicly released source code.", "All the compared models are trained with their best settings, and the results on the test sets are reported for the checkpoints that achieve the best performance on the dev sets.", "We report F1 scores that measure the performance of our model and all the compared methods on the three subtasks: AT extraction, OT extraction, and pair-wise relation extraction.", "An extracted AT or OT is regarded as a correct prediction when the boundaries of the span are identical to the ground truth and the term label is accurately assigned.", "An extracted pair-wise relation is correct only when both AT and OT are accurately identified and the relation label is accurately predicted.", "The main results are shown in Table 1. Our SpanMlt framework consistently achieves the best scores, both for the AT/OT extraction task and the pair-wise relation extraction task.", "For AT/OT extraction, the performance of sequence tagging methods is not satisfactory, and the BERT-based models perform worst among all these methods.", "This suggests that BERT may not work well when the dataset for fine-tuning is small.", "(Table 3: Comparisons for SpanMlt with different base encoders; F1 on 14lap / 14res / 15res / 16res, each reporting AT, OT, Pair. SpanMlt-BERT base: 80.41, 78.12, 62.88 / 84.46, 84.07, 72.06 / 75.12, 78.14, 60.48 / 79.38, 84.13, 67.96. SpanMlt-BERT finetune: 80.78, 79.71, 65.75 / 84.26, 84.11, 72.72 / 77.71, 78.47, 61.06 / 80.95, 84.92, 69.58. SpanMlt-BiLSTM: 81.30, 77.58, 64.41 / 83.02, 83.42, 73.80 / 80.14, 76.48, 59.91 / 82.44, 83.87, 67.72. − attention: 78.69, 76.83, 62.88 / 82.55, 81.22, 71.97 / 79.48, 75.12, 59.22 / 81.90, 83.50, 67.21. − char embeddings: 75.22, 71.09, 56.20 / 76.06, 78.90, 64.20 / 79.01, 74.41, 59.06 / 78.85, 81.55, 64.17. SpanMlt-BiLSTM-ELMo: 84.51, 80.61, 68.66 / 87.42, 83.98, 75.60 / 81.76, 78.91, 64.68 / 85.62, 85.33, 71.78.)", "The AT and OT co-extraction models perform much better than sequence tagging methods, indicating that the interactions between AT and OT are significant for term extraction.", "However, all these joint models fail to associate AT and OT as pairs.", "For pair-wise AT/OT extraction, the HAST+TOWE pipeline method outperforms most other models on aspect detection, but its F1 scores for opinion extraction and pair extraction are much lower than those of SpanMlt, which is primarily due to the error propagation.", "Another joint entity and relation extraction method, namely JERE-MHS, performs worse than HAST for aspect extraction, but better than TOWE for opinion extraction.", "To evaluate the efficacy of SpanMlt on separate AT or OT extraction more intuitively, we further compare with two state-of-the-art models on the larger public datasets from (Wang 
et al., 2016, 2017), which have no (AT, OT) pairs labeled.", "Table 2 shows that our SpanMlt also achieves comparable results.", "The minor gap is because there exist some sentences in this dataset with only AT or OT and without pair-wise relations.", "This leads our method to fail to capture the impact of the pair-wise relations.", "Base Encoders.", "To further investigate the efficacy of different base encoders for our framework, namely the BiLSTM encoder and the BERT encoder, we conduct experiments as shown in Table 3. The BiLSTM encoder with ELMo embeddings performs the best, which indicates the importance of well-initialized input embeddings.", "When using pre-trained GloVe embeddings for the BiLSTM encoder, the results are also satisfactory.", "An ablation study on the two key components of the BiLSTM encoder, the attention mechanism and char embeddings, suggests that both components are helpful for improving the performance.", "The BERT base encoder performs better in OT extraction but is inferior to the BiLSTM without ELMo in AT extraction.", "By using the BERT finetune model, the performance is improved, which indicates that introducing domain-specific information can help BERT learn better contextualized word representations.", "(Figure 3: F1 curves on the 14lap dataset for the two tasks, using the base BERT model or fine-tuned BERT models with increasing training steps.) Figure 3 shows", "F1 curves with increasing training steps for fine-tuning BERT on our 14lap train set.", "We can see that the score first increases and reaches its highest value at 5000-6000 steps.", "But then it decreases as the number of steps increases.", "This result demonstrates that although the domain-specific information is useful, too many steps of fine-tuning the pre-trained BERT models may not benefit the downstream tasks.", "Multi-task Setup.", "We evaluate the effect of multi-task learning for the term extraction subtask and the pair-wise relation extraction subtask defined in our SpanMlt framework.", "Table 4 reports the F1 scores for an ablation study on the 14lap test set.", "It is observed that the performance improves when learning the two tasks jointly compared with learning each single task.", "In addition, to investigate the balance between the two subtasks for multi-task learning, we also draw the F1 curves when adjusting the loss weights λ_T and λ_R, as shown in Figure 4. By varying λ_T / λ_R, we can see that the model attains the best performance at 1.00 for AT/OT extraction and 1.25 for pair-wise relation extraction.", "Nevertheless, our multi-task framework is relatively robust when varying the weight settings for the two subtasks.", "Parameter Sensitivity.", "Figure 5 shows F1 scores with different maximum span lengths l_s and different top-k values of candidate spans to generate pairs on the 14lap test set.", "We can see that the F1 scores first increase as l_s becomes larger.", "But the growth slows when the maximum span length is larger than 8.", "This indicates that a too-small l_s could not include all the useful words needed to generate spans with accurate boundaries.", "Nevertheless, the extraction performance is not sensitive to the maximum span length.", "For example, the difference between 8 and 20 is not statistically significant.", "For the number of candidate spans to generate pairs, top-k, we can observe similar trends as for span length.", "A too-small k may cause many correct AT and OT not to be included in the candidate set, while a large k will not improve extraction performance and may cost more training time.", "As mentioned previously, SpanMlt is able to identify one-to-many or many-to-one relationships between aspect and opinion terms.", "To verify this, we pick some examples from the test set of 14lap and show the prediction results of SpanMlt and the pipeline approach HAST+TOWE, as presented in Table 5. In the first two cases, we can see that SpanMlt can correctly assign the same opinion term to two appositive aspect terms.", "In contrast, the pipeline method is less effective when dealing with one-to-many relations, either by missing the correct AT (e.g., updates) or by assigning the incorrect OT (e.g., problems).", "Moreover, we find that our method may sometimes fail to recognize term boundaries (e.g., log into the system in case 3).", "There are also some bad cases due to the fact that our method fails to extract all pairs (e.g., Windows8 and not want in case 4 are missed).", "In this paper, we study a novel task, Pair-wise Aspect and Opinion Terms Extraction (PAOTE).", "We treat this task as a joint term and relation extraction problem and develop a span-based multi-task learning framework (SpanMlt).", "Our framework can effectively learn contextualized information with various base encoders.", "Specifically, we try two different encoders (a BiLSTM encoder and a BERT encoder).", "Then a span generator enumerates all possible spans, and each span is represented based on the outputs of the encoders.", "To jointly optimize the objectives of term extraction and pair-wise relation extraction, the two subtasks share the span representations and the losses are combined.", "The experimental results demonstrate that our SpanMlt significantly outperforms all the compared methods.", "For future work, we will explore pair-wise AT and OT extraction together with aspect category and sentiment polarity classification.", "This research is supported in part by the National Natural Science Foundation of China under Grant 61702500." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "other", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "objective", "objective", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "other" ]
[ "We describe a new semantic parsing setting that allows users to query the system using both natural language questions and actions within a graphical user interface.", "Multiple time series belonging to an entity of interest are stored in a database and the user interacts with the system to obtain a better understanding of the entity's state and behavior, entailing sequences of actions and questions whose answers may depend on previous factual or navigational interactions.", "We design an LSTM-based encoder-decoder architecture that models context dependency through copying mechanisms and multiple levels of attention over inputs and previous outputs.", "When trained to predict tokens using supervised learning, the proposed architecture substantially outperforms standard sequence generation baselines.", "Training the architecture using policy gradient leads to further improvements in performance, reaching a sequence-level accuracy of 88.7% on artificial data and 74.8% on real data.", "Wearable sensors are being increasingly used in medicine to monitor important physiological parameters.", "Patients with type I diabetes, for example, wear a sensor inserted under the skin which provides measurements of the interstitial blood glucose level (BGL) every 5 minutes.", "Sensor bands provide a non-invasive solution to measuring additional physiological parameters, such as temperature, skin conductivity, heart rate, and acceleration of body movements.", "Patients may also self-report information about discrete life events such as meals, sleep, or stressful events, while an insulin pump automatically records two types of insulin interventions: a continuous stream of insulin called the basal rate, and discrete self-administered insulin dosages called boluses.", "The data acquired from sensors and patients accumulates rapidly and leads to a substantial data overload for the health provider.", "To help doctors more easily browse the wealth of generated patient data, we built a graphical user interface (GUI) that displays the various time series of measurements corresponding to a patient.", "As shown in Figure 1, the GUI displays the data corresponding to one day, whereas buttons allow the user to move to the next or previous day.", "While the graphical interface was enthusiastically received by doctors, it soon became apparent that the doctor-GUI interaction could be improved substantially if the tool also allowed for natural language (NL) interactions.", "Most information needs are highly contextual and local.", "For example, if the blood glucose spiked after a meal, the doctor would often want to know more details about the meal or about the bolus that preceded the meal.", "The doctor often found it easier to express their queries in natural language (e.g. show me how much he ate\", did he bolus before that\"), resulting in a sub-optimal situation where the doctor would ask this type of local questions in English while a member of our team would perform the clicks required to answer the question,", "e.g. click on the meal event, to show details such as amount of carbohydrates.", "Furthermore, there were also global questions , such as How often does the patient go low in the morning and the evening\", whose answers would require browsing the entire patient history in the worst case, which would be very inefficient. This motivated us to start work on a new system component that would allow the doctor to interact using both natural language queries and direct actions within the GUI. 
A successful solution to the task described in this paper has the potential for applications in many areas of medicine where sensor data and life events are pervasive. Intelligent user interfaces for the proposed task will also benefit the exploration and interpretation of data in other domains such as experimental Figure 1: GUI window displaying 1 day worth of data. physics, where large amounts of time series data are generated from high-throughput experiments. 2 Task Definition Given an input from the user (a NL query or a direct GUI interaction), the aim is to parse it into a logical form representation that can be run by an inference engine in order to automatically extract the answer from the database. Table 1 shows sample inputs paired with their logical forms. For each input, the examples also show relevant previous inputs from the interaction sequence. In the following sections we describe a number of major features that, on their own or through their combination, distinguish this task from other semantic parsing tasks. 2.1 Time is essential All events and measurements in the knowledge base are organized in time series. Consequently, many queries contain time expressions, such as the relative midnight\" or the coreferential then\", and temporal relations between relevant entities, expressed through words such as after\" or when\".", "This makes processing of temporal relations essential for a good performance.", "Furthermore, the GUI serves to anchor the system in time, as most of the information needs expressed in local questions are relative to the day shown in the GUI, or the last event that was clicked.", "The user can interact with the system 1) directly within the GUI (e.g. mouse clicks); 2) through natural language questions; or 3) through a combination of both, as shown in Examples 1 and 2 in Table 1.", "Although the result of every direct interaction with the GUI can also be obtained using natural language questions, sometimes it can be more conve-Example 1 Click on Exercise event at 9:29am.", "nient to use the GUI directly, especially when all events of interest are in the same area of the screen and thus easy to move the mouse or hand from one to the other.", "For example, a doctor interested in what the patient ate that day can simply click on the blue squares at the top of the bottom pane in Figure 1, one after another.", "Sometimes a click can be used to anchor the system at a particular time during the day, after which the doctor can ask short questions implicitly focused on that region in time.", "An example of such hybrid behavior is shown in Example 2, where a click on a Bolus event is followed by a question about a snack, which implicitly should be the meal right after the bolus.", "Most of the time, doctors have information needs that can be satisfied by clicking on an event shown in the GUI or by asking factual questions about a particular event of interest from that day.", "In contrast, a different kind of interaction happens when the doctor wants to change what is shown in the tool, such as toggling on/off particular time series (e.g. toggle on heart rate\"), or navigating to a different day (e.g. go to next day\", look at the previous day\"). 
Sometimes, a question may be a combination of both, as in What is the first day they have a meal without a bolus?\", for which the expectation is that the system navigates to that day and also clicks on the meal event to show additional information and anchor the system at the time of that meal.", "The user interacts with the system through a sequence of questions or clicks.", "The logical form of a question, and implicitly its answer, may depend on the previous interaction with the system.", "Examples 1 to 3 in Table 1 are all of this kind.", "In example 1, the pronoun that\" in question 2 refers to the answer to question 1. In example 2, the snack refers to the meal around the time of the bolus event that was clicked previously this is important, as there may be multiple snacks that day. In example 3, the adverb then\" in question 5 refers to the time of the event that is the answer of the previous question.", "As can be seen from these examples, sequential dependencies can be expressed as coreference between events from different questions.", "Coreference may also happen within questions, as in question 4 for example.", "Overall, solving coreferential relations will be essential for good performance.", "To train and evaluate semantic parsing approaches, we created two datasets of sequential interactions: a dataset of real interactions (Section 3.1) and a much larger dataset of artificial interactions (Section 3.2).", "We recorded interactions with the GUI in real time, using data from 9 patients, each with around 8 weeks worth of time series data.", "In each recording session, the tool was loaded with data from one patient and the physician was instructed to explore the data in order to understand the patient behavior as usual, by asking NL questions or interacting directly with the GUI.", "Whenever a question was asked, a member of our study team found the answer by navigating in and clicking on the corresponding event.", "After each session, the question Event Types Physiological Parameters : BGL, BasalRate, TemporaryBasal, Carbs, GSR, InfusionSet, AirTemperature, SkinTemperature, HeartRate, StepCount.", "segments were extracted manually from the speech recordings, transcribed, and timestamped.", "All direct interactions (e.g. 
mouse clicks) were recorded automatically by the tool, timestamped, and exported into an XML file.", "The sorted list of questions and the sorted list of mouse clicks were then merged using the timestamps as key, resulting in a chronologically sorted list of questions and GUI interactions.", "Mouse clicks were automatically translated into logical forms, whereas questions were parsed into logical forms manually.", "A snapshot of the vocabulary for logical forms is shown in Table 2, showing the Event Types, Constants, Functions, Predicates, and Commands.", "Every life event or physiological measurement stored in the database is represented in the logical forms as an event object e with 3 major attributes:", "e.type ,", "e.date , and", "e.time .", "Depending on its type, an event object may contain additional fields.", "For example, if", "e.type = BGL , then it has an attribute", "e.value .", "If", "e.type = Meal , then it has attributes", "e.food and", "e.carbs .", "We use e ( i ) to represent the event appearing in the i th previous logical form (LF).", "Thus, to reference the event mentioned in the previous LF, we use e ( 1) , as shown for question Q 5 .", "If more than one event appears in the previous LF, we use an additional index j to match the event index in the previous LF.", "Coreference between events is represented simply using the equality operator,", "e.g. e = e ( 1)", ".The dataset contains logical forms for 237 interactions: 74 mouse clicks and 163 NL queries.", "The number of annotated real interactions is too small for training an effective semantic parsing model.", "To increase the number of training examples, we designed and implemented an artificial data generator that simulates user-GUI interactions, with sentence templates defining the skeleton of each entry in order to maintain high-quality sentence structure and grammar.", "This approach is similar to (Weston et al., 2015), with the difference that we need a much higher degree of variation such that the machine learning model does not memorize all possible sentences, and consequently a much richer template database.", "We therefore implemented a template language with recursive grammar, that can be used to define as many templates and generate as many data examples as desired.", "We used the same vocabulary as for the real interactions dataset.", "To generate contextual dependencies (e.g. event coreference), the implementation allows for more complex combo templates where a sequence of templates are instantiated together.", "A more detailed description of the template language and the simulator implementation is given in (Chen et al., 2019) and Appendix A, together with illustrative examples.", "The simulator was used to generate 1,000 interactions and their logical forms: 312 mouse clicks and 688 NL queries.", "This section describes two baseline models: a standard LSTM encoder-decoder for sequence generation SeqGen (Section 4.1) and its attention-augmented version SeqGen+Att2In (Section 4.2).", "This last model will be used later in Section 5 as a component in the context-dependent semantic parsing architecture.", "As shown in Figure 2, the sequence-generation model SeqGen uses Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) units in an encoder-decoder architecture (Bahdanau et al., 2017; Cho et al., 2014), composed of a bidirectional LSTM for the encoder over the input sequence X and an LSTM for the decoder of the output LF sequence Y .", "We use Y t = y 1 , . . . 
, y t to denote the sequence of output tokens up to position t .", "We use Y to denote the generated logical form.", "The initial state s 0 is created by running the bi-LSTM encoder over the input sequence X and concatenating the last hidden states.", "Starting from the initial hidden state s 0 , the decoder produces a sequence of states s 1 , . . . , s T , using embeddings e ( y t ) to represent the previous tokens in the sequence.", "A softmax is used to compute token probabilities at each position as follows: p ( y t | Y t 1 , X ) = softmax ( W h s t ) (1) s t = h ( s t 1 , e ( y t 1 )) The transition function h is implemented by the LSTM unit.", "This model (Figure 3) is similar to SeqGen , except that it attends to the current input (NL query or mouse click) during decoding.", "Equation 2 defines the corresponding attention mechanism Att2In used to create the context vector d t : e tj = v Ta tanh( W a f j + U a s t 1 ) (2) tj = exp( e tj ) (cid:80) mk =1 exp( e tk ) , d t = c t = n (cid:88) j =1 tj f j Here f j is the j -th hidden states for Bi-LSTM corresponding to x j and tj is an attention weight.", "In Figure 4 we show our proposed semantic parsing model, SP+Att2All+Copy ( SPAAC ).", "Similar to the baseline models, we use a bi-directional LSTM to encode the input and another LSTM as the decoder.", "Context-dependency is modeled using two types of mechanisms: attention and copying .", "The attention mechanism (Section 5.1) is comprised of 3 models: Att2HisIn attending to the previous input, Att2HisLF attending to the previous logical form, and the Att2In introduced in Section 4.2 that attends to the current input.", "The copying mechanism (Section 5.2) is comprised of two models: one for handling unseen tokens, and one for handling coreference to events in the current and previous logical forms.", "t e tk = v Tb tanh( W b r k + U b s t 1 ) (3) tk = exp( e tk ) (cid:80) m 2 l =1 exp( e tl ) , c t = n (cid:88) k =1 tk r k", "e tj = v cT tanh( W c l j + U c s t 1 ) tj = exp( e tj ) (cid:80) nj =1 exp( e tj ) , c t = n (cid:88) j =1 tj", "where l j is the j -th hidden state of the decoder for the previous logical form Y 1 .", "The context vector used in the decoder is comprised of the context vectors from the three attention models Att2In , Att2HisIn and Att2HisLF : d t = concat ( c t , c t , c t ) (5) 5.2 Copying Mechanisms In order to handle out-of-vocabulary (OOV) tokens and coreference (REF) between entities in the current and the previous logical forms, we add two special tokens OOV and REF to the vocabulary.", "Inspired by the copying mechanism in (Gu et al., 2016), we train the model to learn which token in the current input X = { x j } is an OOV by minimizing the following loss: L oov ( Y ) = Y.l (cid:88) t =1 X.l (cid:88) j =1 log p o ( O j | s Xj , s Yt ) (6) where X.l is the length of current input, Y.l is the length of the current logical form, s Xj is the LSTM state for x j and s Yt is the LSTM state for y t , O j { 0 , 1 } is a label indicating whether x j is an OOV.", "We use logistic regression to compute the OOV probability, i.e. 
p_o(O_j = 1 | s_Xj, s_Yt) = σ(w_o⊤ [s_Xj; s_Yt]).", "Similarly, to resolve coreference, the model is trained to learn which entity in the previously generated logical form Y_{−1} = {y_j} is coreferent with the entity in the current logical form by minimizing the following loss: L_ref(Y) = − Σ_{t=1}^{Y.l} Σ_{j=1}^{Y_{−1}.l} log p_r(R_j | s_{Y−1,j}, s_Yt) (7),", "where Y_{−1}.l is the length of the previously generated logical form, Y.l is the length of the current logical form, s_{Y−1,j} is the LSTM state at position j in Y_{−1}, s_Yt is the LSTM state for position t in Y, and R_j ∈ {0, 1} is a label indicating whether y_j is an entity referred to by y_t in the next logical form Y.", "We use logistic regression to compute the coreference probability, i.e. p_r(R_j = 1 | s_{Y−1,j}, s_Yt) = σ(w_r⊤ [s_{Y−1,j}; s_Yt]).", "(Figure 4: Context-dependent semantic parsing architecture.) Finally, we use teacher forcing (Williams and Zipser, 1989) to train the model to learn which token in the vocabulary (including the special tokens OOV and REF) should be generated, by minimizing", "the following token generation loss: L_gen(Y) = − Σ_{t=1}^{Y.l} log p(y_t | Y_{t−1}, X) (8), where Y.l is the length of the current logical form.", "The supervised learning model SPAAC-MLE is obtained by training the semantic parsing architecture from Figure 4 to minimize the sum of the 3 negative log-likelihood losses: L_MLE(Y) = L_gen(Y) + L_oov(Y) + L_ref(Y) (9).", "At inference time, beam search is used to generate the LF sequence (Ranzato et al., 2015; Wiseman and Rush, 2016).", "During inference, if the generated token at position t is OOV, we copy the token from the current input X that has the maximum OOV probability, i.e. argmax_j p_o(O_j = 1 | s_Xj, s_Yt).", "Similarly, if the generated entity token at position t is REF, we copy the entity token from the previous LF Y_{−1} that has the maximum coreference probability, i.e. argmax_j p_r(R_j = 1 | s_{Y−1,j}, s_Yt).", "All models described in this paper are evaluated using sequence-level accuracy, a discrete metric where a generated logical form is considered to be correct if it is equivalent to the ground truth", "logical form.", "This is a strict evaluation measure in the sense that a single wrong token is sufficient to invalidate the entire sequence.", "At the same time, there can be many generated sequences that are correct, e.g. any reordering of the clauses from the ground truth sequence is correct.", "The large number of potentially correct generations can lead MLE-trained models to have sub-optimal performance (Paulus et al., 2017; Rennie et al., 2017; Zeng et al., 2016; Norouzi et al., 2016).", "Furthermore, although teacher forcing (Williams and Zipser, 1989) is widely used for training sequence generation models, it leads to exposure bias (Ranzato et al., 2015): the network has knowledge of the ground truth LF tokens up to the current token during training, but not during testing, which can lead to propagation of errors at generation time.", "Like Paulus et al. (2017), we address these problems by using policy gradient to train a token generation policy that aims to directly maximize sequence-level accuracy.", "We use the self-critical policy gradient training algorithm proposed by Rennie et al. (2017).", "We model the sequence generation process as a sequence of actions taken according to a policy, which takes an action (token y_t) at each step t as a function of the current state (history Y_{t−1}), according to the probability p(y_t | Y_{t−1}).", "The algorithm uses this probability to define two policies: a greedy baseline policy π_b that takes the action with the largest probability, i.e. π_b(Y_{t−1}) = argmax_{y_t} p(y_t | Y_{t−1}); and a sampling policy π_s that samples the action according to the same distribution, i.e. π_s(Y_{t−1}) ∼ p(y_t | Y_{t−1}).", "The baseline policy is used to generate a sequence Y^b, whereas the sampling policy is used to generate another sequence Y^s.", "The reward R(Y^s) is then defined as the difference between the sequence-level accuracy (A) of the sampled sequence Y^s and that of the baseline sequence Y^b.", "The corresponding self-critical policy gradient loss is: L_RL = R(Y^s) L_MLE(Y^s) = (A(Y^s) − A(Y^b)) L_MLE(Y^s) (10). Thus, minimizing the RL loss is equivalent to maximizing the likelihood of the sampled Y^s if it obtains a higher sequence-level accuracy than the baseline Y^b.", "All models are implemented in Tensorflow, using dropout to deal with overfitting.", "For both datasets, 10% of the data is put aside for validation.", "After tuning on the artificial validation data, the feed-forward neural networks' dropout rate was set to 0.5 and the LSTM units' dropout rate was set to 0.3.", "The word embeddings had a dimensionality of 64 and were initialized at random.", "Optimization is performed with the Adam algorithm.", "For each dataset, we use five-fold cross-validation, where the data is partitioned into five folds; one fold is used for testing and the other folds for training.", "The process is repeated five times to obtain test results on all folds.", "We use an early-stopping strategy on the validation set.", "The number of gradient updates is typically more than 20,000.", "All the experiments are performed on a single NVIDIA GTX1080 GPU.", "The models are trained and evaluated on the artificial interactions first.", "To evaluate on real interactions, the models are pre-trained on the entire artificial dataset and then fine-tuned using the real interactions.", "SPAAC-RL is pre-trained with the MLE loss to provide more efficient policy exploration.", "We use sequence-level accuracy as the evaluation metric for all models: a generated sequence is considered correct if and only if all the generated tokens match the ground truth tokens.", "We report experimental evaluations of the proposed models SPAAC-MLE and SPAAC-RL and the baseline models SeqGen and SeqGen+Att2In on the two datasets. (Table 3: Sequence-level accuracy on the 2 datasets, Artificial / Real. SeqGen: 51.8 / 22.2; SeqGen+Att2In: 72.7 / 35.4; SPAAC-MLE: 84.3 / 66.9; SPAAC-RL: 88.7 / 74.8.)", "The results in Table 3 demonstrate the importance of modeling context-dependency, as the two SPAAC models outperform the baselines on both datasets.", "The RL model also obtains substantially better accuracy than the MLE model.", "The improvement in performance over the MLE model for the real data is statistically significant at p = 0.05 in a one-tailed paired t-test.", "Analysis of the generated logical forms revealed that one common error made by SPAAC-MLE is the generation of incorrect event types.", "Some of these errors are fixed by the current RL model.", "However, there are instances where even the RL-trained model outputs the wrong event type.", "(Example: \"Does he always get some sleep around 4:30pm?\") By comparing 
4:30pm?", "the sampled logical forms Y s and the generated baseline logical forms Y b , we found that sometimes the sampled tokens for event types are the same as those in the baseline.", "An approach that we plan to investigate in future work is to utilize more advanced sampling methods to generate Y s , in order to achieve a better balance between exploration and exploitation.", "Question Answering has been the topic of recent research (Yih et al., 2014; Dong et al., 2015; Andreas et al., 2016; Hao et al., 2017; Abujabal et al., 2017; Chen and Bunescu, 2017).", "Semantic parsing, which maps text in natural language to meaning representations in formal logic, has emerged as an important component for building QA systems, as in (Liang, 2016; Jia and Liang, 2016a; Zhong et al., 2017).", "Context-dependent processing has been explored in complex, interactive QA (Harabagiu et al., 2005; Kelly and Lin, 2007) and semantic parsing (Zettlemoyer and Collins, 2009; Artzi and Zettlemoyer, 2011; Iyyer et al., 2017; Suhr et al., 2018; Long et al., 2016).", "Although these approaches take into account sequential dependencies between questions or sentences, the setting in our work has a number of significant distinguishing features, such as the importance of time data is represented naturally as multiple time series of events and the anchoring on a graphical user interface that also enables direct interactions through mouse clicks and a combination of factual queries and interface commands.", "Dong and Lapata (2016) use an attention-enhanced encoder-decoder architecture to learn the logical forms from natural language without using hand-engineered features.", "Their proposed Seq2Tree architecture can capture the hierarchical structure of logical forms.", "Jia and Liang (2016b) train a sequence-to-sequence RNN model with a novel attention-based copying mechanism to learn the logical forms from questions.", "The copying mechanism has been investigated by Gu et al. (2016) and Gulcehre et al. 
(2016) in the context of a wide range of NLP applications.", "These semantic parsing models considered sentences in isolation.", "In contrast, generating correct logical forms in our task required modeling sequential dependencies between logical forms.", "In particular, coreference is modeled between events mentioned in different logical forms by repurposing the copying mechanism originally used for modeling out-of-vocabulary tokens.", "We introduced a new semantic parsing setting in which users can query a system using both natural language and direct interactions (mouse clicks) within a graphical user interface.", "Correspondingly, we created a dataset of real interactions and a much larger dataset of artificial interactions.", "The correct interpretation of a natural language query often requires knowledge of previous interactions with the system.", "We proposed a new sequence generation architecture that modeled this context dependency through multiple attention models and a copying mechanism for solving coreference.", "The proposed architecture is shown to outperform standard LSTM encoder-decoder architectures that are context agnostic.", "Furthermore, casting the sequence generation process in the framework of reinforcement learning alleviates the exposure bias and leads to substantial improvements in sequence-level accuracy.", "The two datasets and the implementation of the systems presented in this paper are made publicly available at https://github.com/ charleschen1015/SemanticParsing .", "The data visualization GUI is available under the name OHIO T1DMV IEWER at http:// smarthealth.cs.ohio.edu/nih.html .", "Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li.", "2016.", "Incorporating copying mechanism in sequence-to-sequence learning.", "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , volume 1, pages 16311640.", "Sepp Hochreiter and Jrgen Schmidhuber.", "1997.", "Long short-term memory.", "Neural computation , 9(8):17351780.", "Robin Jia and Percy Liang.", "2016b.", "Data recombination for neural semantic parsing.", "This work was partly supported by grant 1R21EB022356 from the National Institutes of Health.", "We would like to thank Frank Schwartz and Cindy Marling for contributing real interactions, Quintin Fettes and Yi Yu for their help with recording and pre-processing the interactions, and Sadegh Mirshekarian for the design of the artificial data generation.", "We would also like to thank the anonymous reviewers for their useful comments." ]
[ "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "objective", "other", "objective", "method", "abstain", "objective", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "A good translation should not only translate the original content semantically, but also incarnate personal traits of the original text.", "For a real-world neural machine translation (NMT) system, these user traits (e.g., topic preference, stylistic characteristics and expression habits) can be preserved in user behavior (e.g., historical inputs).", "However, current NMT systems marginally consider the user behavior due to: 1) the difficulty of modeling user portraits in zero-shot scenarios, and 2) the lack of user-behavior annotated parallel dataset.", "To fill this gap, we introduce a novel framework called user-driven NMT.", "Specifically, a cache-based module and a user-driven contrastive learning method are proposed to offer NMT the ability to capture potential user traits from their historical inputs under a zero-shot learning fashion.", "Furthermore, we contribute the first Chinese-English parallel corpus annotated with user behavior called UDT-Corpus .", "Experimental results confirm that the proposed user-driven NMT can generate user-specific translations.", "1 1 Introduction In recent years, neural machine translation (NMT) models (Sutskever et al., 2014; Luong et al., 2015; Vaswani et al., 2017) have shown promising quality and thus increasingly attracted users.", "When drawing on a translation system, every user has his own traits, including topic preference, stylistic characteristics, and expression habits, which can be implicitly embodied in their behavior, e.g., the historical inputs of these users.", "A good translation should implicitly mirror user traits rather than Jinsong Su is the corresponding author.", "1 We release our source code and the associated benchmark at https://github.com/DeepLearnXMU/ User-Driven-NMT .", "merely translate the original content, as the example shown in Figure 1.", "However, current NMT models are mainly designed for the semantic transformation between the source and target sentences regardless of subtle traits with respect to user behavior.", "It can be said that the effect of user behavior on translation modeling is still far from utilization, which, to some extent, limits the applicability of NMT models in real-world scenarios.", "More recently, several studies have shown that the prominent signals in terms of personal characteristics can be served as inductive biases and reflected in translation results using domain adaptation approaches, such as personality (Mirkin et al., 2015), gender (Rabinovich et al., 2017), and politeness (Sennrich et al., 2016a).", "However, previously explored signals characterize users from a single dimension, which insufficiently represent fine-grained user traits.", "Furthermore, Michel and Neubig (2018) pay their attention to personalized TED talk translation, in which they train a speaker-specific bias to revise the prediction distribution.", "In contrast with these studies, our work investigates a more realistic online scenario: a real-world MT system serves extensive users, where the user-behavior annotated data covering all users is unavailable.", "Previous methods (Mirkin et al., 2015; Michel and Neubig, 2018) require the users in the training set and the test set to be consistent, therefore can not deal with this zero-shot issue.", "Starting from this concern, we explore user-driven NMT that generates personalized translations for users unseen in the training dataset according to their behavior.", "Specifically, we choose the historical inputs to represent user behavior since they can not only be 
"Moreover, compared with pre-defined or user-specific labels, historical inputs can be updated with current source sentences, which is also in line with realistic scenarios.", "In this work, we propose a novel framework for this task, where the NMT model is equipped with a cache module to store and update historical inputs.", "Besides, in order to further transfer traits from seen users to unseen ones, we design a regularization framework based on contrastive learning (Bose et al., 2018; Yang et al., 2019), which forces our model to decrease the divergence between the translations of similar users while increasing the diversity for dissimilar users.", "In order to further train and assess the proposed framework, we construct a new User-Driven Machine Translation dataset called UDT-Corpus .", "This corpus consists of 6,550 users with 57,639 Chinese sentences in total, collected from a real-world online MT system.", "Among them, 17,099 Chinese sentences are annotated with their English translations by linguistic experts according to the user-specific historical inputs.", "Experimental results demonstrate that the proposed framework improves translation quality and generates diverse translations for different users.", "To summarize, the major contributions of our work are four-fold: We introduce and explore the user-driven NMT task, which leverages user behavior to enhance the translation model.", "We hope our study can attract more attention to this topic.", "We propose a novel framework for user-driven NMT based on a cache module and contrastive learning, which is able to model user traits in zero-shot scenarios.", "We collect UDT-Corpus and make it publicly available, which may contribute to subsequent research in the communities of NMT and user-driven models.", "Extensive analyses indicate the effectiveness of our work and verify that NMT can profit from user behavior to generate diverse translations conforming to user traits.", "This section covers related studies on personalized machine translation, cache-based NMT and contrastive learning for NMT.", "Personalized Machine Translation Recently, some researchers have employed domain adaptation (Zhang et al., 2019; Gururangan et al., 2020; Yao et al., 2020) to generate personalized translations.", "For example, Mirkin et al. (2015) show that the translation generated by an SMT model has an adverse effect on the prediction of author personalities, demonstrating the necessity of personalized machine translation.", "Furthermore, Sennrich et al. (2016a) control the politeness of the translation by adding a politeness label on the source side.",
"Rabinovich et al. (2017) explore a gender-personalized SMT system that retains the original gender traits.", "These domain labels represent users along a single dimension, which is insufficient to distinguish large-scale users in a fine-grained way.", "The work most closely related to ours is Michel and Neubig (2018), which introduces a speaker-specific bias into the conventional NMT model.", "However, these methods are unable to deal with users unseen at training time.", "Different from them, user-driven NMT can generate personalized translations for such unseen users in a zero-shot manner.", "Cache-Based Machine Translation Inspired by the great success of caches in language modeling (Kuhn and de Mori, 1990; Goodman, 2001; Federico et al., 2008), Nepveu et al. (2004) propose a cache-based adaptive SMT system.", "Tiedemann (2010) explores a cache-based translation model that fills the cache with bilingual phrase pairs extracted from previous sentence pairs in a document.", "Bertoldi et al. (2013) use a cache mechanism to achieve online learning in phrase-based SMT.", "Gong et al. (2011), Kuang et al. (2018), and Tu et al. (2018) further exploit cache-based approaches to leverage contextual information for document-level machine translation.", "In contrast with document-level NMT, which learns to capture contextual information, our study aims at modeling user traits such as topic preference, stylistic characteristics, and expression habits.", "Moreover, the historical inputs of a user have relatively fewer dependencies than the contexts used in document-level translation.", "Contrastive Learning for NMT Contrastive learning has been extensively applied in the communities of computer vision and natural language processing due to its effectiveness and generality in self-supervised learning (Vaswani et al., 2013; Mnih and Kavukcuoglu, 2013; Liu and Sun, 2015; Bose et al., 2018).", "To raise the ability of NMT to capture global dependencies, Wiseman and Rush (2016) first introduce contrastive learning into NMT, where the ground-truth translation and the model output are considered as the positive and contrastive samples, respectively.",
"Yang et al. (2019) construct contrastive examples by deleting words from the ground-truth translation to reduce word omission errors in NMT.", "In contrast to these studies, we employ contrastive learning to create broader learning signals for our user-driven NMT model, where the prediction distributions of translations with respect to similar users and dissimilar users are considered as positive and contrastive samples, respectively.", "Thus, our model can better transfer the knowledge of seen users to unseen ones.", "In order to build a user-driven NMT system, we construct a new dataset called UDT-Corpus containing 57,639 inputs of 6,550 users, 17,099 of which are Chinese-to-English translation examples.", "We collect raw examples from Alibaba Translate (https://www.aliyun.com/product/ai/base_alimt), which contain the user inputs and the translations given by the translation system.", "For data preprocessing, we first anonymize the data and perform deduplication within each user.", "Then, we utilize a pre-trained n-gram language model, KenLM (https://github.com/kpu/kenlm), to filter out translation examples with low-quality source data.", "Moreover, we remove pairs whose source sentence is shorter than 2 words or longer than 100 words.", "In the corpus, we represent each translation example as a triplet $\langle X^{(u)}, Y^{(u)}, H^{(u)} \rangle$, where $H^{(u)}$ is the historical inputs of the user $u$, $X^{(u)}$ is the current source sentence and $Y^{(u)}$ is the target translation annotated according to $H^{(u)}$.", "To obtain such a triplet, we first sequentially sample up to 10 source sentences as the historical inputs of each user.", "Then, for the given historical inputs, we collect the source input that follows them, paired with the pseudo translation given by the translation system.", "Afterwards, we assign these historical inputs and the current input pairs to two professional annotators and ask them to revise the pseudo translation according to the source sentence and historical inputs.", "Specifically, we first ask one of them to annotate and the other to evaluate, and then resolve annotation disagreements by reviewing.", "During annotation, 91.8% of the original data are revised.", "Moreover, annotators are asked to record whether their revision is affected by user history.", "The results show that 76.25% of the sentences are impacted.", "In this section, we first give a brief description of the problem formulation of user-driven NMT, and then introduce our proposed framework in detail.", "We choose Transformer (Vaswani et al., 2017) as the basic NMT model due to its competitive performance.", "In fact, our framework is transparent and applicable to other NMT models.", "Figure 2 illustrates the basic framework of the proposed user-driven NMT.", "Most typically, we equip the NMT model with two user-specific caches to exploit user behavior for better translation (see Section 4.2).", "Besides, we augment the conventional NMT training objective with contrastive learning, which allows the model to learn translation diversity across users (see Section 4.3).", "Given the source sentence $X$ and the previously generated words $Y_{<i} = y_1, \ldots, y_{i-1}$, the conventional NMT model with parameters $\theta$ predicts the current target word $y_i$ by $P(y_i \mid X, Y_{<i}; \theta)$.", "As a significant extension of conventional NMT, user-driven NMT with parameters $\theta$ aims to model $P(y^{(u)}_i \mid X^{(u)}, Y^{(u)}_{<i}, u; \theta)$, that is, to generate a translation that reflects the traits of user $u$.",
.", "Unlike previous studies (Mirkin et al., 2015; Michel and Neubig, 2018) only caring for generating translations for users seen at the training time, our user-driven NMT mainly focuses on a more realistic online MT scenario, where the users for testing are unseen in the training dataset.", "Moreover, the conventional domain adaptation methods can not be directly applied to this zero-shot scenario.", "Due to the advantages of cache mechanism on dynamic representations (Gong et al., 2011; Kuang et al., 2018; Tu et al., 2018), we equip the conventional Transformer-based NMT model with two user-specific caches to leverage user behavior for NMT: 1) topic cache c ( u ) t that aims at capturing the global and long-term traits of user u ; and 2) context cache c ( u ) c , which is introduced to capture the short-term traits from the recent source inputs of user u .", "During this process, we focus on the following three operations on cache: Cache Representation In order to facilitate the efficient computation of the user behavior encoded by our caches, we define each cache as an embedding sequence of keywords.", "We first calculate TF-IDF values of input words, and then extract words with TF-IDF weights higher than a predefined threshold to represent user behavior.", "Note that the calculation of TF-IDF value of a word mainly depends on its frequency in the document and inverse document frequency in the corpus .", "Since two caches play different roles in the user-driven NMT model, we identify keywords for two caches based on different definitions of document and corpus.", "Specifically, when constructing topic cache c ( u ) t , we treat the historical inputs H ( u ) of the user u as the document and the historical inputs H ( u ) of all users U as the corpus, then define topic cache c ( u ) t as an embedding sequence of historical keywords.", "Unlike the topic cache, for context cache c ( u ) c , we individually consider the current source sentence X ( u ) and historical inputs H ( u ) as the TF-IDF document and corpus, defining c ( u ) c as an embedding sequence of current keywords.", "Besides, in the real-world MT scenario, there exists a large number of users without any historical input.", "For these users, we find the most similar user according to the cosine similarity based on their TF-IDF bag-of-word representations of topic keywords, and initialize the corresponding topic cache with that of the most similar user.", "Updating Caches When using an online MT system, users often continuously input multiple sentences.", "Thus, our caches should be dynamically updated to ensure the accurate encoding of user behavior.", "To update topic cache, we first recalcualte the TF-IDF values of all historical input words, so as to redetermine the keywords stored in this cache.", "As for context cache, we consider it as a filter window sliding across historical inputs, and apply first-in-first-out rule to replace its earliest keywords with the recently input ones.", "Reading from Caches During the translation of the NMT model, we perform a gating operation on c ( u ) t and c ( u ) c , producing a vector r ( u ) that reflects user behavior as follows: r ( u ) = c ( u ) t + (1 ) c ( u ) c (1) = Sigmoid ( W t c ( u ) t + W r c ( u ) c ) , (2) c ( u ) t = MeanPooling (cid:104) c ( u ) t (cid:105) , (3) c ( u ) c = MeanPooling (cid:104) c ( u ) c (cid:105) , (4) where both W t and W r are learnable parameter matrices.", "Then, we directly add r ( u ) into the embedding sequence of original current source 
"Finally, the NMT model is fed with $\tilde{X}^{(u)}$ to generate the translation for $u$.", "Due to page limitations, we omit a detailed description of the NMT model.", "The overall training objective combines a maximum likelihood translation loss $\mathcal{L}_{mle}$ with the contrastive loss $\mathcal{L}_{cl}$ introduced below.", "Here, $\mathcal{L}_{mle}$ is the maximum likelihood translation loss extended from the conventional NMT training objective.", "Formally, it is defined as: $\mathcal{L}_{mle} = -\sum_i \log P(y^{(u)}_i \mid X^{(u)}, Y^{(u)}_{<i}, H^{(u)}; \theta)$.", "Specifically, for an input sentence, an ideal user-driven NMT model should be able to generate translations with non-divergent user traits for similar users, while producing translations with diverse user traits for dissimilar users.", "However, using only $\mathcal{L}_{mle}$ cannot guarantee this, since it considers each training instance separately during model training.", "To deal with this issue, for each training instance $\langle X^{(u)}, Y^{(u)}, H^{(u)} \rangle$, we first determine the most similar user $u^+$ according to the cosine similarity of their bag-of-keyword representations, and randomly select a user without any keyword in common as the dissimilar user $u^-$ of $u$.", "Finally, using the historical inputs of $u^+$ and $u^-$, we construct several pseudo training instances to define $\mathcal{L}_{cl}$ as follows: $\mathcal{L}_{cl} = \sum_{u \in U} \max[\, d(X^{(u)}, Y^{(u)}, H^{(u)}, H^{(u^+)}) - d(X^{(u)}, Y^{(u)}, H^{(u)}, H^{(u^-)}) + \gamma,\ 0\,]$ (8), where $d(X^{(u)}, Y^{(u)}, H^{(u)}, H^{(u^+)}) = \big\lVert \frac{1}{|Y^{(u)}|} \sum_i \log P(y^{(u)}_i \mid X^{(u)}, Y^{(u)}_{<i}, H^{(u)}) - \frac{1}{|Y^{(u)}|} \sum_i \log P(y^{(u)}_i \mid X^{(u)}, Y^{(u)}_{<i}, H^{(u^+)}) \big\rVert_2$ (9) and $\gamma$ is a predefined margin, which is set to 2 in our experiments.", "Here, we omit the definition of $d(X^{(u)}, Y^{(u)}, H^{(u)}, H^{(u^-)})$, which is analogous.", "Formally, $\mathcal{L}_{cl}$ encourages the NMT model to minimize the prediction difference between the training instances $\langle X^{(u)}, Y^{(u)}, H^{(u)} \rangle$ and $\langle X^{(u)}, Y^{(u)}, H^{(u^+)} \rangle$, and to maximize the difference between $\langle X^{(u)}, Y^{(u)}, H^{(u)} \rangle$ and $\langle X^{(u)}, Y^{(u)}, H^{(u^-)} \rangle$.", "In this way, the NMT model can not only exploit pseudo training instances, but also produce translations more consistent with user traits.",
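The following sketch shows one way Eqs. (8)-(9) could be implemented. It is a simplified illustration under our own assumptions; in particular, avg_log_prob relies on a hypothetical model.log_prob helper, and is not part of the authors' code.

```python
import torch


def avg_log_prob(model, x, y, history):
    """Hypothetical helper: average log-probability of target y given
    source x and a user history, i.e. (1/|Y|) * sum_i log P(y_i | ...)."""
    return model.log_prob(x, y, history).mean()


def contrastive_loss(model, instances, margin=2.0):
    """Hinge loss over (instance, similar-user, dissimilar-user) triplets."""
    total = 0.0
    for x, y, hist, hist_pos, hist_neg in instances:
        base = avg_log_prob(model, x, y, hist)
        # Eq. (9): the L2 norm of a scalar difference is its absolute value.
        d_pos = torch.abs(base - avg_log_prob(model, x, y, hist_pos))
        d_neg = torch.abs(base - avg_log_prob(model, x, y, hist_neg))
        total = total + torch.clamp(d_pos - d_neg + margin, min=0.0)  # Eq. (8)
    return total
```

The hinge pushes the divergence towards the similar user's history below the divergence towards the dissimilar user's history by at least the margin, matching the stated intuition.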
"We develop the user-driven NMT model based on the OpenNMT Transformer (Klein et al., 2017) and adopt a two-stage strategy to train it: we first pre-train a Transformer-based NMT model on the WMT2017 Chinese-to-English dataset, and then fine-tune this model into our user-driven NMT model using UDT-Corpus.", "Datasets The WMT2017 Chinese-to-English dataset is composed of the News Commentary v12, UN Parallel Corpus v1.0, and CWMT corpora, with 25M parallel sentences in total.", "To fine-tune our model, we split UDT-Corpus into training, validation and test sets.", "Table 1 (statistics of the fine-tuning data): Train: 5,350 users, 33,441 historical inputs, 14,006 current sentence pairs; Dev: 600 users, 3,629 historical inputs, 1,557 current sentence pairs; Test: 600 users, 3,470 historical inputs, 1,536 current sentence pairs.", "To improve the efficiency of model training, we train the model using only parallel sentences with no more than 100 words.", "Following common practice, we apply byte pair encoding (Sennrich et al., 2016b) with 32K merge operations to all sentences.", "Training Details Following Vaswani et al. (2017), we use the following hyper-parameters: the word embedding dimension is set to 512, the hidden layer dimension is 2048, the numbers of layers of both encoder and decoder are set to 6, and the number of attention heads is set to 8.", "Besides, we use 4 GPUs for training.", "At the pre-training stage, we employ the Adam optimizer with $\beta_2 = 0.998$.", "We use a batch size of 16,384 tokens and pre-train the model for 200,000 steps.", "In particular, we adopt the dropout strategy (Srivastava et al., 2014) with rate 0.1 to enhance the robustness of our model.", "When fine-tuning the model, we keep the other settings consistent with the pre-training stage, but reduce the batch size to 2,048 tokens and fine-tune the model with an early-stopping strategy.", "Evaluation We assess the translation quality with two metrics: case-insensitive BLEU (mteval-v13a.pl; Papineni et al., 2002; https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl) and METEOR (Denkowski and Lavie, 2011; https://github.com/cmu-mtlab/meteor).", "We refer to our model as UD-NMT and compare it with the following baselines:", "TF . A Transformer-based NMT model pre-trained on the WMT2017 corpus.", "This model yields a 24.61 BLEU score on the WMT2017 Chinese-to-English translation task, which is comparable with the results reported in (Wan et al., 2020; Zhou et al., 2020) and makes our subsequent experiments convincing.", "TF-FT . A Transformer-based NMT model further fine-tuned on the parallel sentences of UDT-Corpus.", "TF-FT + PesuData . A variant of TF-FT : when constructing it, we pair historical inputs with their translations produced by our online translation system, forming additional data for fine-tuning TF-FT .", "TF-FT + ConcHist (Tiedemann and Scherrer, 2017). In this model, we introduce user behavior into TF-FT by concatenating each input sentence with several historical inputs.", "We mark all tokens in historical inputs with a special prefix to indicate that they are additional information.", "TF-FT + UserBias (Michel and Neubig, 2018). It introduces user-specific biases to refine the softmax-based predictions of the Transformer NMT model.", "We adapt it into a zero-shot method similar to Farajian et al. (2017), since the method of Michel and Neubig (2018) cannot be directly applied to our scenario.", "In particular, we replace the user ID in the test set with that of the most similar user in the training set.", "Note that the first two baselines, i.e., TF and TF-FT , are conventional NMT models that do not exploit user behavior.", "Since the cache size directly determines the utility of user behavior, we investigate its effect on the performance of UD-NMT .", "We denote the sizes of the topic cache and the context cache as $s_t$ and $s_c$ for simplicity.", "[Figure 3: BLEU scores on the validation set when varying the topic cache size $s_t$ and the context cache size $s_c$ from 5 to 45; BLEU ranges between roughly 32.3 and 33.0.]", "Figure 3 shows the performance of our model with different $s_t$ and $s_c$ on the validation set.", "We observe that values of $s_t$ larger than 25 and $s_c$ larger than 35 do not lead to significant improvements.", "Given this result, we speculate that small cache sizes are unable to capture sufficient user behavior for NMT; however, since the number of keywords is limited, larger cache sizes only bring limited information gain.", "Therefore, we directly use $s_t = 25$ and $s_c = 35$ in the subsequent experiments.", "From Table 2, we observe that our UD-NMT model consistently outperforms all baselines in terms of the two metrics.",
"Moreover, we draw several interesting conclusions: 2) UD-NMT performs better than TF-FT + PesuData , which uses the same training data as ours; the underlying reason is that UD-NMT can leverage user traits to generate better translations.", "3) Although both TF-FT + UserBias and UD-NMT exploit user behavior for NMT, UD-NMT achieves better performance without introducing extra parameters.", "This result demonstrates the advantage of caches for modeling user behavior over introducing user-specific biases into the model parameters.", "To explore the effectiveness of the different components of our model, we further compare UD-NMT with several of its variants, as shown in Table 3.", "[Table 3: comparison of UD-NMT and its ablated variants in terms of BLEU, METEOR, s-BLEU, d-BLEU, s-Sim. and d-Sim.]", "In particular, we propose to evaluate translations using the following variant metrics: s-BLEU , s-Sim. , d-BLEU and d-Sim.", "For s-BLEU, we replace the topic cache of the current user with that of his most similar user; keeping the same current input, we calculate the BLEU score with the ground truth as reference and the translation for this similar user as hypothesis.", "As for s-Sim., we adopt the same strategy as s-BLEU, but use the translation for the original user as the reference.", "d-BLEU and d-Sim. are computed in the same way, but using the topic cache of a dissimilar user.", "In other words, s-BLEU and d-BLEU assess the translation quality given an unsuitable user; therefore, higher s-BLEU and d-BLEU indicate better model robustness, while s-Sim. and d-Sim. measure how much the translation changes given a different user, so lower s-Sim. and d-Sim. indicate larger translation diversity.",
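As we read them, these variant metrics could be computed along the following lines with sacrebleu; the function and variable names are ours, and the exact evaluation scripts used by the authors may differ.

```python
import sacrebleu


def variant_metrics(references, hyps_user, hyps_similar):
    """s-BLEU / s-Sim. given outputs decoded with the original user's topic
    cache (hyps_user) and with the most similar user's cache (hyps_similar);
    d-BLEU / d-Sim. would use outputs decoded with a dissimilar user's cache."""
    s_bleu = sacrebleu.corpus_bleu(hyps_similar, [references]).score
    s_sim = sacrebleu.corpus_bleu(hyps_similar, [hyps_user]).score
    return s_bleu, s_sim
```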
"Our conclusions are shown as follows: 1) w/o topic cache . To build this variant, we remove the topic cache from our model.", "The result in Line 2 indicates that removing the topic cache leads to a performance drop, suggesting that the topic cache is useful for modeling user behavior.", "2) w/o context cache . Unlike the above variant, here we only use the topic cache to represent user traits.", "According to the results shown in Line 3, we observe that this change results in a significant performance decline, demonstrating that the context cache also effectively captures user behavior for NMT.", "However, the translation diversity among users increases, since in this variant the model is not affected by the context cache, which is the same across different users when calculating s-Sim. and d-Sim.", "3) w/o similar user initialization . In this variant, we do not initialize the topic caches of users without historical inputs with those of their most similar users.", "From Line 4, we observe that the performance of our model degrades without similar user initialization.", "4) w/o contrastive learning . In this variant, we remove the contrastive learning term from the training objective to inspect the performance change of our model.", "As shown in Line 5, the performance of our model drops, proving that contrastive learning is important for the training of our model.", "Moreover, we can infer from Columns 6 and 7 that our model generates diverse translations; specifically, the translations of dissimilar users show larger diversity than those of similar ones.", "Furthermore, we conclude that our model is robust, since it still performs well when we replace the topic cache of the current user with those of other users (see Columns 4 and 5).", "Inspired by Yang et al. (2019), we argue that contrastive learning may increase the prediction diversity of our model between users, compared with using only the MLE loss.", "To confirm this, we randomly sample sentence pairs and compare the margins $d^{(u^-)}_\theta(\cdot) - d^{(u^+)}_\theta(\cdot)$ and $d^{(u^-)}_{mle}(\cdot) - d^{(u^+)}_{mle}(\cdot)$, where $d^{(u^+)}_\theta(\cdot)$ is defined in Equation 9.", "The definition of $d^{(u^+)}_{mle}(\cdot)$ is the same as that of $d^{(u^+)}_\theta(\cdot)$; the only difference is that the NMT model is trained with the conventional MLE loss only.", "We find that $d_\theta(\cdot)$ has a larger margin than $d_{mle}(\cdot)$ on 88% of the sampled sentence pairs, with an average margin of 0.19.", "These results indicate again that contrastive learning increases translation diversity.", "In order to intuitively understand how our cache module affects translations, we feed our model the same current source sentence but different users, and display the 1-best translations generated by our model.", "As shown in Figure 4 (a), our model is able to produce correct but diverse translations according to different topic caches.", "Moreover, it is interesting to observe that specific topic keywords such as 'type b arr', 'negatively regulated' and 'modulators' are translated to synonymous but out-of-domain phrases if the topic cache does not conform to the input sentence.", "On the contrary, the model generates in-domain translations if the topic cache comes from the same topic as the input sentence.", "[Figure 4 (b): a user whose historical inputs are clothing product titles (e.g., 'Fabric Composition Spandex Style Sexy Type Jumpsuit Color White, Black'; '2020 Autumn and Winter New Hong Kong Style Retro Drop Sleeves Jacket Female Loose Student Plush Crop Top'). Ref: 'Oxford Fabric Waterproof and Wear Resistant, 15 Inch in Size'; TF-FT + PesuData: 'Oxford Textile Fabrics Waterproof and Waterproof, 15 Inches Size'; TF-FT + UserBias: 'Oxford Woven Fabric is Waterproof and Resistant, 15 Inches in Size'; UD-NMT: 'Oxford Woven Fabric Waterproof and Wear Resistant, 15 Inch Size'.]", "Besides, to further reveal the effect of user behavior, we provide an example in Figure 4 (b), which lists the translations of the compared models for the same inputs.", "The historical inputs indicate that this user may be an apparel seller, since they contain product titles and descriptions of clothing.", "Thus, the keywords 'Wear Resistant' in the source sentence are correlated with this user.", "However, the two baselines translate them to 'Waterproof' and 'Resistant', respectively.", "Moreover, TF-FT + UserBias generates a subject-verb-object structured sentence by adding the auxiliary verb 'is', which does not conform to the expression habits of product titles.", "By contrast, with the hint of the keywords in the historical inputs, our UD-NMT is able to produce a suitable translation consistent with the topic preference of this user.", "To further find out whether the improvements of our model are contributed by user traits, we randomly sample 100 examples from the test dataset and ask linguistic experts to rank the different systems according to the relevance between the generated translations and the historical inputs.", "The results in Table 4 show that our model generates translations more in line with the historical inputs than the baseline models in most cases, proving that our method makes better use of user traits.",
"We propose the user-driven NMT task, which aims to leverage user behavior to generate personalized translations.", "With the help of a cache module and contrastive estimation, we build an end-to-end NMT model that is able to capture potential user traits from their historical inputs and generate diverse translations in a zero-shot fashion.", "Furthermore, we contribute UDT-Corpus, the first Chinese-English parallel corpus annotated with user behavior.", "We expect our study to attract more attention to this topic.", "It is a promising direction to explore other kinds of behavior in the future, such as click-through and editing operations.", "Moreover, following recent advances in domain adaptation for NMT, we plan to further improve our model via adversarial-training-based knowledge transfer (Zeng et al., 2018; Yao et al., 2020; Su et al., 2021) and dual knowledge transfer (Zeng et al., 2019).", "The project was supported by the National Key Research and Development Program of China (No. 2020AAA0108004 and No. 2018YFB1403202), the National Natural Science Foundation of China (No. 61672440), the Natural Science Foundation of Fujian Province of China (No. 2020J06001), the Youth Innovation Fund of Xiamen (No. 3502Z20206059), and the Fundamental Research Funds for the Central Universities (No. ZK20720200077).", "We also thank the reviewers for their insightful comments." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "abstain", "objective", "result", "objective", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "objective", "abstain", "abstain", "result", "other", "other" ]
[ "Word Sense Disambiguation (WSD) is a historical NLP task aimed at linking words in contexts to discrete sense inventories and it is usually cast as a multi-label classification task.", "Recently, several neural approaches have employed sense definitions to better represent word meanings.", "Yet, these approaches do not observe the input sentence and the sense definition candidates all at once, thus potentially reducing the model performance and generalization power.", "We cope with this issue by reframing WSD as a span extraction problem which we called Extractive Sense Comprehension (ESC) and propose ESCHER , a transformer-based neural architecture for this new formulation.", "By means of an extensive array of experiments, we show that ESC unleashes the full potential of our model, leading it to outdo all of its competitors and to set a new state of the art on the English WSD task.", "In the few-shot scenario, ESCHER proves to exploit training data ef-ficiently, attaining the same performance as its closest competitor while relying on almost three times fewer annotations.", "Furthermore, ESCHER can nimbly combine data annotated with senses from different lexical resources, achieving performances that were previously out of everyone's reach.", "The model along with data is available at https://github.com/ SapienzaNLP/esc .", "Being able to link a piece of raw text to a knowledge base is fundamental in NLP (Navigli, 2009; McCoy et al., 2019; Bender and Koller, 2020), as it can aid neural models to ground their representations on structured resources and enable Natural Language Understanding (Navigli, 2018).", "A task that is key to achieving this goal is Word Sense Disambiguation (WSD), where, given a sentence Work carried out while at the Sapienza University of Rome.", "with a target word, a model has to predict its most suitable meaning from a predefined set of labels, i.e., its senses.", "WSD has not only considerably improved its performance with the advent of deep learning (by around 15 F1 points in 15 years), but it has also shown its benefits in downstream applications such as Neural Machine Translation (Liu et al., 2018; Pu et al., 2018) and Information Extraction (Moro and Navigli, 2013; Delli Bovi et al., 2015), while also being leveraged to enrich the contextual representations of neural models (Peters et al., 2019; Zhang et al., 2019).", "However, WSD has mostly been framed as a multi-label classification task (Raganato et al., 2017b; Hadiwinoto et al., 2019) over a very large vocabulary of discrete senses.", "This formulation may limit a model's capabilities to properly represent word meanings, as each sense is only defined by means of its occurrences in a training set, while its inherent meaning remains linguistically unexpressed.", "Furthermore, rare or unseen senses are either poorly modeled or cannot be modeled at all.", "These problems have recently been mitigated by integrating sense definitions (glosses) within neural architectures (Ku-mar et al., 2019; Huang et al., 2019; Blevins and Zettlemoyer, 2020).", "Yet, despite their large improvements, none of these models attends all the possible definitions of a target word at once, and therefore each lacks the ability to represent both the input context and the candidate definitions together.", "Inspired by the Extractive Reading Comprehension framework (Rajpurkar et al., 2016) in the field of Question Answering (QA), we cope with these issues and reframe the WSD problem as a novel text extraction task, which we have 
"In this setting, a model receives as input a sentence with a target word and all its possible sense definitions.", "Then, we ask the model to extract the text span associated with the gloss expressing the target word's most suitable meaning.", "Within this framework, we also propose a transformer-based architecture (ESCHER ) that implements the ESC task by attending to the input context and the target word definitions jointly.", "Through an extensive experimental setting, we show that ESCHER surpasses former state-of-the-art approaches by a large margin while, at the same time, requiring almost 3 times fewer training data points to attain performance comparable to its strongest competitor in a few-shot setting.", "Furthermore, thanks to our new formulation, the proposed model can effectively carry out predictions across different sense repositories and combine distinct inventories with unmatched nimbleness, attaining even higher results than when limited to a single resource only.", "Our contributions are the following:", "1. The Extractive Sense Comprehension task (ESC), i.e., a reframing of the Word Sense Disambiguation problem.", "2. ESCHER : a transformer-based architecture for ESC, outperforming all the other modern architectures on the WSD task.", "3. An extensive study of the proposed model in different training regimes, i.e., in 0-shot, few-shot and fully-supervised settings.", "4. A study on combining data annotated with distinct lexicographic resources.", "Besides its performance advantages, ESC also comes with other benefits: it does not require a large output vocabulary, and it eases the joint use of corpora annotated with different inventories.", "Word Sense Disambiguation (WSD) is one of the long-standing problems in lexical semantics, introduced for the first time in the context of Machine Translation by Weaver (1949).", "WSD aims at linking a word in context to its most suitable meaning in a predefined sense inventory, which is usually a dictionary where each entry defines a concept via a definition (gloss) and a set of examples.", "Most approaches to WSD rely on WordNet (Miller et al., 1990) as the underlying inventory of senses for the English language, and SemCor (Miller et al., 1993) as the training corpus.", "WordNet organizes lexical-semantic information by means of a graph where sets of synonyms are grouped into synsets (concepts) and edges are typed semantic relations.", "While early neural models used WordNet as a mere repository of senses (Raganato et al., 2017b; Hadiwinoto et al., 2019), more recent approaches have started to exploit sense definitions (Kumar et al., 2019; Blevins and Zettlemoyer, 2020) and relational information (Bevilacqua and Navigli, 2020; Conia and Navigli, 2021).", "Sense definitions, in particular, have been shown to be effective for modeling word senses (Luo et al., 2018; Kumar et al., 2019), as they provide information orthogonal to that available in the training data.", "This has been further investigated under different perspectives by Huang et al. (2019, GlossBERT), Blevins and Zettlemoyer (2020, BEM) and Bevilacqua et al. (2020, Generationary).",
"GlossBERT casts the WSD problem as a binary classification task where, given a word in context and one of its dictionary definitions, it determines whether this definition matches the word meaning expressed in the context.", "BEM employs a bi-encoder to represent the target word and its sense definitions within the same space.", "Generationary, instead, has predefined sense inventories at its disposal and directly generates a definition given a word in its context.", "The strength of these approaches lies in the fact that glosses allow senses that are under-represented within the training corpus to be modeled, hence mitigating the long-standing paucity of sense-annotated data (Pasini, 2020).", "Nevertheless, none of the above approaches can exploit all definitions at once: indeed, glosses are either provided one at a time (GlossBERT), modeled with one vector only and independently from each other (BEM), or used individually as target text to be generated (Generationary).", "Our new formulation (ESC) for the WSD problem stands out from previous approaches inasmuch as it is the first to access the input context and all the target word's definitions together, while, at the same time, dropping the requirement of a predefined sense inventory.", "Indeed, differently from its competitors, our proposed approach (ESCHER ) can scale effectively across different lexical resources, even when these were not available at the time of training.", "In what follows, we first formalize the Extractive Sense Comprehension task (Section 3.1), then introduce ESCHER , a transformer-based architecture for ESC (Section 3.2), and finally put forward a novel approach for mitigating the bias towards the most frequent meanings (Kilgarriff, 2004) within the training data (Section 3.3).", "To unleash the full potential of attention-based models on the Word Sense Disambiguation task, we reframe WSD as a span-extraction problem.", "Formally, given a sense inventory $S$, we first define the definitional context $D_w$ for the target word $w$ as the concatenation of all the possible definitions $d_1, \ldots, d_k$ of $w$ in $S$, i.e., $D_w = w^{d_1}_1 \ldots w^{d_1}_{|d_1|} \ldots w^{d_k}_1 \ldots w^{d_k}_{|d_k|}$, where $w^{d_z}_i$ is the $i$-th word of the gloss $d_z$ ($1 \le z \le k$).", "Then, we reformulate the task as follows: given a target word $w$, a context $c$ in which $w$ occurs and the definitional context $D_w$, a model has to find the interval $[i, j]$ in $D_w$ which identifies the correct definition $d^* \in D_w$ of $w$ in $c$.", "This formulation, on the one hand, helps to better characterize word meanings, thanks to the inclusion of all the target word definitions as additional input; on the other hand, it also relieves the burden of a large output vocabulary (typically in the order of tens of thousands of meanings), which makes the classification cumbersome.", "We now introduce a transformer-based model for the ESC task (Figure 1).", "It takes as input a context $c$ with a target word $w$, concatenated with $D_w$.", "(For the sake of simplicity, in the following we use 'word' to refer to subwords, words and multiwords.)", "The target word $w$ is surrounded by the tags <t> and </t> , and each definition in $D_w$ has its first letter capitalized and a period at the end.", "We separate the context $c$ and the definitional context $D_w$ with the special symbol </s> and surround the whole text with the tags <s> and </s> .",
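To illustrate, here is a minimal sketch of how such an input could be assembled from a tokenized context and a list of glosses; the function name and return format are our own assumptions, not the authors' code (which is available at https://github.com/SapienzaNLP/esc).

```python
def build_esc_input(context_tokens, target_index, glosses):
    """Build the ESC input string: the context with the target word wrapped
    in <t>...</t>, followed by all candidate definitions, and also return
    the character span of each definition inside the definitional context."""
    marked = list(context_tokens)
    marked[target_index] = "<t> " + marked[target_index] + " </t>"

    definitional_context = ""
    spans = []  # (start, end) character offsets of each gloss, used as targets
    for gloss in glosses:
        gloss = gloss[0].upper() + gloss[1:]  # capitalize the first letter
        if not gloss.endswith("."):
            gloss += "."                       # terminate with a period
        start = len(definitional_context)
        spans.append((start, start + len(gloss)))
        definitional_context += gloss + " "

    text = "<s> " + " ".join(marked) + " </s> " + definitional_context.strip() + " </s>"
    return text, spans


# Example usage:
tokens = "the bank raised its interest rates".split()
glosses = ["a financial institution that accepts deposits",
           "sloping land beside a body of water"]
m, spans = build_esc_input(tokens, 1, glosses)
```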
"Formally, given the input $m = \texttt{<s>}\, w_1 \ldots \texttt{<t>}\, w\, \texttt{</t>} \ldots w_n\, \texttt{</s>}\, w^{d_1}_1 \ldots w^{d_1}_{|d_1|} \ldots w^{d_k}_1 \ldots w^{d_k}_{|d_k|}\, \texttt{</s>}$ of length $l$, the model computes the span $(i, j)$ containing the predicted gloss for the target word $w$ as follows: $H = \mathrm{transformer}(m)$, $Z = W^T H + b$, $Z^s = [Z_{11} \ldots Z_{1l}]$, $Z^e = [Z_{21} \ldots Z_{2l}]$, where transformer can be any transformer-based architecture, $H \in \mathbb{R}^{f \times l}$ is the matrix of hidden states, and $W \in \mathbb{R}^{f \times 2}$ and $b \in \mathbb{R}^2$ are trainable parameters.", "$Z^s$ and $Z^e$ contain, for each word $w_u$, the logits indicating whether it is the start or the end, respectively, of the correct definition for the target word $w$.", "Finally, we train the model by averaging two distinct cross-entropy losses, computed for the start and end indices: $\mathcal{L}_s = -Z^s_i + \log \sum_{v=1}^{l} \exp(Z^s_v)$ and $\mathcal{L}_e = -Z^e_j + \log \sum_{v=1}^{l} \exp(Z^e_v)$, where $Z^s_i$ and $Z^e_j$ are the scores associated with the correct start and end indices.", "At prediction time, rather than allowing the system to output a span that does not correspond precisely to any definition in $D_w$, the model outputs a pair $(i, j)$ such that a definition $d_k \in D_w$ starts in $i$ and ends in $j$ and its probability is the maximum across all the other gloss spans in $D_w$.", "Formally, the model selects its output as follows: $\mathrm{output} = \arg\max_{(i,j)} P(w_i, w_j)$, with $P(w_i, w_j) = P(w_i = \mathrm{start} \mid Z^s) \cdot P(w_j = \mathrm{end} \mid Z^e)$, $P(w_u = \mathrm{start} \mid Z^s) = \frac{\exp(Z^s_u)}{\sum_{v=1}^{l} \exp(Z^s_v)}$ and $P(w_u = \mathrm{end} \mid Z^e) = \frac{\exp(Z^e_u)}{\sum_{v=1}^{l} \exp(Z^e_v)}$, where $P(w_u = \mathrm{start} \mid Z^s)$ and $P(w_u = \mathrm{end} \mid Z^e)$ indicate the probability that $w_u$ is the start or the end of any of the $k$ definitions, respectively.", "While our approach already allows all the possible definitions of a word to be contextualized by jointly encoding them together with the context sentence, it may still suffer from the high imbalance in the sense distribution (Kilgarriff, 2004) and be biased towards the most frequent definition regardless of its contextualization.", "Our framework allows this issue to be dealt with in an elegant way, which we have called Gloss Noise (GN).", "GN counterbalances this bias by lowering the prior probability of the most frequent glosses.", "That is, inspired by the negative sampling technique (Mikolov et al., 2013), GN adds, to each training example, $k$ frequent definitions that are not related to the target word.", "We sample the $k$ glosses from the following multinomial distribution: $p(d_i) = \frac{f_{d_i}}{\sum_{j=1}^{|D|} f_{d_j}}$, where $D$ is the set of all possible definitions in the training set and $f_{d_i}$ is the frequency of the $i$-th definition in a sense-tagged corpus.", "The value of $k$, instead, is sampled from a Poisson distribution with $\lambda = 1$, so that the expected number of added definitions is equal to 1.", "This keeps the discrepancy between the training and prediction phases as small as possible, while also introducing negative signals for frequent senses.", "Indeed, Gloss Noise ensures that the expected number of times a definition is added as a negative example is equal to the number of times it is seen as a correct one, thereby counterbalancing the high rate at which frequent definitions are seen only as positive examples, without overly affecting rare senses.",
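The two mechanisms just described, span-restricted inference and Gloss Noise, can be sketched as follows. This is our own illustrative reading (numpy-based, with hypothetical argument names), not the reference implementation.

```python
import numpy as np


def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()


def pick_definition(start_logits, end_logits, gloss_spans):
    """Restrict inference to valid gloss spans: score each candidate
    definition by P(start=i) * P(end=j) and return the argmax index."""
    p_start = softmax(start_logits)
    p_end = softmax(end_logits)
    scores = [p_start[i] * p_end[j] for i, j in gloss_spans]
    return int(np.argmax(scores))


def gloss_noise(rng, gloss_frequencies, target_glosses):
    """Sample k ~ Poisson(1) frequent glosses unrelated to the target word,
    drawn from the frequency-proportional multinomial p(d_i)."""
    k = rng.poisson(lam=1.0)
    candidates = [g for g in gloss_frequencies if g not in target_glosses]
    if k == 0 or not candidates:
        return []
    probs = np.array([gloss_frequencies[g] for g in candidates], dtype=float)
    probs /= probs.sum()
    idx = rng.choice(len(candidates), size=min(k, len(candidates)),
                     replace=False, p=probs)
    return [candidates[i] for i in idx]
```

Here rng would be a numpy Generator, e.g. rng = np.random.default_rng(0), and gloss_frequencies a dict mapping each training gloss to its corpus frequency.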
"In this Section we introduce the experimental setting we use to evaluate the proposed framework and neural architecture.", "Data We use the evaluation suite made available by Raganato et al. (2017a) for the English Word Sense Disambiguation task.", "It includes SemCor (Miller et al., 1993) for training, i.e., a corpus containing 33,362 sentences and 226,036 instances manually annotated with senses from WordNet 3.0.", "As is common practice, we use SemEval-2007 ( SE07 ; Pradhan et al., 2007) as the development set.", "For testing, we consider all the remaining datasets in the suite, i.e., Senseval-2 ( SE2 ; Edmonds and Cotton, 2001), Senseval-3 ( SE3 ; Snyder and Palmer, 2004), SemEval-2013 ( SE13 ; Navigli et al., 2013), SemEval-2015 ( SE15 ; Moro and Navigli, 2015) and their concatenation (ALL).", "(We note that the evaluation suite includes the dev set, i.e., SemEval-2007, within the ALL dataset, and so do we.)", "In order to measure the extent to which systems generalize to rare and unseen words and definitions (zero-shot settings), we also consider five other test sets that we created from the ALL dataset:", "i) MFS , which contains the test instances tagged with the most frequent sense of the target word in the training set;", "ii) LFS , which contains the test instances tagged with a sense that is not the most frequent one for the target word and that was seen at least once during training;", "iii) 0-lex , which contains the test instances whose lexeme (a (lemma, part of speech) pair) was never seen as a target word during training;", "iv) 0-lex-def , which contains the test instances with a definition that was never seen associated with the target lexeme during training (we identify a sense as a (lexeme, definition) pair);", "v) 0-def , which contains the test instances whose definition was never seen during training.", "We note that 0-def differs from 0-lex-def, as a definition is tied in WordNet to a synset, i.e., a set of synonymous senses, rather than to a sense; therefore, the same definition may be seen associated with different lexemes.",
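A sketch of how such a partition could be derived from the training annotations follows; the data structures and function name are our assumptions, and, as defined above, the five sets are diagnostic views over ALL rather than a disjoint partition, so an instance may fall into more than one of them.

```python
def diagnostic_sets(instance, train):
    """Assign an ALL test instance to the diagnostic subsets it belongs to.
    `instance` has .lexeme, .definition and .sense; `train` exposes the
    lexemes, (lexeme, definition) pairs, definitions, per-sense counts and
    the most frequent sense per lexeme observed in the training corpus."""
    labels = set()
    if instance.lexeme not in train.lexemes:
        labels.add("0-lex")
    if (instance.lexeme, instance.definition) not in train.lexeme_defs:
        labels.add("0-lex-def")
    if instance.definition not in train.definitions:
        labels.add("0-def")
    if instance.sense == train.mfs.get(instance.lexeme):
        labels.add("MFS")
    elif train.sense_counts.get(instance.sense, 0) > 0:
        labels.add("LFS")  # seen during training, but not the most frequent
    return labels
```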
"Comparison Systems As baselines, we consider the Most Frequent Sense computed on the training set (MFS SemCor) and two neural models featuring BERT large and BART large as text encoders, with a linear classifier over the whole sense vocabulary on top.", "As for the BERT large baseline, we follow Blevins and Zettlemoyer (2020) and keep the BERT large weights fixed, while for BART large we fine-tune the whole model.", "As competitors, we consider the following models: GLU (Hadiwinoto et al., 2019), which keeps the BERT weights frozen and trains a gated linear unit on top of it; SVC (Vial et al., 2019), which uses a vocabulary compression technique (similarly to Blevins and Zettlemoyer (2020), we report the best results of the SVC single model trained on SemCor only); EWISE (Kumar et al., 2019); GlossBERT (Huang et al., 2019); BEM (Blevins and Zettlemoyer, 2020) and EWISER (Bevilacqua and Navigli, 2020), which take advantage of external knowledge such as glosses and semantic relations.", "We note that EWISER uses a different development set, hence its results are not fully comparable with the others.", "Finally, we also consider two nearest-neighbour approaches based on synset embeddings and vector similarity, i.e., LMMS (Loureiro and Jorge, 2019) and ARES (Scarlini et al., 2020).", "ESCHER Setting We use BART large (Lewis et al., 2020; Wolf et al., 2020) as the transformer architecture, owing to the fact that it is among the strongest models on reading comprehension tasks such as SQuAD (Rajpurkar et al., 2016) and it allows us to feed sequences of up to 1024 subtokens.", "We use the output of its last decoder layer to represent the input tokens and compute the start and end token distributions.", "We note that ESCHER is directly comparable to the BART large baseline in terms of model complexity, as both use the same transformer model with one linear layer on top.", "We fine-tune the whole ESCHER architecture with the Rectified Adam optimizer (Liu et al., 2020), with the learning rate set to 1e-5, for up to 300,000 steps, with 20 steps of gradient accumulation and batches of 700 tokens.", "In what follows, we report the results for our model with and without Gloss Noise (Section 3.3), denoting them as ESCHER and ESCHER No-GN , respectively.", "Framework Benchmark In Table 1 we report the F1 scores of ESCHER , ESCHER No-GN and all the other systems.", "By comparing BART large and ESCHER , we can measure the effect of our proposed framework, ESC, on the performance of a transformer-based architecture.", "Indeed, the two architectures are nearly identical except for the last layer, where, for each token, BART large makes a prediction across the whole sense vocabulary, while ESCHER performs a binary classification.", "Thus, the large difference between the two models (8.5 F1 points) suggests that the Extractive Sense Comprehension formulation of WSD allows the potential of transformer-based architectures to be fully exploited and, therefore, better performance to be attained.", "When Gloss Noise is enabled (ESCHER row), our model gains 1 F1 point in comparison to when it is disabled (ESCHER No-GN ).", "This highlights that directly mitigating the bias towards the Most Frequent Sense during training is fundamental to making our approach as effective as possible.", "Finally, thanks to our new formulation of the WSD problem, a simple model such as ESCHER outperforms all the other approaches by a large margin on the ALL dataset, beating the previous state of the art (BEM) by 1.7 points.", "This corroborates our hunch that Extractive Sense Comprehension is an extremely effective formulation of WSD for transformer-based architectures.", "Results on Rare and Unseen Senses In Table 2 we report the results of the three best-performing models, i.e., ESCHER , ESCHER No-GN and BEM, on five datasets, measuring how well the models perform when dealing with rare words and meanings in different situations (cf. Section 4.1).",
"ESCHER No-GN manages to outperform BEM on most datasets, hence already demonstrating that our new framing allows transformers to better generalize to rare words and senses.", "When enabling Gloss Noise, ESCHER achieves even higher performance on all datasets, falling behind BEM only on the MFS dataset.", "Interestingly enough, the comparison with BEM on the 0-lex-def and 0-def datasets shows that ESCHER can easily predict definitions that were either seen associated only with lexemes different from the input ones or not seen at all, while, in direct contrast, BEM performs poorly in both scenarios.", "A similar pattern is observed for the Least Frequent Senses (LFS) dataset, where ESCHER outperforms BEM by 3.6 F1 points, at the cost of only 1 point less in predicting the most frequent meanings.", "Table 2 (F1 scores on the MFS, LFS and zero-shot datasets): BEM: MFS 94.7, LFS 52.1, 0-lex 91.2, 0-lex-def 67.1, 0-def 68.2; ESCHER No-GN : MFS 93.7, LFS 52.8, 0-lex 94.5, 0-lex-def 74.3, 0-def 76.4; ESCHER : MFS 93.7, LFS 55.7, 0-lex 95.1, 0-lex-def 75.0, 0-def 76.8.", "Being able to combine datasets tagged with different inventories is a desirable ability for a model.", "Indeed, using different datasets grants access to a larger number of examples, while, at the same time, removing the necessity of having one system for each inventory.", "However, merging distinct lexicographic resources is not a straightforward task and requires its own complex pipeline.", "An easier approach could be to concatenate datasets tagged with different vocabularies, which, nonetheless, would expose models to possibly different definitions for nearly identical meanings and to different levels of sense granularity.", "In this Section we therefore investigate the ability of ESCHER to manage data annotated with distinct sense inventories when simply joining them.", "To this end, we train ESCHER on the concatenation of SemCor and the Oxford Dictionary dataset (Chang et al., 2018) and compare its performance with that of the state-of-the-art system at the time of writing, i.e., BEM, trained on the same corpus.", "Chang et al. (2018) introduced a dataset containing roughly 785,000 instances for as many sentences, covering 79,004 senses of the Oxford Dictionary of English.", "The dataset is split into train (Oxford train ), dev (Oxford dev ) and test (Oxford test ), where the test set corresponds to the one named test_easy in the original paper.",
Polysemy), i.e., for each lexeme we compute the number of senses that appear in the dataset over the number of possible senses it can assume in the reference vocabulary and we average across all lexemes, the number of distinct senses (#Senses) and the number of instances (#Instances).", "As one can see, Oxford train contains more than two times the instances and senses of SemCor, while having roughly half of SemCor's polysemy but a higher expressed polysemy.", "As for Oxford test , it contains a larger number of instances than ALL, and also a higher polysemy and expressed polysemy.", "We analyze three different scenarios:", "i) Standard , where the system is trained on the same inventory with which it is tested, e.g., trained on Oxford train and tested on Oxford test ;", "ii) Zero-shot , in which the system is trained on one sense inventory and tested on the other, e.g,.", "trained on SemCor and tested on Oxford test ; and", "iii) Joint , in which the system is jointly trained with the two sense inventories.", "In order to combine the two different inventories, we train the model by alternating the batches made up of either SemCor or Oxford train instances.", "Since the number of instances in SemCor is lower than that in Oxford train , we oversample SemCor by repeating its instances.", "Finally, we select the model with the best macro F1 averaged on the two validation datasets (SE07 and Oxford dev ).", "We add the subscript S , OT and S + OT to models trained on SemCor, Oxford train and their concatenation, 12 We refer to the one named test_easy in the original paper.", "As one can see from Table 4, ESCHER outperforms BEM in all settings.", "That is, when trained with one inventory and tested on a dataset tagged with the other inventory (BEMS and ESCHERS on the Oxford test and BEMOT and ESCHEROT on ALL), ESCHER attains 6 and 3 points higher performance, respectively, than its competitor.", "This result is not important per se, but it also suggests that ESC does not bind the model to a single lexical knowledge base.", "Indeed, by extensively leveraging sense definitions, it allows a transformer-based model to scale on multiple inventories as long as they provide at least one definition for each meaning.", "BEM, instead, by encoding each gloss independently, falls short in representing definitions that were previously unseen, as also shown in Section 4.2.", "When trained on SemCor and Oxford train together, not only can ESCHER handle the two inventories that coexist in the training set effectively, but it also leverages them at its own convenience, achieving 81 .", "5 F1 points on ALL, in contrast to BEM which performs slightly worse than when trained in the Standard scenario.", "We now move to analyzing the performances of ESCHER in a few-shot scenario, i.e., when the number of samples available for each sense is limited.", "Setting We compare ESCHER against BEM, and report the F1 scores on the ALL dataset when varying the number k of training instances per sense in { 1 , 3 , 5 , 10 , unlimited } .", "We show in Table 5 the", "number of instances drawn from SemCor that are seen at training time for each k .", "We also report the F1 scores of ESCHER on the MFS, LFS and 0-lex-def datasets in the same scenario in order to investigate the extent to which the difference in the number of occurrences for each sense impacts the ability of the model to generalize on rare senses.", "Results As one can see from Figure 2a, 13 ESCHER makes much more efficient use of training data than BEM, needing 
"In fact, BEM needs more than 5 instances per sense (83,068 instances) to reach the same performance (73.9 F1 points) as that of ESCHER trained with k = 1 (33,206 instances).", "Furthermore, with roughly half of the instances (k = 10), ESCHER attains results that are in the same ballpark as the current state of the art.", "Interestingly enough, by looking at Figure 2b, we see that ESCHER's accuracy on the MFS instances rises when adding more examples.", "(Footnote 13: the BEM curve is taken from the chart in the original paper.)", "This is due to the fact that frequent senses get increasingly represented within the training set, therefore better matching the sense distribution in the test set.", "Similarly, the performance on the Least Frequent Senses also rises from k = 1 to k = 10, but slightly drops when considering the whole dataset.", "By manually inspecting the data, we notice that this happens because most of the instances added to the dataset with k = 10 are tagged with the most frequent sense, therefore drastically skewing the sense distribution.", "Finally, the performance on 0-lex-def remains stable for all k, hence showing that, despite the distribution increasingly skewing towards the most frequent definitions, our approach can still provide meaningful representations for unseen senses.", "7 Error Analysis: In order to get a clear picture of the model's pitfalls and gain insights into possible directions for future work, we perform an analysis of ESCHER misclassifications on the ALL dataset.", "We find that the mistaken predictions belong to three main categories: most frequent sense bias, insufficient context, and WordNet sense granularity.", "Since we already discussed the first of these in the previous sections, we focus here on the latter two.", "Insufficient Context: Annotators often compiled the WSD evaluation datasets by considering each instance in the context of the documents it appears in.", "In contrast, WSD models typically take into account only the sentence surrounding the target word, discarding a large portion of the available context.", "This behavior causes a discrepancy where sentences do not provide enough information to disambiguate the target words therein.", "Indeed, ESCHER's mistakes most often appear in sentences with an average length of 27 tokens, i.e., roughly 5 tokens less than the average length in ALL (32).", "This suggests that moving the disambiguation context from sentences to documents may improve the performance of models, as long as they are capable of handling longer sequences.", "WordNet Sense Granularity: The granularity of WordNet senses has been considered one of the main reasons behind the complexity of the WSD task (Palmer et al., 2007).", "To measure the extent to which this affects ESCHER's performance, we utilize the 45 domain-based labels introduced by Lacerra et al. (2020, CSI), which define macro categories for each WordNet sense.",
"For instance, in the CSI inventory, the sense argument%1:10:03:: belongs to the following domains: Culture Anthropology and Society, Language and Linguistics, and Communication and Telecommunication.", "To better understand the relation between ESCHER predictions and the gold annotations, for each misclassified instance in ALL, we compute the average Jaccard similarity between the CSI labels assigned to the gold annotation of that instance and those assigned to the sense predicted by ESCHER.", "As an example, ESCHER misclassified an instance annotated with the sense argument%1:10:03::, assigning to it the sense argument%1:10:00::.", "Examining the domains to which the predicted sense belongs, we can see a considerable overlap (and consequently a high Jaccard similarity) with the domains of the gold sense (i.e., argument%1:10:03::): Culture Anthropology and Society, Politics Government and Nobility, Language and Linguistics, and Communication and Telecommunication.", "As a point of comparison, we repeat the same procedure with a random baseline as the WSD model, i.e., one that predicts for each instance a random sense among those of the target word.", "We find that ESCHER predictions have an average Jaccard similarity with the gold annotations of 0.49, whereas the random baseline achieves 0.27.", "This suggests that, even when providing a formally mistaken output, ESCHER still predicts a sense that is correlated, according to CSI labels, with the gold sense.", "Our analysis calls for further work to improve evaluation in WSD, as the F1 score cannot discriminate between predictions that are clearly wrong and predictions that are only slightly different from the gold sense.", "In this paper, we introduced a novel framing for the Word Sense Disambiguation problem inspired by the Extractive Reading Comprehension task in QA: given a word in a sentence and a text containing all its possible definitions, a model has to identify the span containing the correct definition for the target word.", "For this new formulation, which we called Extractive Sense Comprehension (ESC), we devised a transformer-based architecture (ESCHER), which, differently from previous approaches, can look at all the target word definitions at once, alongside the input sentence.", "ESCHER surpasses the current state of the art by 1.7 points on the standard English all-words WSD task, thanks to its more efficient use of the training data.", "Also, when provided with only a few examples for each sense, ESCHER attains remarkable levels of performance, requiring roughly one third of the annotated instances its direct competitor needs to reach the same performance.", "Furthermore, our new formulation allows ESCHER to scale across different inventories and to combine them effectively.", "Indeed, when provided with data annotated with multiple vocabularies, it achieves even better results than when limited to one inventory only, with results in the 86-88% range.", "As future work we plan to expand this framework so as to condition the prediction not only on the target word context and definitions, but also on the possible senses of its surrounding words.", "The pretrained model, along with code and data, is available at https://github.com/SapienzaNLP/esc .", "This work was supported in part by the MIUR under grant Dipartimenti di eccellenza 2018-2022 of the Department of Computer Science of Sapienza University." ]
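The Jaccard-based error analysis above is simple to reproduce. The sketch below shows the computation; the toy `csi_domains` mapping stands in for the real CSI inventory, with the two `argument` senses and their domains taken from the example in the text.

```python
# A minimal sketch of the CSI-based error analysis described above, assuming
# each sense maps to a set of CSI domain labels. The `csi_domains` table is
# illustrative, not the real inventory.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two CSI domain-label sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

csi_domains = {
    "argument%1:10:03::": {"Culture Anthropology and Society",
                           "Language and Linguistics",
                           "Communication and Telecommunication"},
    "argument%1:10:00::": {"Culture Anthropology and Society",
                           "Politics Government and Nobility",
                           "Language and Linguistics",
                           "Communication and Telecommunication"},
}

def average_domain_overlap(errors):
    """`errors` is a list of (gold_sense, predicted_sense) pairs
    for the misclassified instances in ALL."""
    sims = [jaccard(csi_domains[g], csi_domains[p]) for g, p in errors]
    return sum(sims) / len(sims)

# The example misclassification from the text yields a high overlap (0.75).
print(average_domain_overlap([("argument%1:10:03::", "argument%1:10:00::")]))
```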
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "objective", "objective", "objective", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "other" ]
[ "We consider the problem of generating natural language given a communicative goal and a world description.", "We ask the question: is it possible to combine complementary meaning representations to scale a goal-directed NLG system without losing expressiveness?", "In particular, we consider using two meaning representations, one based on logical semantics and the other based on distributional semantics.", "We build upon an existing goal-directed generation system, S-STRUCT, which models sentence generation as planning in a Markov decision process.", "We develop a hybrid approach, which uses distributional semantics to quickly and imprecisely add the main elements of the sentence and then uses first-order logic based semantics to more slowly add the precise details.", "We find that our hybrid method allows S-STRUCT's generation to scale significantly better in early phases of generation and that the hybrid can often generate sentences with the same quality as S-STRUCT in substantially less time.", "However, we also observe and give insight into cases where the imprecision in distributional semantics leads to generation that is not as good as using pure logical semantics.", "We consider the problem of goal-directed natural language generation (NLG) (Gatt and Krahmer, 2018).", "Here, the agent intends to communicate some information about its world to another entity.", "It has semantic representations for its world, its goal, and a grammar to realize the language.", "Given this input, the goal is to generate (realize) a syntactically correct representation of the semantic goal without omissions or additions (see Figure 1).", "This task is different from open ended text generation that fills in text after a prompt or the problem of filling in a blank given some context.", "Many previous systems for goal-directed NLG use first-order logic (FOL) extended with the calculus to represent semantics (Church, 1985).", "This semantic representation allows for very precise generation.", "However, the process is usually slow, primarily because each step of the generation process needs to check that the semantics of the partially realized text is compatible with the eventual goal.", "This step typically involves checking all possible compatible bindings , which is combinatorial.", "Using distributional semantic representations (Deerwester et al., 1990; Mikolov et al., 2013; Pennington et al., 2014) may allow us to sidestep this combinatorial process through checks via sim-1936 ple algebraic operations.", "However, these semantics may lack precision and introduce errors in generation with respect to the goal.", "In this paper we ask whether it is possible to combine these two different semantic representations in a single generation system that takes advantage of their strengths while mitigating their weaknesses.", "In particular our insight is that, early in the generating process, we may not need to be very precise.", "We can use distributional semantics to quickly add in the main elements of a sentence and then use logical semantics to fill in the details more slowly and precisely.", "Our goal is to balance these elements to get a more scalable generation system while not sacrificing much, if any, expressiveness.", "A rich literature exists for generation systems.", "Overgeneration and ranking systems derive possible sentences from word lattices (Langkilde-Geary, 2002; Langkilde, 2000; Bangalore and Rambow, 2000).", "These word lattices are directed acyclic graphs whose edges correspond to single 
words.", "To generate a valid sentence, the system can traverse a path in the lattice.", "Then, they rank the candidate sentences using a language model.", "An alternative approach is to view generation as an AI planning problem.", "The planner can apply grammar actions to take planning steps until it finds a state that ful-fils some communicative goal.", "One such system is SPUD (Sentence Planner Using Descriptions) which answers questions using a knowledge base (Stone and Doran, 1997).", "The CRISP system builds on SPUD by applying an off-the-shelf planner instead of using a greedy search (Koller and Stone, 2007; Koller and Hoffmann, 2021).", "This allows for the application of search heuristics and other advances in classical planning.", "A further improvement is PCRISP which allows probabilistic actions by translating probabilities into costs (Bauer and Koller, 2010).", "Other work uses neural networks with an encoder and/or decoder architectures.", "For instance, a transformer will already have the semantics of individual words as static word vectors (Vaswani et al., 2017; Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019; Lewis et al., 2020).", "The overall meaning is calculated using attention and feed-forward layers.", "These approaches create a complex representation of the language model in the encoder and employ a variety of sampling strategies in the decoder.", "While they can be much faster than logic-based systems, it can be difficult to guarantee that the generated string will be consistent with some world or goal.", "Recent work has started to explore hybrid approaches in this space.", "One approach adds logical constraints and plans to transformers and LSTMs.", "DualEnc models are provided con-tent plan traversals through RDF graphs as input (Zhao et al., 2020).", "While these plans provide the model with the information that is supposed to be included in the generated text, there is no guarantee the model will include all of it or that the information is truly consistent with the original RDF graph.", "Rather than providing a plan before generation, NeuroLogic Decoding constrains generated transformer output based on logical constraints during the decoding step (Lu et al., 2021).", "By leveraging predicate logic, this allows the output to be more precisely constrained to include any necessary true facts and leave out any extra, potentially incorrect, information.", "This precision is added at a cost asymptotically equivalent to a conventional beam search.", "However, the logical constraints in this work are syntactic rather than semantic, and ensure that, for example, certain words are not used by the decoded string.", "In contrast to such approaches, in our work, we modify S-STRUCT (McKinley and Ray, 2014; Pfeil and Ray, 2016), a planning based system for goal-directed NLG, to use both distributional as well as logical semantics.", "S-STRUCT is described in detail in the next section.", "This system models generation as planning in a Markov decision process (MDP).", "We show experimentally that our approach scales better than pure logical semantics in many cases.", "We also identify and discuss tradeoffs that arise from the use of distributional semantics, that in some cases lead to worse generation quality.", "Scalable Sentence Tree Realization using UCT (Upper Confidence bounds applied to Trees), or S-STRUCT, is a planning based NLG system that generates single sentences, using a world of facts, a communicative goal, and a grammar.", "The world and goal 
"The world and goal are both specified semantically in FOL.", "The world describes all entities and relations known to the generator, while the goal specifies the information to communicate.", "The grammar consists of a semantically annotated probabilistic lexicalized tree adjoining grammar (PLTAG) derived from the XTAG project (XTAG Research Group, 1998).", "Before S-STRUCT begins the generation process, it finds an expanded communicative goal.", "The expanded goal includes any extra information necessary to make the goal entities unambiguous with respect to the other entities in the world.", "Then, S-STRUCT prunes the grammar of lexicalized trees that, given the goal, will never be used.", "The resulting pruned grammar will be able to express relations between goal entities and will contain trees that satisfy semantic constraints that may not be explicitly mentioned in the goal (e.g., a complementizer that tree).", "It will also be able to express at least one referring expression for each unique entity (e.g., if we need to generate a cat entity, we may need to clarify whether it is black or brown).", "We observe that once trees are pruned, any associated entities and relations in the world can be pruned as well.", "This world pruning reduces the space of possible bindings, which we have empirically verified results in a significant speed increase without impacting accuracy.", "We call this version of S-STRUCT with world pruning S-STRUCT v2.", "Once this pruning is completed, to generate a sentence, S-STRUCT uses the UCT procedure (Kocsis and Szepesvári, 2006) (see Figure 2) to plan in an MDP.", "States in the MDP are semantically annotated partial trees reflecting the partial sentence constructed so far.", "Actions adjoin or substitute a single PLTAG tree.", "At each step, S-STRUCT ranks actions to add a new fragment to the current partial tree.", "If there are unexplored actions, it chooses such an action to explore.", "Otherwise, it chooses actions based on the UCT ranking, which balances explore/exploit criteria.", "To estimate the downstream quality of an action, S-STRUCT looks ahead by a number of exploratory actions that are uniformly sampled.", "For each action, S-STRUCT finds the reward of each state reached, propagating the rewards up the search tree.", "These rewards identify the best action at a state.", "The reward is largely determined by how well the partial sentence matches the semantics of the goal.", "To do this, S-STRUCT considers bindings between entities in the partial sentence and the goal.", "Here, a valid binding between entities is one in which the stated semantic information does not disagree (e.g., we have a cat in the partial sentence that is white and a cat in the goal that is white and long-haired).", "In the reward, S-STRUCT only receives credit when a goal entity has a valid binding to a partial-sentence entity.", "The reward also considers the number of entities missing a determiner, the number of partial-sentence entities with no goal bindings, the number of world bindings, and the length of the sentence.", "The first two characteristics penalize missing information.", "The number of world bindings reflects potential ambiguity in the sentence.", "Finally, the last criterion reflects the fact that, given two sentences, both of which express the goal semantics precisely, we prefer the shorter sentence.", "The action search procedure returns the action with the best reward.", "S-STRUCT applies this action and updates the partial sentence.",
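The UCT ranking mentioned above balances exploration and exploitation. The sketch below shows a standard UCB1-style selection rule in the spirit of Kocsis and Szepesvári (2006); S-STRUCT's exact scoring may differ, so treat this as illustrative of the explore/exploit trade-off only.

```python
import math

# Illustrative UCB1-style action selection, as used in UCT (Kocsis and
# Szepesvári, 2006). S-STRUCT's exact ranking may differ; this sketch only
# shows the explore/exploit balance described above.

def uct_select(actions, stats, c=1.0):
    """`stats[a]` holds (total_reward, visit_count) for action `a`;
    unexplored actions are preferred outright, mirroring the text."""
    unexplored = [a for a in actions if stats[a][1] == 0]
    if unexplored:
        return unexplored[0]
    total_visits = sum(n for _, n in stats.values())

    def ucb(a):
        total_reward, n = stats[a]
        # average reward (exploit) + confidence bonus (explore)
        return total_reward / n + c * math.sqrt(math.log(total_visits) / n)

    return max(actions, key=ucb)
```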
"If a terminal state is reached, in which adding more actions will not improve the reward, or the generation process runs out of time, then this sentence is returned.", "If not, the action search repeats.", "In subsequent searches, we may be able to reuse parts of the search tree of exploratory actions, as some will still be relevant in the new state.", "To improve search efficiency, the search in S-STRUCT is carried out in two phases.", "First, only substitution actions are considered until all substitution nodes in the PLTAG tree are filled.", "Then, adjoin actions (and some substitutions if required by the added adjoins) are considered to complete the generation.", "This reduces the branching factor of the search considerably, speeding up the process.", "In this section, we describe how we modify S-STRUCT to use distributional semantics.", "We first describe how we compose distributional semantics and how the distributional reward is computed.", "Then, we describe additional modifications required by using imprecise semantics in the search, including a step to correct word ordering errors and using a beam search instead of a greedy search in UCT.", "We must be able to compose embeddings to obtain the semantics of the goal and each state.", "We first note that we cannot use order-dependent composition of word vectors to obtain state or goal embeddings.", "Say our goal is to express dog(x) ∧ cat(y) ∧ rat(z) ∧ chase(y, z) ∧ chase(x, y).", "If we have generated to the point of, for example, dog chase cat, then we can use an order-dependent method to compose the semantics of the dog, chase, and cat vectors to represent the fragment.", "However, the logical goal does not order relations in a meaningful way.", "In fact, figuring out the syntactic structure to realize the goal is a problem S-STRUCT itself solves.", "So, to compose the goal embedding, we will need a method that is not order dependent.", "To be consistent, we need to apply the same method for state embeddings as well.", "In our approach, we find goal or state embeddings by averaging the components.", "In other words, we map the entities and relations in our goal or state to word embeddings (for example, the relation chase is mapped to the vector for the word chase) and then average these to create an embedding for the state or the goal.",
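As a concrete illustration of this order-independent composition, the sketch below averages component vectors; the tiny random lookup table stands in for real (e.g., OLIVE) embeddings.

```python
import numpy as np

# A minimal sketch of the order-independent composition described above:
# entities and relations are mapped to word vectors and averaged.

def compose_embedding(symbols, vectors):
    """Average the vectors of the entities/relations in a goal or state.
    `symbols` might be ['dog', 'cat', 'chase'] for the goal
    dog(x) ∧ cat(y) ∧ chase(x, y)."""
    return np.mean([vectors[s] for s in symbols], axis=0)

rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=50) for w in ["dog", "cat", "chase"]}

goal_emb = compose_embedding(["dog", "cat", "chase"], vectors)
state_emb = compose_embedding(["cat", "chase", "dog"], vectors)
assert np.allclose(goal_emb, state_emb)  # order does not matter
```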
"For each partial state, we need to compute a reward that measures how close we are to realizing the goal.", "This is described in Algorithm 1.", "First, we calculate the distance between the partial state and the goal as the Euclidean norm of the difference between the embeddings (line 2).", "We next add a penalty for the number of missing or extra conditions (line 3) and for the sentence length (line 4).", "Thus the best states will be short sentences that do not have missing or extra conditions and that have embeddings close to the goal.", "C1, C2 and C3 are weight factors that modify the relative importance of these terms (hand-selected as 100, 15 and 10, and consistent across all experiments).", "Finally, for S-STRUCT we need the reward to be positive.", "This is because S-STRUCT's tree policy chooses actions in part based on the total reward over all the times an action has been applied.", "Here, a negative reward would penalize actions that we see more often.", "To fix this, we add a large constant to the reward (line 1), making the reward always positive.", "This reward shaping will not affect the optimal plan (Ng et al., 1999).",
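To make the reward computation just walked through concrete, here is a minimal sketch. Only lines 1-4 of Algorithm 1 are described in the text, so the constant C0 and the exact form of the condition penalty are assumptions; C1 = 100, C2 = 15 and C3 = 10 are the hand-selected weights reported above.

```python
import numpy as np

# A minimal sketch of the distributional reward of Algorithm 1, following the
# description above: a large constant (line 1) minus a goal-distance term
# (line 2), a missing/extra-condition penalty (line 3), and a length penalty
# (line 4). C0 and the exact penalty forms are assumptions; C1, C2, C3 are
# the hand-selected weights from the text.

C0, C1, C2, C3 = 10_000.0, 100.0, 15.0, 10.0

def distributional_reward(state_emb, goal_emb, n_missing_or_extra, sent_len):
    reward = C0                                          # line 1: keep reward positive
    reward -= C1 * np.linalg.norm(state_emb - goal_emb)  # line 2: goal distance
    reward -= C2 * n_missing_or_extra                    # line 3: condition penalty
    reward -= C3 * sent_len                              # line 4: length penalty
    return reward
```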
"Each semantic representation has its strengths and weaknesses.", "How should we integrate the two?", "First, consider an alternative in which we only use distributional semantics.", "This version (let us call it PureDist) would be fast, but runs into several issues.", "First, in the absence of word-order-sensitive composition, PureDist cannot identify which sentences have the wrong word ordering.", "Additionally, a problem arises with the stopping criterion.", "Generation should stop when not taking an action leads to a better reward than taking one.", "For PureDist, since our embedding vectors are high dimensional, there are many degrees of freedom to slightly improve the reward.", "So, while S-STRUCT can only get reward for fulfilling a goal relation or adding a determiner to an entity once, PureDist can keep generating by adding new, potentially repetitive words that do not add any new information but instead move the state slightly closer to the goal vector.", "Without a strong way to determine whether or not the current state has reached the goal, PureDist's generation quality is more heavily tied to the balance of the sentence length penalty.", "If the sentence length penalty is too high, PureDist will cut generation off before useful information has been expressed.", "If it is too low, generation can continue to add irrelevant words that move the state slightly closer to the goal.", "As a result, using a purely distributional semantic search is not a viable alternative (this is validated in our experiments).", "This leaves two options: we can use distributional semantics to start, and then switch to logical, or vice versa.", "Of these, the first is more suitable.", "The key intuition is that early in generation, the work done to compute bindings and to validate partial sentences in S-STRUCT is overkill.", "A less precise semantic representation could do just as well, while being more efficient.", "Further, by switching to logical semantics after distributional semantics, we will have the opportunity to correct word ordering errors.", "Conversely, using logical semantics in the early phase means we are potentially doing unnecessary work.", "Therefore we decide to use distributional semantics to start, and then switch to logical semantics.", "When should we switch?", "As mentioned above, S-STRUCT has a natural transition point.", "The first part of the search focuses only on substitution actions (a substitution phase) before switching to an adjoin phase.", "We choose to use distributional semantics in the substitution phase.", "In our example in Figure 3, we use our distributional reward in the initial and substitution actions, getting us to cat chased dog.", "At this point, we cannot add more information without adjoin actions, so we can move to the next phase.", "Since our state/goal composition is word order independent, the output of the first phase may have ordering errors, such as in Figure 3b.", "We address this by adding a Swap phase in between the distributional and logical semantic phases.", "In this phase, we find all pairs of entities in the tree output by the distributional phase that have the same type (such as a noun phrase), and consider the trees that result if we exchange them.", "For each such tree, we compute the original S-STRUCT reward with logical semantics.", "This means that we consider exact bindings of the sentence entities to their world and goal counterparts to determine the reward of the sentence.", "We greedily apply the best swaps we can find until no swap yields a better reward.", "Unlike with Dist, Swap will only get credit for adding an entity if it is being used correctly, meaning Swap will get a better reward when word ordering mistakes are fixed.", "While Swap actions can mitigate some of the mistakes caused by the first phase, they may not account for all possible errors, such as the use of incorrect substitutions.", "So we use a beam search within the UCT search of the first phase of HS-STRUCT instead of greedily selecting the best state.", "This means HS-STRUCT can keep track of multiple states that may seem sub-optimal when using the distributional reward but will be more successful under the formal logic reward.", "The resulting beam after the first phase is passed into Swap as described above.", "Each partial state is processed by Swap, and the best state found is then input to the third phase, which is regular S-STRUCT, to perform adjoins and finish the generation process (shown in Figures 3c and 3d).", "By adding this beam search, we allow HS-STRUCT to partially underspecify substitution decisions during the distributional phase.", "To keep generation efficient, we split trials of exploratory actions among all states in the beam.", "In other words, each beam search state uses an equal portion of the overall exploratory actions, keeping the total number of exploratory actions the same as without the beam search.", "Our HS-STRUCT algorithm is shown in Algorithm 2.", "We begin by using distributional semantics (Dist) from the initial (empty) state.", "This search follows the general structure of the original S-STRUCT search (Figure 2), though with a beam search (line 4 and Figure 3a).", "We only allow initial and substitution actions in this phase, as we are only trying to block out the main ideas.", "Then, we consider swap actions to correct for word order issues (lines 5 and 6 and Figure 3b).", "Now that we have our swapped states, we down-select to a single state for the remainder of generation (line 7).", "Finally, we use our original FOL-based S-STRUCT to add any details that would have required adjoin actions to finish out our generation (line 8 and Figures 3c and 3d).",
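A minimal sketch of the Swap phase follows. The helpers `same_type_entity_pairs`, `exchange`, and `logical_reward` are assumed interfaces, since the text does not spell out S-STRUCT's internal API: the first enumerates pairs of same-type entities (e.g., two noun phrases), the second exchanges them, and the third is S-STRUCT's original FOL-based reward with bindings.

```python
# A minimal sketch of the greedy Swap phase described above. The three
# helper callables are assumed interfaces, not part of the published code.

def swap_phase(tree, same_type_entity_pairs, exchange, logical_reward):
    best_tree, best_reward = tree, logical_reward(tree)
    improved = True
    while improved:
        improved = False
        current = best_tree
        # Evaluate every single swap of same-type entities in the current
        # tree and keep the best one; repeat until no swap helps.
        for a, b in same_type_entity_pairs(current):
            candidate = exchange(current, a, b)
            reward = logical_reward(candidate)
            if reward > best_reward:
                best_tree, best_reward, improved = candidate, reward, True
    return best_tree
```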
"Our primary hypothesis is that integrating distributional and logical semantics through HS-STRUCT will scale better (i.e., generate better quality sentences in less time) than either S-STRUCT v2 (S-STRUCT with world pruning), PureDist, or using distributional semantics after logical semantics.", "We will also evaluate the impact of design choices such as the beam search.", "We have not compared against contextual language models in our experiments because, as described in Section 1, the most related such approaches that we know of still do not address the goal-directed NLG task.", "Data.", "We follow prior work and focus on generation of English sentences, pulling world facts and goals from the WSJ section of the Penn Treebank corpus (McKinley and Ray, 2014; Marcus et al., 1999).", "The sentences were parsed with an LTAG parser (Sarkar, 2000; XTAG Research Group, 1998) to find the best parse trees for each sentence.", "A subset of the most frequently occurring XTAG trees were chosen and manually annotated with FOL semantics.", "Together, these trees could parse 74% of the corpus.", "For our distributional semantic representation, we use pre-trained OLIVE word vectors trained on the English Wikipedia corpus (Seonwoo et al., 2019).", "While HS-STRUCT is agnostic to the distributional semantic representation used, OLIVE vectors are trained to have additive compositionality, so we know we can compose our partial sentence and goal embeddings.", "Some sentences in our dataset had to be removed because of the lack of OLIVE vectors to cover them.", "For our experiments, we choose goals from the semantic annotations of sentences in the dataset, with the world being a combination of facts from the semantics of all goals (sentences).", "These worlds and goals are split into a simple and a complex dataset based on the complexity of the goals, with the complex dataset having more world entities, relations, and average relations and entities in the goal (Table 1).", "These datasets are made from non-overlapping goals.", "An example simple goal is bank(z1) ∧ acquiesced(z1), which could mean The bank acquiesced., and an example complex goal is siliconvalley(z1) ∧ sigh(z2) ∧ relief(z3) ∧ heaved(z1, z2) ∧ of(z2, z3), which could mean Silicon Valley heaved a sigh of relief.",
"(Figure 4 placeholder: reward and ROUGE-1 over time for S-STRUCT v2, Hybrid with 20 states, Reversed Hybrid with 20 states, and PureDist; both axes range from 0.0 to 1.0.)", "Metrics.", "There are two possible ways to evaluate our generation quality: syntactic and semantic.", "The reward assigned by S-STRUCT v2 and HS-STRUCT is primarily based on a semantic match between the goal and the partial sentence.", "However, in some cases a semantic difference can be misleading if the syntactic realization is similar.", "So, in addition to our semantic reward, we also evaluate our sentences using ROUGE-1 (Recall-Oriented Understudy for Gisting Evaluation) (Lin, 2004).", "ROUGE is designed to evaluate summaries of texts by comparing them to ideal summaries created by humans.", "We use it to compare the result of each approach after each action to the \"ideal\" sentence with the best possible reward.", "All experiments were implemented in Python 3.7.1.", "They were run on a single core of an Intel(R) i5-8250 processor clocked at 1.60GHz with access to 8GB of RAM.", "The results of our evaluation are shown in Figures 4a to 4h.", "In each case the x-axis is time in seconds and the y-axis is the percentage of the best metric at a given time.", "The results are averaged over all goals in each dataset.", "As we can see in Figure 4a, HS-STRUCT gains a much higher reward than the method that only uses the distributional reward, PureDist.", "As we discussed in Section 3.3, there are a number of issues, like word ordering and stopping criteria, which make generation with only distributional semantics difficult.", "This result provides empirical validation of these observations.", "Because PureDist performed so poorly on even simple goals, we did not run it on complex goals.", "We also consider reversing the order of distributional and formal semantics within the hybrid, starting with the logic-based FOL and then using the distributional Dist.", "As we can see in Figure 4a, the reversed hybrid is less efficient than either HS-STRUCT or S-STRUCT v2.", "Reversed HS-STRUCT needs to spend twice the time switching between semantic systems and potentially much more time completing swap actions than regular HS-STRUCT.", "While we chose the sentence length tradeoff hyperparameter to allow the reversed HS-STRUCT to achieve a good reward, in general using a distributional reward for adjoin actions leads to issues with deciding when to cut generation off.", "Overall, this shows that simply reversing HS-STRUCT does not yield an improvement in performance.", "On the simple dataset (Figures 4a and 4b), HS-STRUCT initially creates sentences with a higher syntactic and semantic quality faster than S-STRUCT v2.", "The final syntactic quality (4b) is the same as S-STRUCT v2's, though there is a small gap in the final semantic quality.", "On the complex dataset (Figures 4e and 4f), HS-STRUCT again produces sentences with a higher reward faster than S-STRUCT v2 early in generation.", "However, there is a decrease in the final metric obtained by HS-STRUCT, by about 12% syntactically and about 17% semantically.", "The reason for the gap in the final metric values is analyzed further below.", "Overall, we find that HS-STRUCT sometimes produces lower quality sentences on high complexity goals than S-STRUCT v2 given enough time.", "However, even when the goal is complex, HS-STRUCT can produce higher quality sentences than S-STRUCT v2 under a short time limit, and consistently achieves this when the goal is simple.",
"Figures 4c and 4g show the effect of using a beam search in the distributional phase of HS-STRUCT as opposed to a greedy search.", "On the simple dataset (Figure 4c), allowing HS-STRUCT to choose between stored states in the formal logic phase improved the average final reward by 22%.", "On the complex dataset, this change resulted in a 154% increase.", "This also indicates that the distributional phase may not be very accurate at selecting the single best state in the search process.", "When a beam search is used, a decision must be made on how to allocate the UCT rollouts to the different states in the beam.", "Instead of giving each state the same number of trials as the single, greedy search approach, which would make the beam search much less efficient, we hypothesize that we can split the overall number of trials among the states and receive a comparable reward in significantly less time.", "As we can see in Figure 4d, splitting the number of trials has a very significant effect on the speed and quality of generation on the simple dataset.", "Simply providing each state in the beam the same number of trials as the greedy search slows generation considerably.", "Again, in Figure 4h, we observe that not splitting trials slows generation substantially in the early phases while having no significant impact on quality.", "Increasing the number of trials to 2x also slowed generation with no significant increase in quality.", "Thus, a broad and shallow search seems well suited to the early phase of generation, which agrees with our intuition.", "Our results show that HS-STRUCT can produce sentences that are around 12% lower in terms of syntactic quality than S-STRUCT v2 under no time constraints.", "In this section we discuss why we see this gap and whether it is due to fundamental aspects of distributional semantics.", "The reward gap between HS-STRUCT and S-STRUCT v2 results from a number of issues, such as incorrect parts of speech, incorrect verb valence, and over-generation.", "These errors can co-occur, but we report them without overlap, prioritizing part-of-speech errors.", "This means that the reported error frequencies are a lower bound for every error type except part of speech.", "Parts of Speech.", "The same word may be used as different parts of speech.", "This is important in generation, but distributional semantics has difficulty telling this apart.", "A common problem for HS-STRUCT was using an entity as a verb.", "These mistakes will not be fixed by the Swap phase and will not allow for valid bindings in the logical phase.", "This means that we cannot recover if all beam search states contain this error.", "S-STRUCT v2 does not make such mistakes, since it does not use distributional semantics.", "On the complex dataset, these part-of-speech mistakes account for about 50% of cases in which HS-STRUCT earns a worse reward than S-STRUCT v2.", "Such errors could potentially be fixed with improved embeddings that use different vectors for different parts of speech.", "Verb Valence.", "Our grammar has multiple possible verb trees, which will be lexicalized with the possible verbs in our lexicon.", "This means that, as appropriate, verbs can lexicalize multiple trees, representing the different numbers of arguments the verb could take (also known as the verb's valence).", "In S-STRUCT, this valence distinction will not cause issues.",
match.", "For HS-STRUCT, however, the reward calculation does not explicitly check for the correct valence as the verb is the same, with the same embedding, so the same goal distance will be given to partial sentences using either tree.", "In the simple dataset, this leads to a partially artificial reward gap.", "In 60% of cases in which HS-STRUCT received a lower reward, the final sentences produced by HS-STRUCT and S-STRUCT v2 are identical.", "In the complex dataset, valence issues accounted for about 20% of cases in which S-STRUCT v2 outperformed HS-STRUCT.", "This issue could be alleviated by computing different embeddings for different valences of a verb.", "Over-generation.", "On the complex dataset, we also see a number of cases in which the overall content of the two generated sentences were nearly identical, with HS-STRUCT adding in unneeded additions like extra complementizers.", "Since extra complementizers do not change the semantic content, this mistake will barely affect the logical reward but does decrease ROUGE-1 scores.", "This over-generation may also stem from the incremen-tal benefit of HS-STRUCT's reward function as described in Section 3.3, and accounts for about 15% of cases in which HS-STRUCT earns a worse reward than S-STRUCT v2 on the complex dataset.", "It could potentially be alleviated by more carefully tuning the sentence length penalty in the reward.", "Examples.", "Consider the sentence The rates in the secondary market are typical, which is expressed as the goal rates ( z 1) typical ( z 1) market ( z 2) secondary ( z 2) in ( z 1 , z 2) .", "As we can see, there is no verb listed in the goal semantics.", "The copula are would not make sense as an FOL relation as typical ( z 1) already implies that the rates are typical.", "We also run into issues with the preposition in.", "Since it could also feasibly be an abbreviation for inch, our grammar includes it as a noun as well.", "HS-STRUCT begins by choosing a typical declarative adjective small clause tree ( nx0Ax1) with typical as the AP and the NP and V substitution nodes left open.", "It cannot tell the difference between noun and preposition in, so it sees substituting in for the remaining NP node (essentially creating the string in typical).", "Using our OLIVE vectors, the combination of in and typical is closer to the goal than the correct rates and typical (because in is present in the goal), so this part-of-speech mistake is seen as beneficial.", "HS-STRUCT may also store the rates substitution, but this is a function of the beam width.", "It leaves the copular verb are location blank, as this verb does not appear in the goal.", "While the in substitution helped the distributional reward, it will hurt the logical reward in future since there is no in entity in the world or goal to bind to (there is only an in relation).", "Since HS-STRUCT will not be able to find valid bindings, it will not be able to add additional information, forcing the generation to stop at in typical.", "Here, generation can be improved by increasing the beam states until the rates substitution is also chosen, or by having the distributional phase represent preposition in and inches in separately.", "Another such error is shown by the sentence Investors dumped any technology shares.", "Here, the shares are an entity in the goal (i.e., written as instance _ of ( x, shares ) not shares ( x, y ) ).", "HS-STRUCT will represent both the verb shares and the noun shares the same way, so it does not know that it should not consider the 
"In this case, HS-STRUCT ends the distributional phase with the best sentence Investors share technology.", "Again, such an error is not recoverable in the logical semantics phase.", "A different issue that may contribute to HS-STRUCT's lower reward in some cases is that of copular verbs.", "These do not appear in the goal semantics.", "However, if there is some non-copular verb in the goal, HS-STRUCT may incorrectly substitute that verb into the copular verb slot, as doing so will decrease the goal distance.", "If the beam search did not keep a state without one of these incorrect substitutions, then HS-STRUCT will not be able to recover in the logical phase.", "We have presented HS-STRUCT, which uses both distributional and logical semantics for goal-directed language generation.", "By taking a hybrid approach, HS-STRUCT's generation scales significantly better in early phases.", "However, in some cases, the quality of the final generation can be lower than that of a pure logical approach.", "HS-STRUCT is available through GitHub upon request." ]
[ "objective", "objective", "method", "objective", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain" ]
[ "Recent years have witnessed the emergence of a variety of post-hoc interpretations that aim to uncover how natural language processing (NLP) models make predictions.", "Despite the surge of new interpretation methods, it remains an open problem how to define and quantitatively measure the faithfulness of interpretations, i.e., to what extent interpretations reflect the reasoning process by a model.", "We propose two new criteria, sensitivity and stability, that provide complementary notions of faithfulness to the existed removal-based criteria.", "Our results show that the conclusion for how faithful interpretations are could vary substantially based on different notions.", "Motivated by the desiderata of sensitivity and stability, we introduce a new class of interpretation methods that adopt techniques from adversarial robustness.", "Empirical results show that our proposed methods are effective under the new criteria and overcome limitations of gradient-based methods on removal-based criteria.", "Besides text classification, we also apply interpretation methods and metrics to dependency parsing.", "Our results shed light on understanding the diverse set of interpretations.", "As complex NLP models are widely deployed in real-world applications, there is an increasing interest in understanding how these models come to certain decisions.", "As a result, the line of research on interpretation techniques grows rapidly, facilitating a broad range of model analysis, from building user trust on models (Ribeiro et al., 2016; Hase and Bansal, 2020) to exposing subtle biases (Zhao et al., 2017; Doshi-Velez and Kim, 2017).", "In this paper, we focus on post-hoc interpretations in NLP.", "Given a trained model and a specific input text, post-hoc interpretations assign an importance score to each token in the input which indicates its contribution to the model output.", "Current methods in this direction can be roughly divided into three categories: gradient-based methods (Simonyan et al., 2014; Li et al., 2016); reference-based methods (Sundararajan et al., 2017; Shrikumar et al., 2017); and perturbation-based methods (Zeiler and Fergus, 2014; Ribeiro et al., 2016).", "Despite the emergence of new techniques, one critical issue is that there is little consensus on how to define and evaluate the faithfulness of these techniques, i.e., whether they reflect the true reasoning process by a model.", "A widely employed criterion, especially in NLP, is the removal-based criterion (DeYoung et al., 2020), which removes or only preserves a set of tokens given by interpretations and measures how much the model prediction would change.", "However, as pointed out in prior work (Bastings and Filippova, 2020; Ancona et al., 2018), the corrupted version of an input produced during evaluations falls out of the distribution that models are trained on, and thus results in an inaccurate measurement of faithfulness.", "This limitation prevents removal-based metrics from being used as the golden standard for evaluating interpretations.", "To remedy this, we complement the removal-based criterion with two other criteria, sensitivity and stability , which are overlooked in prior works.", "Sensitivity is based on the notion that models should be more sensitive to perturbations on tokens identified by a faithful explanation.", "In contrast to the removal-based criterion, which completely removes important tokens, the sensitivity criterion adds small but adversarial perturbations in a local region of the token embedding, 
"This criterion was recently discussed by Hsieh et al. (2020) in computer vision, while we provide comprehensive analyses on various NLP models and tasks.", "Note that while the removal-based criterion asks the question: if some important tokens 'did not exist', what would happen, the sensitivity criterion asks: if some important tokens were 'changed' adversarially, what would happen.", "Stability assumes that a faithful interpretation should not produce substantially different explanations for two inputs that the model finds similar.", "There are several attempts to generate such a pair of inputs.", "The most relevant one is Ghorbani et al. (2019).", "However, their method is only applicable to differentiable interpretations.", "Our work proposes a new paradigm based on adversarial word substitution that employs a black-box algorithm to generate a semantically related neighbor of the original input, which is specially designed for NLP and applicable to all interpretation techniques.", "The above two metrics highlight the connection between interpretability and robustness.", "Experiments show that interpretations which perform well on the removal-based criterion might not do well on the new criteria.", "Motivated by the limitations of existing interpretations and the desiderata of sensitivity, we propose robustness-based methods, based on projected gradient descent (PGD) attacks (Madry et al., 2018) and certifying robustness (Jia et al., 2019; Huang et al., 2019; Shi et al., 2020; Xu et al., 2020).", "We demonstrate that the new methods achieve top performance under sensitivity and stability.", "Moreover, as a simple improvement to gradient-based methods, our methods avoid the gradient saturation issues of gradient-based methods under the removal-based criterion.", "Another limitation of removal-based metrics emerges when interpreting dependency parsing: when input tokens are removed, the tree structure is drastically changed and a model might not be able to produce a meaningful parse tree.", "Thus, there is little discussion of dependency parsing interpretations.", "In this paper, we propose a new paradigm to interpret dependency parsers leveraging prepositional phrase (PP) attachment ambiguity examples.", "To the best of our knowledge, this is the first work to study interpretations on dependency parsing.", "We demonstrate that sensitivity does not change the output tree structure as much as removal-based metrics do, and provide analyses for interpretation methods with our paradigm and metrics.", "Our contributions can be summarized as follows.", "1. We discuss two overlooked notions of faithfulness in NLP interpretations.", "Our notions emphasize the connection between interpretability and robustness.", "We systematically evaluate interpretations under these notions, including existing removal-based ones.", "The code for this paper can be found at https://github.com/uclanlp/NLP-Interpretation-Faithfulness .", "2. We propose new robustness-based interpretations inspired by the sensitivity metric and demonstrate their effectiveness under both sensitivity and stability.", "3. We propose a novel paradigm to evaluate interpretations on the dependency parsing task.", "A faithful post-hoc interpretation identifies the important parts of the input a model prediction relies on.", "Let x = [x_1; x_2; ...; x_n] be a sequence of tokens.",
"e(·) denotes the token embedding function.", "An NLP model f takes the embedding matrix e(x) ∈ R^{n×d} as input and provides its prediction f(e(x)) = y.", "Let s_y(e(x)) denote the output score of f(e(x)) on y.", "The exact form of s_y(e(x)) is defined in Appendix D. An interpretation assigns an importance score to each token which indicates its contribution to the model decision.", "We first review the well-established removal-based criterion and emphasize its relation to the two criteria defined in this paper, 1) sensitivity and 2) stability, for which we propose novel paradigms to adapt them to various NLP tasks.", "Removal-based Criterion: A well-established notion of interpretation faithfulness is that the presence of important tokens should have a more meaningful influence on the model's decision than random tokens, quantified by the removal-based criterion.", "We adopt the comprehensiveness and sufficiency scores of DeYoung et al. (2020).", "The comprehensiveness score measures how much the model performance would drop after the set of \"relevant\" tokens identified by an interpretation is removed.", "A higher comprehensiveness score suggests the tokens are more influential to the model output, and thus a more faithful explanation.", "The sufficiency score measures to what extent the original model performance is maintained when we solely preserve relevant tokens.", "A lower sufficiency score means less change in the model prediction, and thus a more faithful explanation.", "See DeYoung et al. (2020) for detailed definitions.", "Note that completely removing input tokens produces incomplete texts.", "Large perturbations of this kind lead to several issues, as pointed out by prior studies (Feng et al., 2018; Bastings and Filippova, 2020).", "Ours: Sensitivity. Instead of removing important tokens, the sensitivity criterion adds local but adversarial noise to the embedding vectors of the important tokens and measures the magnitude of the noise needed to change the model prediction.", "This is inspired by the notion that models should be more sensitive to perturbations added to relevant tokens compared to random or irrelevant tokens.", "From the adversarial robustness perspective (Hsieh et al., 2020), this notion implies that by perturbing the most relevant tokens, we can reach the local decision boundary of a model with the minimum perturbation magnitude.", "Given the sequence of relevant tokens r_k, sensitivity adds a perturbation to its embedding e(r_k) but keeps the remaining token embeddings unchanged.", "Then, it measures the minimal perturbation norm, denoted as δ_{r_k}, that changes the model prediction for this instance: δ_{r_k} = min ‖Δ_{r_k}‖_F s.t. f(e(x) + Δ_{r_k}) ≠ y, where ‖·‖_F is the Frobenius norm of a matrix and Δ_{r_k} ∈ R^{n×d} denotes the perturbation matrix where only the columns for tokens in r_k have non-zero elements.", "Since the exact computation of δ_{r_k} is intractable, we use the PGD attack (Madry et al., 2018) with a binary search to approximate δ_{r_k}.", "A lower δ_{r_k} suggests a more faithful interpretation.", "In practice, we vary the size of r_k, compute multiple δ_{r_k}, and summarize them with the area under the curve (AUC) score.",
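The following sketch shows one way to approximate δ_{r_k} as just described: an L2-bounded PGD attack restricted to the embedding rows of the selected tokens, wrapped in a binary search over the perturbation budget. All names and hyperparameters are illustrative; the paper's exact attack settings are not reproduced here.

```python
import torch

# A minimal sketch of approximating δ_{r_k}: an L2 PGD attack that perturbs
# only the rows of the relevant tokens, inside a binary search over the
# budget. `score_fn` maps an (n, d) embedding matrix to class scores;
# `mask` is an (n, 1) tensor, 1 for the top-k relevant tokens, 0 elsewhere.
# All names and hyperparameters are illustrative.

def pgd_flips_label(score_fn, emb, mask, y, eps, steps=20):
    delta = torch.zeros_like(emb, requires_grad=True)
    alpha = 2.5 * eps / steps
    for _ in range(steps):
        score_fn(emb + delta * mask)[y].backward()  # score of the true label
        with torch.no_grad():
            # step against the gradient to lower the true-label score
            delta -= alpha * delta.grad / (delta.grad.norm() + 1e-12)
            if delta.norm() > eps:                  # project back to the ball
                delta *= eps / delta.norm()
            flipped = score_fn(emb + delta * mask).argmax().item() != y
        delta.grad.zero_()
        if flipped:
            return True
    return False

def approx_sensitivity(score_fn, emb, mask, y, lo=0.0, hi=10.0, iters=10):
    """Binary-search the smallest budget that flips the prediction."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if pgd_flips_label(score_fn, emb, mask, y, mid):
            hi = mid   # a flip exists within this budget
        else:
            lo = mid
    return hi
```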
"Ours: Stability. Another desired property of faithfulness is that a faithful interpretation should not give substantially different importance orders for two input points that the model finds similar.", "To construct a pair of similar inputs, we propose to generate contrast examples to the original one by synonym substitutions.", "A contrast example x' of x satisfies: (1) it has at most k tokens that are different from, but synonymous with, those of x; (2) the prediction score at x' changes by less than ε compared to the score at x.", "The goal of these two conditions is to generate (almost) natural examples where the changes of model outputs are smaller than a threshold ε.", "Given all contrast examples, we search for the one that leads to the largest rank difference D between the importance order for x, m(x), and the altered order m(x'): arg max_{x'} D(m(x), m(x')), s.t. |s_y(e(x)) - s_y(e(x'))| ≤ ε and ‖x - x'‖_0 ≤ k.", "Specifically, we first extract synonyms for each token x_i following Alzantot et al. (2018).", "Then, in decreasing order of m(x), we greedily search for a substitution of each token that induces the largest change in m(x'), and repeat this process until the model output score changes by more than ε or the pre-defined constraint k is reached.", "Finally, we measure the difference D between the two importance ranks using Spearman's rank order correlation (Spearman, 1961).", "We call this criterion stability.", "A higher score indicates that the ranks for this input pair are more similar, and thus a more faithful interpretation.", "Note that instead of using the gradient information of interpretation methods to perturb importance ranks like Ghorbani et al. (2019), our algorithm treats interpretations as black boxes, which makes it applicable to non-differentiable ones.", "Also, compared to Ding and Koehn (2021), who manually construct similar input pairs, our method is a fully automatic one, as suggested by their paper.",
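A minimal sketch of this black-box stability search follows, assuming `interpret`, `score`, and `synonyms` interfaces; since the text treats the interpretation method as a black box, any implementation with these signatures would do.

```python
import numpy as np
from scipy.stats import spearmanr

# A minimal sketch of the black-box stability search described above.
# `interpret(tokens)` returns per-token importance scores, `score(tokens)`
# returns s_y for the original label, and `synonyms(token)` returns
# candidate substitutions (following Alzantot et al., 2018). All three are
# assumed interfaces; k and eps are illustrative.

def stability(tokens, interpret, score, synonyms, k=3, eps=0.05):
    base_scores = np.asarray(interpret(tokens))
    base_output = score(tokens)
    current = list(tokens)
    # Visit positions in decreasing importance; greedily pick the synonym
    # that most perturbs the importance ranking, under the score constraint.
    for pos in np.argsort(-base_scores)[:k]:
        best_trial, best_rho = None, None
        for cand in synonyms(tokens[pos]):
            trial = current[:pos] + [cand] + current[pos + 1:]
            if abs(score(trial) - base_output) > eps:
                continue  # condition (2): stay within the score threshold ε
            rho, _ = spearmanr(base_scores, interpret(trial))
            if best_rho is None or rho < best_rho:  # lower rho = larger change
                best_trial, best_rho = trial, rho
        if best_trial is not None:
            current = best_trial
    rho, _ = spearmanr(base_scores, interpret(current))
    return rho  # higher = more stable, i.e., a more faithful interpretation
```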
"3 Interpretations via Adversarial Robustness Techniques Experiments indicate that existing methods do not work well with the sensitivity and stability metrics (Sec. 4.2). In this section, we define a new class of interpretation methods by adopting techniques in adversarial robustness to remedy this. We first give a brief review of existing interpretation approaches and then introduce our new methods. 3.1 Existing Interpretation Methods We roughly divide the existing methods into three categories: gradient-based methods, reference-based methods, and perturbation-based methods, and discuss representatives of each. Gradient-based methods The first class of methods leverages the gradient at each input token. To aggregate the gradient vector at each token into a single importance score, we consider two methods: 1) using the $L_2$ norm, $\left\| \frac{\partial s_y(e(x))}{\partial e(x_i)} \right\|_2$, referred to as Vanilla Gradient (VaGrad) (Simonyan et al., 2014), and 2) using the dot product of gradient and input, $\left( \frac{\partial s_y(e(x))}{\partial e(x_i)} \right)^\top e(x_i)$, referred to as Gradient × Input (GradInp) (Li et al., 2016). Reference-based methods These methods distribute the difference between the model outputs on a reference point and on the input as the importance score for each token. We consider Integrated Gradient (IngGrad) (Sundararajan et al., 2017) and DeepLIFT (Shrikumar et al., 2017). IngGrad computes the linear integral of the gradients from the reference point to the input. DeepLIFT decomposes the difference between each neuron activation and its 'reference activation' and back-propagates it to each input token. We use DeepLIFT with the Rescale rule. Note that DeepLIFT diverges from IngGrad when multiplicative interactions among tokens exist (Ancona et al., 2018). Perturbation-based methods Methods in this class query model outputs on perturbed inputs.", "We choose Occlusion (Zeiler and Fergus, 2014) and LIME (Ribeiro et al., 2016). Occlusion replaces one token at a time by a reference value and uses the corresponding drop in model performance to represent the importance of each token. LIME uses a linear model to fit model outputs on the neighborhood of the input x and represents token importance by the weights of the trained linear model. 3.2 Proposed Robustness-based Methods We propose two methods inspired by the PGD attack (Madry et al., 2018) and by certifying robustness algorithms (Xu et al., 2020) in adversarial robustness. VaPGD and PGDInp The PGD attack in adversarial robustness considers a small vicinity of the input and takes several 'mini-steps' within the vicinity to search for an adversarial example.", "Considering the token embeddings of the input x, we perform t iterations of the standard PGD procedure starting from $e^{(0)} = e(x)$: $e^{(j)} = P\left(e^{(j-1)} - \alpha \nabla s_y(e^{(j-1)})\right), \; j = 1, 2, \ldots, t$.", "P represents the operation that projects the new instance at each step back to the vicinity of e(x), and $\alpha$ is the step size.", "Intuitively, $e^{(t)} - e(x)$ tells us the descent direction of the model confidence.", "Similar to the gradient-based methods, the importance of each token $x_i$ can either be represented by $\| e^{(t)}_i - e(x_i) \|_2$, where $e^{(t)}_i$ is the i-th column in $e^{(t)}$, referred to as Vanilla PGD (VaPGD), or by $\left( e(x_i) - e^{(t)}_i \right)^\top e(x_i)$, referred to as PGD × Input (PGDInp). Note that, different from the PGD attack we use for approximating the sensitivity criterion, we manually decide the magnitude of the vicinity of e(x) instead of using a binary search.", "We add perturbations to the whole sentence at the same time.", "Also, the final $e^{(t)}$ does not necessarily change the model prediction.",
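A sketch of the PGD iterates $e^{(j)}$ and the two derived scores, mirroring the description above; the step size, projection radius, and interface are illustrative assumptions.

```python
# Hedged sketch of VaPGD/PGDInp: run PGD on the whole sentence inside a
# manually chosen vicinity, then score each token from the movement of
# its embedding row. `model` maps an embedding matrix to logits.
import torch

def pgd_iterates(model, e0, y, t=10, alpha=0.01, radius=1.0):
    e = e0.clone().requires_grad_(True)
    for _ in range(t):
        score = model(e)[y]                 # s_y(e^(j-1))
        grad, = torch.autograd.grad(score, e)
        with torch.no_grad():
            e = e - alpha * grad            # descend model confidence
            diff = e - e0
            if diff.norm() > radius:        # projection P onto the vicinity
                e = e0 + diff * (radius / diff.norm())
        e.requires_grad_(True)
    return e.detach()

def vapgd(e0, et):
    # per-token L2 norm of the total movement, ||e_i^(t) - e(x_i)||_2
    return (et - e0).norm(dim=-1)

def pgdinp(e0, et):
    # (e(x_i) - e_i^(t))^T e(x_i), analogous to Gradient x Input
    return ((e0 - et) * e0).sum(dim=-1)
```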
"Certify Certifying robustness algorithms also consider a vicinity of the original input and aim to provide guaranteed lower and upper bounds on a model output within that region.", "We use the linear relaxation based perturbation analysis (LiRPA) discussed by Shi et al. (2020) and Xu et al. (2020).", "LiRPA looks for two linear functions that bound the model.", "Specifically, LiRPA computes $\underline{W}$, $\overline{W}$, $\underline{b}$, and $\overline{b}$ that satisfy $\sum_i \underline{W}_i\, e(x'_i) + \underline{b} \leq s_y(e(x')) \leq \sum_i \overline{W}_i\, e(x'_i) + \overline{b}$ for any point $e(x')$ that lies within the $L_2$ ball around $e(x)$ with radius $\epsilon$.", "We use the IBP+backward method in Xu et al. (2020).", "It uses Interval Bound Propagation (Gowal et al., 2018; Mirman et al., 2018) to compute bounds on the internal neurons of the model and then constructs the two linear functions with a bound back-propagation process (Zhang et al., 2018; Singh et al., 2019).", "Finally, the importance score of the i-th token in the input is represented by $\underline{W}_i\, e(x_i)$, where $\underline{W}_i$ is the i-th row of $\underline{W}$.", "We call this method Certify.", "Robustness-based vs. Gradient-based Gradient-based methods provide a linear approximation of the model decision boundary at the single input, which is not accurate for non-linear models.", "Robustness-based methods instead search multiple steps in the neighborhood and approximate the steepest descent direction better.", "We also empirically show that robustness-based methods avoid the saturation issue of gradient-based methods, i.e., that the gradient becomes zero at some inputs.", "See Appendix H. Note that VaPGD (PGDInp) degrades to VaGrad (GradInp) when the number of iterations is 1. Robustness-based vs. IngGrad IngGrad leverages the average gradient on a segment between the input and a reference.", "It is likely to neglect the local properties desired by the sensitivity criterion.", "Robustness-based methods instead search in the vicinity of the input, and thus local properties are better preserved.", "See results in Sec. 4.2.", "4 Experiments on Text Classification In this section, we present the results on text classification tasks under the three criteria.", "We find that the correlations between interpretation faithfulness scores based on different criteria are relatively low in some cases.", "The results verify the effectiveness of our new methods.", "Datasets We conduct experiments on three text classification datasets: SST-2 (Socher et al., 2013), Yelp (Zhang et al., 2015), and AGNews (Zhang et al., 2015), following Jain and Wallace (2019)'s preprocessing approach.", "All of them are converted to binary classification tasks.", "SST-2 and Yelp are sentiment classification tasks where models predict whether a review is negative (0) or positive (1).", "AGNews is the task of discriminating between world (0) and business (1) articles.", "See Appendix A for statistics of the three datasets.", "When evaluating interpretation methods, for each dataset, we select 200 random samples (100 samples from class 0 and 100 samples from class 1) from the test set.", "Models For text classification, we consider two model architectures: BERT (Devlin et al., 2019) and BiLSTM (Hochreiter and Schmidhuber, 1997).", "Interpretation Methods Besides our robustness-based interpretations PGDInp, VaPGD, and Certify, we experiment with six others from the three existing categories: VaGrad, GradInp (gradient-based); IngGrad, DeepLIFT (reference-based); and Occlusion, LIME (perturbation-based).", "We also include a random baseline, Random, that randomly assigns importance scores.", "We use comprehensiveness (Comp.), sufficiency (Suff.), sensitivity (Sens.), and stability (Stab.) as metrics.",
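For reference, a minimal sketch of how the two removal-based scores could be computed; `predict_prob` is an assumed helper returning the probability of label y for a token list, not the ERASER reference code.

```python
# Removal-based scores of DeYoung et al. (2020): comprehensiveness drops
# the top-k tokens, sufficiency keeps only them.
def comp_and_suff(tokens, importance, predict_prob, y, k):
    top = set(sorted(range(len(tokens)), key=lambda i: -importance[i])[:k])
    removed = [t for i, t in enumerate(tokens) if i not in top]
    kept = [t for i, t in enumerate(tokens) if i in top]
    p_full = predict_prob(tokens, y)
    comp = p_full - predict_prob(removed, y)   # higher = more faithful
    suff = p_full - predict_prob(kept, y)      # lower  = more faithful
    return comp, suff
```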
"Overall Results The results of interpretations for BERT and BiLSTM are presented in Tables 1 and 2.", "The interpretations' performance is averaged over three runs on models trained from different random seeds.", "The results verify the effectiveness of our proposed robustness-based methods.", "Specifically, VaPGD achieves the best performance under the sensitivity and the stability criteria for both BERT and BiLSTM.", "Our methods also outperform their gradient-based counterparts under the removal-based criteria.", "Especially, when interpreting BERT on SST-2 and AGNews, GradInp has near-random performance.", "PGDInp can avoid these unreasonable behaviors.", "See Appendix H for a qualitative study on this, where we find that PGDInp does not suffer from the saturation issue the way GradInp does.", "Also notice that the superior performance of robustness-based methods is consistent across BERT and BiLSTM+GloVe, which demonstrates that it is not influenced by the embeddings being used.", "However, the performance of the other methods tends to be inconsistent under different measurements.", "For example, under the removal-based criterion, IngGrad performs well for BiLSTM, where it gives four out of the six best numbers.", "But IngGrad has very limited performance under the sensitivity metric, especially for BiLSTM on SST-2 and Yelp.", "Similar issues exist for LIME and Occlusion.", "Also, one might fail to recognize the faithfulness of VaPGD by solely looking at the removal-based criterion.", "Thus, when deploying interpretation methods on real tasks, we advocate for a careful selection of the method based on the underlying faithfulness notion that aligns with your goal.", "Performance Curves To show how the size of the relevant set affects interpretation performance, we plot the comprehensiveness and the sensitivity curves when increasing the number of tokens being removed (perturbed).", "Taking the interpretation of BERT on Yelp as an example, we collect two groups of examples from the test set of Yelp based on input length, where the examples in the two groups are 30±5 and 120±5 tokens long, respectively, and remove (perturb) the top-k most important tokens given by the interpretations.", "Results are shown in Figure 2.
As shown in the figure, Occlusion is able to discover a smaller set of impactful tokens under both metrics.", "However, when the size of the relevant set is increased, the performance of IngGrad under the comprehensiveness metric and the performance of VaPGD under the sensitivity metric gradually surpass Occlusion and the other methods.", "This implies that the two methods are better at identifying a relevant set with more tokens.", "Interpolation Analysis To check whether the comprehensiveness and sensitivity scores can reflect the relative importance of each token in the relevant set, we conduct an interpolation analysis that gradually replaces each token in the relevant set with a random token outside of the set.", "Specifically, we select 50 examples from SST-2 and test on BERT with relevant sets given by LIME and VaPGD.", "For each example, we extract a relevant set consisting of the top four most important tokens and gradually replace each token, from the least to the most important one, with a random token.", "We denote the relevant set at each step as $S_0, S_1, \ldots, S_4$, where $S_0$ is the original relevant set containing the top four tokens and $S_4$ is the set of four random tokens.", "The performance change at step i is represented by $f(i) = \frac{|M(S_0) - M(S_i)|}{|M(S_0) - M(S_4)|}$, where M is the comprehensiveness or sensitivity score.", "We expect that a good metric should induce a monotonically increasing function f.", "Further, f should be strictly convex, as that indicates that the importance of each token is different.", "We plot the curve in Figure 3. The results show that both the comprehensiveness and sensitivity metrics generate a monotonically increasing function, which indicates that they fully consider each token in the relevant set.", "Also, notice that based on the comprehensiveness metric, the contribution of each token tends to be distributed evenly within the relevant set, which contradicts the fact that the tokens in the set contribute differently to the prediction, while the importance rank is better preserved based on the sensitivity metric.",
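A small sketch of the interpolation curve f(i) described above; the metric callable and the list of relevant sets $S_0, \ldots, S_4$ are assumed inputs.

```python
# Normalized metric change when walking from the true relevant set S0 to
# the fully randomized set S4: f(i) = |M(S0)-M(Si)| / |M(S0)-M(S4)|.
def interpolation_curve(metric, sets):
    m0, m_last = metric(sets[0]), metric(sets[-1])
    denom = abs(m0 - m_last) or 1e-12   # guard against a zero denominator
    return [abs(m0 - metric(s)) / denom for s in sets]
```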
"We qualitatively study the notions of faithfulness defined by comprehensiveness (comp.) and sensitivity (sens.), and discuss two main differences.", "First, comp. removes tokens from the input for its evaluations, which could possibly break the interaction between removed tokens and context tokens, and underestimate the importance of context tokens.", "In Figure 1(a), the tokens 'not' and 'hold' together determine the negative sentiment of the sentence.", "Sens. considers both 'not' and 'hold' as important tokens, as one expects.", "However, comp. regards 'hold' as less important than 'will'.", "Second, sens. measures token importance by how much the model performance would change after 'adversarially perturbing' that token.", "In this sense, both positively and negatively pertinent tokens will be deemed important.", "In contrast, comp. only considers positively pertinent ones.", "In Figure 1(b), which is predicted as positive, removing the negative verb 'hate' would not influence the model performance much.", "However, adversarially perturbing 'hate' (e.g., changing 'hate' to a more negative verb) might change the model prediction from positive to negative.", "Thus, sens. prefers interpretations that identify 'hate' as an important token, like VaPGD.", "The full version of Figure 1(b) is in Appendix E. Some contrast examples generated for the stability criterion are presented in Appendix F.", "5 Experiments on Structured Prediction Structured prediction tasks are at the center of NLP applications.", "However, applying interpretation methods and criteria to these tasks is difficult because 1) the required output is a structure instead of a single score.", "It is hard to define the contribution of each token to a structured output, and 2) compared to text classification tasks, removing parts of the input, as removal-based criteria do, would cause more drastic changes to the model predictions as well as to the ground truth.", "Therefore, existing works often conduct experiments only on binary or multi-class text classification tasks.", "To remedy these issues, we investigate interpretations for dependency parsing, with a special focus on analyzing how models resolve the PP attachment ambiguity, which avoids interpreting the structured output as a whole.", "We show that our sensitivity metric is a better metric for dependency parsing, as it causes negligible changes to model outputs compared to removal-based metrics.", "Our paradigm focuses on the PP attachment ambiguity, which involves both syntactic and semantic considerations.", "A dependency parser needs to determine whether the preposition in a PP attaches to the preceding noun phrase NP (NP-attachment) or to the verb phrase VP (VP-attachment) (Hindle and Rooth, 1993).", "The basic structure of the ambiguity is VP NP PP.", "For example, in the sentence 'I saw a cat with a telescope', a parser uses the semantics of the noun phrase 'a telescope' to predict the head of 'with', which is 'saw'.", "If we change 'a telescope' to 'a tail', the head of 'with' would become the preceding noun 'cat'.", "We will later call nouns in PPs like 'telescope' 'disambiguating nouns', as they provide semantic information for a parser to disambiguate the PP attachment ambiguity. The main advantage of this paradigm is that disambiguating nouns can be viewed as proxy ground truths for faithfulness, as parsers must rely on them to make decisions.", "Experimental Setup We use DeepBiaffine, a graph-based dependency parser, as the target model (Dozat and Manning, 2017).", "We extract 100 examples that contain the PP attachment ambiguity from the English Penn Treebank converted to Stanford Dependencies 3.5.0 (PTB-SD).", "We consider the same interpretation methods as before; they assign an importance score to each token in the sentence to indicate how much it impacts the model prediction on the PP attachment arcs.", "We test the faithfulness of the attributions using comprehensiveness and sensitivity.", "See Appendices A to C for details.", "Results are shown in Table 3. Similar to the results on the text classification tasks, we find that perturbation-based methods like LIME and Occlusion perform well under the comprehensiveness score, while VaPGD performs the best under sensitivity.", "PGDInp and Certify are slightly better than GradInp under both metrics.", "Qualitatively, we find that according to the interpretation methods, the important tokens for a PP-attachment decision converge to the preposition itself, the preceding noun or verb, and the disambiguating noun.", "This is close to human expectations.", "An example is shown in Appendix E.",
"Metric Check Removing even a small piece of the input breaks the dependency tree.", "It will be hard to distinguish whether the decision process behind the model has changed or whether the removal of important tokens actually causes the performance drop.", "Thus, we expect a better metric to have less influence on the tree structure of a sentence.", "In Table 4, we show that evaluating interpretations with sensitivity leads to smaller changes in the output dependency tree compared to comprehensiveness, suggesting that sensitivity is a more compatible metric for interpretations of the dependency parsing task.", "Disambiguating Noun Analysis Disambiguating nouns are expected to be identified as important signals by faithful interpretations.", "We summarize how many times they are actually recognized among the top-k most important words by the interpretation methods, where k varies over the intervals 10-20%, ..., 90-100% of the total tokens in an example.", "Results in Figure 4 (which plots where the interpretations place the disambiguating nouns) demonstrate that interpretation methods from the same category have high correlations when extracting disambiguating nouns.", "For example, VaGrad and VaPGD, which leverage gradients only, tend to position disambiguating nouns at the top of their importance lists, which is consistent with human judgments.", "Likewise, the perturbation-based methods, Occlusion and LIME, also put the disambiguating nouns in very similar positions.", "6 Related Work Interpretation methods Various post-hoc interpretation methods have been proposed to explain the behaviors of black-box models.", "These methods can be roughly categorized into three classes: gradient-based methods (Simonyan et al., 2014; Li et al., 2016), which leverage local gradient information; reference-based methods (Shrikumar et al., 2017; Sundararajan et al., 2017), which consider the model output difference between the original point and a reference point; and perturbation-based methods (Ribeiro et al., 2016; Zeiler and Fergus, 2014; Lundberg and Lee, 2017), which query model outputs on perturbed data.", "In our work, we propose new interpretation methods called robustness-based methods, which adopt techniques from the adversarial robustness domain and bridge the gap between the gradient-based and the reference-based methods.", "Evaluating interpretation methods One line of studies explores approaches to evaluate interpretations.", "Several studies propose measurements for faithfulness.", "A large proportion of them occlude tokens identified as important by interpretations and measure the confidence change of models (DeYoung et al., 2020; Jain and Wallace, 2019; Zaidan and Eisner, 2008; Serrano and Smith, 2019).", "Some other works propose to evaluate faithfulness by checking to what extent interpretations satisfy some desired axioms (Ancona et al., 2018; Sundararajan et al., 2017; Shrikumar et al., 2017).", "Besides, Alvarez-Melis and Jaakkola (2018); Ghorbani et al. (2019); Kindermans et al. (2019); Yeh et al.
(2019) reveal limitations in interpretation faithfulness through testing the robustness of interpretations.", "Another group of studies measures the plausibility of interpretations, i.e., whether the explanations conform with human judgments (Doshi-Velez and Kim, 2017; Ribeiro et al., 2016), or assist humans or student models in predicting model behaviors on new data (Hase and Bansal, 2020; Pruthi et al., 2020).", "Note that although there exist many hybrid works that evaluate both the faithfulness and the plausibility of interpretations by combining a suite of diagnostic tests (DeYoung et al., 2020; Atanasova et al., 2020; Liu et al., 2020), Jacovi and Goldberg (2020) advocate explicitly distinguishing between the two measurements.", "In our work, we focus on interpretation faithfulness but consider two new metrics.", "We apply them to the dependency parsing task.", "Also, notice that stability can be regarded as an automatic version of the input consistency test suggested by Ding and Koehn (2021).", "7 Conclusion In our work, we extended the existing definition of interpretation faithfulness with two additional notions and proposed a corresponding quantitative metric for each of them: sensitivity and stability.", "We studied interpretations under the two notions along with the existing one.", "We found that interpretations perform inconsistently across the different criteria.", "We proposed a new class of interpretations, motivated by adversarial robustness techniques, which achieves the best performance under the sensitivity and the stability criteria.", "We further proposed a novel paradigm to evaluate interpretations on the dependency parsing task, which moves beyond the text classification focus of the literature.", "Our study sheds light on understanding the behavior of model interpretations and suggests that the community put more effort into defining an appropriate evaluation pipeline for interpretation faithfulness.", "We thank the anonymous reviewers, UCLA PLUS-Lab, and UCLA-NLP for their helpful feedback.", "This work is supported in part by NSF 1927554, 2008173, 2048280, CISCO, and a Sloan fellowship.", "Ethical Considerations This paper does not have direct social impact.", "However, we believe the model analysis and interpretation techniques discussed in this paper are critical for deploying deep learning based models in real-world applications.", "Following previous work in this direction, such as Jacovi and Goldberg (2020), we advocate carefully considering the explanations obtained from interpretation methods, as they may not always reflect the true reasoning process behind model predictions.", "Besides the three notions of faithfulness discussed in this paper, there are other important aspects that could be applied to evaluate interpretations.", "Also, we are not claiming that the proposed paradigms are perfect faithfulness measurements.", "For example, we recognize that it requires further detailed analysis to determine whether the model itself or the interpretation method leads to low performance on the stability metric, although we do try to make sure that model behaviors do not change substantially within an input pair.", "Moreover, the experiments in this paper are all based on mainstream English corpora.", "Although our techniques are not language specific, there could be different conclusions given the varying properties of languages.", "For example, the discussion of dependency parsing could easily be affected by the language one considers." ]
[ "abstain", "abstain", "objective", "result", "objective", "objective", "method", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "objective", "method", "other", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "other", "objective", "method", "result", "objective", "objective", "method", "other", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "An essential step in FrameNet Semantic Role Labeling is the Frame Identification (FrameId) task, which aims at disambiguating a situation around a predicate.", "Whilst current FrameId methods rely on textual representations only, we hypothesize that FrameId can profit from a richer understanding of the situational context.", "Such contextual information can be obtained from common sense knowledge, which is more present in images than in text.", "In this paper, we extend a state-of-the-art FrameId system in order to effectively leverage multimodal representations.", "We conduct a comprehensive evaluation on the English FrameNet and its German counterpart SALSA.", "Our analysis shows that for the German data, textual representations are still competitive with multimodal ones.", "However on the English data, our multimodal FrameId approach outperforms its unimodal counterpart, setting a new state of the art.", "Its benefits are particularly apparent in dealing with ambiguous and rare instances, the main source of errors of current systems.", "For research purposes, we release", "(a) the implementation of our system,", "(b) our evaluation splits for SALSA 2.0, and", "(c) the embeddings for synsets and IMAGINED words.", "1 1 Introduction FrameNet Semantic Role Labeling analyzes sentences with respect to frame-semantic structures based on FrameNet (Fillmore et al., 2003).", "Typically, this involves two steps: First, Frame Identification (FrameId), capturing the context around a predicate ( frame evoking element ) and assigning a frame, basically a word sense label for a prototypical situation, to it.", "Second, Role Labeling, i.e. identifying the participants ( fillers ) of the predicate and connecting them with predefined framenamed alphabetically 1 https://github.com/UKPLab/ naacl18-multimodal-frame-identification specific role labels.", "FrameId is crucial to the success of Semantic Role Labeling as FrameId errors account for most wrong predictions in current systems (Hartmann et al., 2017).", "Consequently, improving FrameId is of major interest.", "The main challenge and source of prediction errors of FrameId systems are ambiguous predicates, which can evoke several frames, e.g., the verb sit evokes the frame Change posture in a context like a person is sitting back on a bench' , while it evokes Being located when a company is sitting in a city' .", "Understanding the predicate context, and thereby the context of the situation (here, Who / what is sitting where?' 
), is crucial to identifying the correct frame for ambiguous cases.", "State-of-the-art FrameId systems model the situational context using pretrained distributed word embeddings (see Hermann et al., 2014).", "Hence, it is assumed that the context of the situation is explicitly expressed in words.", "However, language understanding involves implicit knowledge, which is not mentioned but still seems obvious to humans, e.g., 'people can sit back on a bench, but companies cannot', 'companies are in cities'.", "Such implicit common sense knowledge is obvious enough to be rarely expressed in sentences, but is more likely to be present in images.", "Figure 1 takes the ambiguous predicate 'sit' to illustrate how images can provide access to implicit common sense knowledge crucial to FrameId.", "(Figure 1: Example sentences demonstrating the potential benefit of images for ambiguous predicates.)", "When looking at the semantics of events, FrameId has commonalities with event prediction tasks.", "These aim at linking events and their participants to script knowledge and at predicting events in narrative chains.", "Ahrendt and Demberg (2016) argue that knowing about the participants helps to identify the event, which suggests the need for implicit context knowledge also for FrameId.", "This specifically applies to images, which can reflect properties of the participants of a situation in an inherently different way, see Fig. 1.", "We analyze whether multimodal representations grounded in images can encode common sense knowledge to improve FrameId.", "To that end, we extend SimpleFrameId (Hartmann et al., 2017), a recent FrameId model based on distributed word embeddings, to the multimodal case and evaluate for English and German.", "Note that there is a general lack of evaluation of FrameId systems for languages other than English.", "This is problematic, as other languages pose different challenges; German, for example, features long-distance dependencies.", "Also, word embeddings trained on different languages have different strengths on ambiguous words.", "We elaborate on insights from using different datasets by language.", "Contributions.", "(1) We propose a pipeline and architecture of a FrameId system, extending state-of-the-art methods with the option of using implicit multimodal knowledge.", "It is flexible toward modality and language, reaches state-of-the-art accuracy on English FrameId data, clearly outperforming several baselines, and sets a new state of the art on German FrameId data.", "(2) We discuss properties of language and meaning with respect to implicit knowledge, as well as the potential of multimodal representations for FrameId.", "(3) We perform a detailed analysis of FrameId systems.", "First, we develop a new strong baseline.", "Second, we suggest novel evaluation metrics that are essential for assessing ambiguous and rare frame instances.", "We show our system's advantage over the strong baseline in this regard and thereby improve upon the main source of errors.", "Third, we analyze gold annotated datasets for English and German, showing their different strengths.", "Finally, we release the implementation of our system, our evaluation splits for SALSA 2.0, and the embeddings for synsets and IMAGINED words.", "2 Related Work State-of-the-art FrameId systems rely on pretrained word embeddings as input (Hermann et al., 2014).", "This proved to be helpful: those systems consistently outperform the previously leading FrameId system SEMAFOR (Das et al., 2014), which is based on a handcrafted set of features.",
"The open source neural network-based FrameId system SimpleFrameId (Hartmann et al., 2017) is conceptually simple, yet yields competitive accuracy.", "Its input representation is a concatenation of the predicate's pretrained embedding and an embedding of the predicate context.", "The dimension-wise mean of the pretrained embeddings of all words in the sentence is taken as the context.", "In this work, we first aim at improving the representation of the predicate context using multimodal embeddings, and second at assessing the applicability to another language, namely German.", "Common sense knowledge for language understanding.", "Situational background knowledge can be described in terms of frames (Fillmore, 1985) and scripts (Schank and Abelson, 2013).", "Ahrendt and Demberg (2016) report that knowing about a script's participants aids in predicting events linked to script knowledge.", "Transferring this insight to FrameId, we assume that a rich context representation helps to identify the sense of ambiguous predicates.", "Addressing ambiguous predicates where participants have different properties depending on the context, Feizabadi and Pado (2012) give some examples where the location plays a discriminating role as participant: motion verbs that have both a concrete motion sense and a more abstract sense in the cognitive domain, e.g., struggle , lean , follow .", "Frame identification in German.", "Shalmaneser (Erk and Pado, 2006) is a toolbox for semantic role assignment on FrameNet schemata of English and German (integrated into the SALSA project for German).", "Shalmaneser uses a Naive Bayes clas-sifier to identify frames, together with features for a bag-of-word context with a window over sentences, bigrams, and trigrams of the target word and dependency annotations.", "They report an F1 of 75.1 % on FrameNet 1.2 and 60 % on SALSA 1.0.", "These scores are difficult to compare against more recent work as the evaluation uses older versions of datasets and custom splits.", "Shalmaneser 1482 requires software dependencies that are not available anymore, hindering application to new data.", "To the best of our knowledge, there is no FrameId system evaluated on SALSA 2.0.", "Johannsen et al. 
(2015) present a simple but weak translation baseline for cross-lingual FrameId.", "A SEMAFOR-based system is trained on English FrameNet and tested on German Wikipedia sentences, translated word-by-word to English.", "This translation baseline reaches an F1 score of 8.5 % on the German sentences when translated to English.", "The performance of this weak translation baseline is worse than that of another simple baseline: a 'most frequent sense' baseline computing majority votes for German (and many other languages) reaches an F1 score of 53.0 % on the German sentences.", "This shows that pure translation does not help with FrameId and, furthermore, indicates a large room for improvement for FrameId in languages other than English.", "There is a growing interest in Natural Language Processing in enriching traditional approaches with knowledge from the visual domain, as images capture qualitatively different information compared to text.", "Regarding FrameId, to the best of our knowledge, multimodal approaches have not yet been investigated.", "For other tasks, multimodal approaches based on pretrained embeddings are reported to be superior to unimodal approaches.", "Textual embeddings have been enriched with information from the visual domain, e.g., for Metaphor Identification (Shutova et al., 2016), Question Answering (Wu et al., 2017), and Word Pair Similarity (Collell et al., 2017).", "The latter presents a simple but effective way of extending textual embeddings with so-called multimodal IMAGINED embeddings via a learned mapping from language to vision.", "We apply the IMAGINED method to our problem.", "In this work, we aim to uncover whether representations that are grounded in images can help to improve the accuracy of FrameId.", "Our application case of FrameId is more complex than a comparison on the word-pair level, as it considers a whole sentence in order to identify the predicate's frame.", "However, we see a potential for multimodal IMAGINED embeddings to help: their mapping from text to multimodal representations is learned from images for nouns.", "Such nouns, in turn, are candidates for role fillers of predicates.", "In order to identify the correct sense of an ambiguous predicate, it could help to enrich the representation of the context situation with multimodal embeddings for the entities that are linked by the predicate.", "Our system builds upon the SimpleFrameId (Hartmann et al., 2017) system for English FrameId based on textual word embeddings.", "We extend it to multimodal and multilingual use cases; see Fig.
2 for a sketch of the system pipeline.", "Same as SimpleFrameId, our system is based on pretrained embeddings to build the input representation out of the predicate context and the predicate itself.", "However, different from SimpleFrameId, our representation of the predicate context is multimodal: beyond textual embeddings, we also use IMAGINED and visual embeddings.", "More precisely, we concatenate all unimodal representations of the predicate context, which in turn are the unimodal mean embeddings of all words in the sentence.", "We use concatenation for fusing the different embeddings, as it is the simplest yet successful fusion approach (Bruni et al., 2014; Kiela and Bottou, 2014).", "The input representation is processed by a two-layer Multilayer Perceptron (MLP, Rosenblatt, 1958), where we adapt the number of hidden nodes to the increased input size and apply dropout to all hidden layers to prevent overfitting (Srivastava et al., 2014).", "Each node in the output layer corresponds to one frame-label class.", "We use rectified linear units (Nair and Hinton, 2010) as the activation function for the hidden layers, and a softmax for the output layer, yielding a multinomial distribution over frames.", "(Figure 2: Sketch of the pipeline. (1) Data: sentence with predicate. (2) Mapping: words to embeddings. (3) Representation: concatenation of modality-specific means. (4) Classifier: neural network predicting frame.)", "We take its arg max as the final prediction at test time.", "Optionally, filtering based on the lexicon can be performed on the predicted probabilities for each frame label.", "The development set was used to determine the architecture and hyperparameters, see Sec. 6.",
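A minimal PyTorch sketch of the described classifier; the dimensions, dropout rate, and the lexicon-filtering helper are illustrative assumptions, and the original implementation may differ.

```python
# Two-layer MLP over the concatenated multimodal input (predicate
# embedding + per-modality mean context embeddings), with optional
# lexicon filtering of the predicted frame distribution.
import torch
import torch.nn as nn

class FrameIdMLP(nn.Module):
    def __init__(self, in_dim, hidden, n_frames, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, n_frames),  # softmax is folded into the loss
        )

    def forward(self, x):
        return self.net(x)

def predict(model, x, allowed_frames=None):
    """Arg max over frame logits, optionally restricted to the frames
    the lexicon lists for the predicate."""
    logits = model(x)
    if allowed_frames is not None:
        mask = torch.full_like(logits, float("-inf"))
        mask[..., allowed_frames] = 0.0
        logits = logits + mask
    return logits.argmax(dim=-1)
```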
"Majority baselines.", "We propose a new strong baseline based on a combination of two existing ones.", "These are: first, the most-frequent-sense baseline using the data majority (Data Baseline) to determine the most frequent frame for a predicate; second, the baseline introduced by Hartmann et al. (2017) using a lexicon (Lexicon Baseline) to consider the data counts of the Data Baseline only for those frames available for a predicate.", "We propose to combine them into a Data-Lexicon Baseline, which uses the lexicon for unambiguous predicates and the data majority for ambiguous ones.", "This way, we trust the lexicon for unambiguous predicates but not for ambiguous ones, where we rather consider the data majority.", "Comparing a system to these baselines helps to see whether it just memorizes the data majority or the lexicon, or actually captures more.", "All majority baselines strongly outperform the weak translation baseline of Johannsen et al. (2015) when training the system on English data and evaluating it on German data.", "Textual embeddings for words.", "We use the 300-dimensional GloVe embeddings (Pennington et al., 2014) for English, and the 100-dimensional embeddings of Reimers et al. (2014) for German.", "GloVe and Reimers have been trained on the Wikipedia of their targeted language and on additional newswire text to cover more domains, resulting in similarly low out-of-vocabulary scores.", "Visual embeddings for synsets.", "We obtain visual embeddings for WordNet synsets (Fellbaum, 1998): we apply the pretrained VGG-m-128 Convolutional Neural Network model (Chatfield et al., 2014) to images for synsets from ImageNet (Deng et al., 2009), extract the 128-dimensional activation of the last layer (before the softmax), and then L2-normalize it.", "We use the images of the WN9-IMG dataset (Xie et al., 2017), which links WordNet synsets to a collection of ten ImageNet images each.", "We average the embeddings of all images corresponding to a synset, leading to a vocabulary size of 6555 synsets.", "All synsets in WN9-IMG are part of triples of the form entity-relation-entity, i.e., synset-relation-synset.", "Such synset entities that are participants of relations with other synset entities are candidates for incorporating the role fillers of predicates and, therefore, may help to find the correct frame for a predicate (see Sec. 5 for details about sense disambiguation).", "Linguistic embeddings for synsets.", "We obtain 300-dimensional linguistic synset embeddings: we apply the AutoExtend approach (Rothe and Schutze, 2015) to GloVe embeddings and produce synset embeddings for all synsets having at least one synset lemma in the GloVe embeddings.", "This leads to a synset vocabulary size of 79,141.", "Linguistic synset embeddings are based on textual word embeddings and the synset information known to the knowledge base WordNet; thus, they complement the visual synset embeddings.", "IMAGINED embeddings for words.", "We use the IMAGINED method (Collell et al., 2017) for learning a mapping function: it maps from the word embedding space to the visual embedding space, given those words that occur in both pretrained embedding spaces (7220 for English and 7739 for German).", "To obtain the English synset lemmas, we extract all lemmas of a synset and keep those that are nouns.", "We automatically translate the English nouns to German nouns using the Google Translate API to obtain the corresponding German synset lemmas.", "The IMAGINED method is promising for cases where one embedding space (here, the textual one) has many instances without correspondence in the other embedding space (here, the visual one), but the user still aims at obtaining instances of the first in the second space.", "We aim to obtain visual correspondences for the textual embeddings in order to incorporate regularities from images into our system.", "The mapping is a nonlinear transformation using a simple neural network.", "The objective is to minimize the cosine distance between each mapped representation of a word and the corresponding visual representation.", "Finally, a multimodal representation for any word can be obtained by applying this mapping to the word embedding.",
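A hedged sketch of how the IMAGINED mapping could be trained: a small network from the word space to the visual space, fitted on the shared vocabulary by minimizing cosine distance. The hidden size, activation, and training-loop details are assumptions, not the settings of Collell et al. (2017).

```python
# Train a nonlinear word->visual mapping; applying the trained mapper to
# any word embedding yields its IMAGINED embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_imagined(word_vecs, visual_vecs, hidden=300, epochs=50, lr=1e-3):
    # word_vecs: (n, d_text), visual_vecs: (n, d_vis), aligned row-wise
    mapper = nn.Sequential(
        nn.Linear(word_vecs.size(1), hidden), nn.Tanh(),
        nn.Linear(hidden, visual_vecs.size(1)),
    )
    opt = torch.optim.Adam(mapper.parameters(), lr=lr)
    for _ in range(epochs):
        pred = mapper(word_vecs)
        # cosine distance = 1 - cosine similarity, averaged over words
        loss = 1 - F.cosine_similarity(pred, visual_vecs, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return mapper
```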
"English FrameId: Berkeley FrameNet.", "The Berkeley FrameNet (Baker et al., 1998; Ruppenhofer et al., 2016) is an ongoing project for building a large lexical resource for English with expert annotations based on frame semantics (Fillmore, 1976).", "It consists of two parts, a manually created lexicon that maps predicates to the frames they can evoke, and fully annotated texts (fulltext).", "The mapping can be used to facilitate the frame identification for a predicate in a sentence, e.g., a sentence in the fulltext corpus.", "Table 1 contains the lexicon statistics, Table 2 (top left) the dataset statistics.", "In this work, we use FrameNet 1.5 to ensure comparability with the previous state of the art, with the common evaluation split for FrameId systems introduced by Das and Smith (2011) (with the development split of Hermann et al., 2014).", "Due to having a single annotation as the consensus of experts, it is hard to estimate a performance bound of a single human for the fulltext annotation.", "German FrameId: SALSA.", "The SALSA project (Burchardt et al., 2006; Rehbein et al., 2012) is a completed annotation project, which serves as the German counterpart to FrameNet.", "Its annotations are based on FrameNet up to version 1.2.", "SALSA adds proto-frames to properly annotate senses that are not covered by the English FrameNet.", "For a more detailed description of differences between FrameNet and SALSA, see Ellsworth et al. (2004); Burchardt et al. (2009).", "SALSA also provides a lexicon (see Table 1 for statistics) and fully annotated texts.", "There are two releases of SALSA: 1.0 (Burchardt et al., 2006), used for Shalmaneser (Erk and Pado, 2006) (cf. Sec. 2.1), and the final release 2.0 (Rehbein et al., 2012), which contains more annotations and adds nouns as predicates.", "We use the final release.", "SALSA has no standard evaluation split; Erk and Pado (2006) used an undocumented random split.", "Also, it is not possible to follow the splitting method of Das and Smith (2011), as SALSA project distributions do not map to documents.", "We suggest splitting based on sentences, i.e., all annotations of a sentence are in the same set, to avoid mixing training and test sets.", "We assign sentences to 100 buckets based on their IDs and create a 70/15/15 split for training, development, and test sets based on the bucket order.", "This procedure allows future work to be evaluated on the same data.", "Table 2 (bottom left) shows the dataset statistics.", "Synsets in FrameNet and SALSA.", "To prepare the datasets for working with the synset embeddings, we sense-disambiguate all sentences using the API of BabelNet (Navigli and Ponzetto, 2010), which returns multilingual synsets.", "We thus depend on the state-of-the-art accuracy of BabelNet (Navigli and Ponzetto, 2012) when using synset embeddings on sense-disambiguated sentences.", "However, this dependence does not hold when applying IMAGINED embeddings to sentences, as the mapping from words to IMAGINED embeddings does not need any synsets labeled in the sentences.", "After sense disambiguation, some sentences do not contain any synset available in our synset embeddings.", "The statistics of those sentences that have at least one synset embedding (visual or linguistic AutoExtend) are given in Table 2 (right).", "We contrast our system's performance for context representations based on unimodal (textual) versus multimodal (textual and visual) embeddings.", "Also, we compare English against German data.", "We run the prediction ten times to reduce noise in the results.", "Table 2: Dataset statistics for FrameNet 1.5 fulltext with the Das split and for SALSA 2.0 with our split, giving the number of sentences and frames (as used in our experiments) and the reduced number of sentences with at least one visual (syns-Vis) or linguistic (syns-AutoExt) synset embedding. FrameNet: train 2819 sentences / 15406 frames / 1310 syns-Vis / 2714 syns-AutoExt; dev 707 / 4593 / 320 / 701; test 2420 / 4546 / 913 / 2318. SALSA: train 16852 / 26081 / 4707 / 16736; dev 3561 / 5533 / 1063 / 3540; test 3605 / 5660 / 1032 / 3570.", "Use of lexicon.", "We evaluate our system in two settings: with and without lexicon, as suggested by Hartmann et al. (2017).",
"In the with-lexicon setting, the lexicon is used to reduce the choice of frames for a predicate to only those listed in the lexicon.", "If the predicate is not in the lexicon, this corresponds to the without-lexicon setting, where the choice has to be made amongst all frames.", "Evaluation metrics.", "FrameId systems are usually compared in terms of accuracy, which we adopt for comparability.", "As a multiclass classification problem, FrameId has to cope with a strong variation in the annotation frequency of frame classes.", "Minority classes are frames that occur only rarely; majority classes occur frequently.", "Note that the accuracy is biased toward majority classes, explaining the success of majority baselines on imbalanced datasets such as FrameNet.", "Alternatively, the F1 score is sometimes reported, as it takes a complementary perspective.", "The F-measure is the harmonic mean of precision and recall, measuring exactness and completeness of a model, respectively.", "In previous work, micro-averaging is used to compute F1 scores.", "Yet, similar to the accuracy, micro-averaging introduces a bias toward majority classes.", "We compute F1-macro instead, for which precision and recall are computed for each class and averaged afterwards, giving equal weight to all classes.", "Taken together, this yields scores that underestimate (F1-macro) and overestimate (average accuracy) on imbalanced datasets.", "Previous work just used the overestimate, such that a comparison is possible in terms of accuracy in the with-lexicon setting.", "We suggest using F1-macro additionally to analyze rare, but interesting classes.", "Thus, a comparison within our work is possible for both aspects, giving a more detailed picture.", "Note that previous work reports one score whilst we report the mean score of ten runs.",
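The two evaluation views are directly available in scikit-learn; the helper below is illustrative.

```python
# Accuracy overweights majority frames; macro-F1 weights every frame
# class equally, exposing performance on rare classes.
from sklearn.metrics import accuracy_score, f1_score

def evaluate(y_true, y_pred):
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1_macro": f1_score(y_true, y_pred, average="macro"),
    }
```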
"Hyperparameters.", "We identified the best hyperparameters for the English and German data based on the respective development sets.", "The Multilayer Perceptron architecture performed consistently better than a more complex Gated Recurrent Unit model (Cho et al., 2014).", "(Footnote 2, differences in hyperparameters to SimpleFrameId: 'nadam' as optimizer instead of 'adagrad', dropout on hidden layers and early stopping to regularize training, and a different number of hidden units, optimized by grid search.)", "We found that more than two hidden layers did not bring any improvement over two layers; using dropout on the hidden layers helped to increase the accuracy.", "Among the various input representations, a concatenation of the representations of context and predicate was the best amongst others, including dependencies, lexicon indicators, and part-of-speech tags.", "Training is done using Nesterov-accelerated Adam (Nadam, Dozat, 2016) with default parameters.", "A batch size of 128 is used.", "Learning stops if the development accuracy has not improved for four epochs, and the learning rate is reduced by a factor of two if there has not been any improvement for two epochs.", "First, we report our results on English data (see Table 3, top) and then we compare against German data (see Table 3, bottom).", "Baseline.", "Our new strong Data-Lexicon Baseline reaches a considerable accuracy of 86.32 %, which is hard to beat for trained models.", "Even the most recent state of the art only beats it by about two points: 88.41 % (Hermann et al., 2014).", "However, the accuracy of the baseline drops for ambiguous predicates (69.73 %), and the F1-macro score reveals its weakness toward minority classes (drop from 64.54 % to 37.42 %).", "Unimodal.", "Our unimodal system trained and evaluated on English data slightly exceeds the accuracy of the previous state of the art (88.66 % on average versus 88.41 % for Hermann et al., 2014); our best run's accuracy is 89.35 %.", "Especially on ambiguous predicates, i.e., the difficult and therefore interesting cases, our average accuracy surpasses that of previous work by more than one point (the best run by almost three points).", "Considering the proposed F1-macro score for an assessment of the performance on minority classes and ambiguous predicates reveals our main improvement: our system substantially outperforms the strong Data-Lexicon Baseline, demonstrating that our system differs from memorizing majorities and actually improves minority cases.", "Multimodal.", "Table 3: FrameId results (in %) on English (upper) and German (lower), with and without using the lexicon; each setting reports acc / acc amb / F1-m / F1-m amb. FrameNet, with lexicon: Data Baseline 79.06 / 69.73 / 33.00 / 37.42; Lexicon Baseline 79.89 / 55.52 / 65.61 / 30.95; Data-Lexicon Baseline 86.32 / 69.73 / 64.54 / 37.42; Hermann et al. (2014) 88.41 / 73.10 / - / -; Hartmann et al. (2017) 87.63 / 73.80 / - / -; our uni 88.66 / 74.92 / 76.65 / 53.86; our mm (im, synsV) 88.82 / 75.28 / 76.77 / 54.80. FrameNet, without lexicon: Data Baseline 79.06 / 69.73 / 33.00 / 37.42; Hartmann et al. (2017) 77.49 acc; our uni 79.96 / 71.70 / 57.07 / 47.40; our mm (im, synsV) 81.21 / 72.51 / 57.81 / 49.38. SALSA, with lexicon: Data Baseline 77.00 / 70.51 / 37.40 / 28.87; Lexicon Baseline 61.57 / 52.5 / 19.36 / 15.68; Data-Lexicon Baseline 77.16 / 70.51 / 38.48 / 28.87; our uni 80.76 / 75.59 / 48.42 / 41.38; our mm (im) 80.71 / 75.58 / 48.29 / 41.19. SALSA, without lexicon: Data Baseline 77.00 / 70.51 / 37.40 / 28.87; our uni 80.59 / 75.52 / 47.64 / 41.17; our mm (im) 80.51 / 75.51 / 47.36 / 40.93.", "From a range of multimodal context representations as extensions to our system (Table 3), we observe that the improvements are more pronounced for difficult cases, such as rare and ambiguous cases (one point improvement in F1-macro), as well as in the absence of a lexicon (up to two points improvement).", "Significance tests.", "We conduct a single-sample t-test to judge the difference between the previous state-of-the-art accuracy (Hermann et al., 2014) and our unimodal approach.", "The null hypothesis (the expected value of our sample of ten accuracy scores equals the previous state-of-the-art accuracy) is rejected at a significance level of α = 0.05 (p = 0.0318).", "In conclusion, even our unimodal approach outperforms the prior state of the art in terms of accuracy.", "To judge the difference between our unimodal and our multimodal approach, we conduct a t-test for the means of the two independent samples.", "The null hypothesis states identical expected values for our two samples of ten accuracy scores.", "Regarding the setting with lexicon, the null hypothesis cannot be rejected at a significance level of α = 0.05 (p = 0.2181).", "However, concerning the accuracy scores without using the lexicon, the null hypothesis is rejected at a significance level of α = 0.05 (p < 0.0001).", "In conclusion, the multimodal approach has a slight overall advantage and, interestingly, has a considerable advantage over the unimodal one when confronted with the more difficult setting of not using the lexicon.",
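Both tests correspond to standard SciPy calls; the score arrays below are labeled placeholders, not the actual run results.

```python
# One-sample t-test of the ten unimodal accuracies against the fixed
# state-of-the-art value, and a two-sample t-test between the unimodal
# and multimodal runs.
from scipy.stats import ttest_1samp, ttest_ind

uni_scores = [88.7, 88.5, 89.4, 88.2, 88.9,
              88.6, 88.8, 88.3, 89.0, 88.2]   # placeholder values
mm_scores = [88.9, 88.7, 89.1, 88.6, 89.0,
             88.8, 88.9, 88.5, 89.2, 88.5]    # placeholder values

t1, p1 = ttest_1samp(uni_scores, popmean=88.41)  # vs. Hermann et al. (2014)
t2, p2 = ttest_ind(uni_scores, mm_scores)        # unimodal vs. multimodal
reject_h0 = p1 < 0.05                            # alpha = 0.05
```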
"German results.", "Our system evaluated on German data sets a new state of the art on this corpus with 80.76 % accuracy, outperforming the baselines (77.16 %; no other system has been evaluated on this dataset).", "The difference in F1-macro between the majority baselines and our system is smaller than for the English FrameNet.", "This indicates that the majorities learned from the data are more powerful in the German case with SALSA than in the English case, when comparing against our system.", "Multimodal context representations cannot show an improvement for SALSA with this general dataset.", "Lexicon.", "We report results achieved without the lexicon to evaluate independently of its quality (Hartmann et al., 2017).", "On English data, our system outperforms Hartmann et al. (2017) by more than two points in accuracy, and we achieve a large improvement over the Data Baseline.", "Comparing the F1-macro with and without lexicon, it can be seen that the additional information stored in the lexicon strongly increases the score, by about 20 points, for English data.", "For German data, the increase of F1-macro with lexicon versus without is small (one point).", "Insights from the baseline.", "Many indicators point to our approach not just learning the data majority: our trained models have better F1-macro and especially much higher ambiguous F1-macro scores with lexicon.", "This clearly suggests that our system is capable of acquiring more expressiveness than the baselines do by counting majorities.", "Impact of multimodal representations.", "Multimodal context representations improve results compared to unimodal ones.", "It helps to incorporate visual common sense knowledge about the situation's participants.", "Referring back to our example of the ambiguous predicate 'sit', the multimodal approach is able to transfer the knowledge to the test sentence 'Al-Anbar in general, and Ramadi in particular, are sat with the Americans in Jordan.' by correctly identifying the frame Being located, whilst the unimodal approach fails with predicting Change posture.", "The increase in performance when adding information from visual synset embeddings is not simply due to the higher dimensionality of the embedding space.", "To verify this, we further investigate extending the unimodal system with random word embeddings.", "This leads to a drop in performance compared to using just the unimodal representations or using these in combination with the proposed multimodal embeddings, especially in the setting without lexicon.", "Interestingly, replacing the visual synset embeddings with linguistic synset embeddings (AutoExtend by Rothe and Schutze (2015), see Sec.
4) in further investigations also showed that visual embeddings yield better performance.", "This points out the potential for incorporating even more image evidence to extend our approach.", "Difficulties for German data.", "The impact of multimodal context representations is more difficult to interpret for the German dataset.", "The fact that they have not helped here may be due to mismatches when translating the English nouns of a synset to German in order to train the IMAGINED embeddings.", "Here, we see room for future work to improve on simple translation with sense-based translations.", "In SALSA, a smaller portion of sentences has at least one synset embedding, see Table 2.", "For further investigation, we reduced the dataset to only those sentences actually containing a synset embedding.", "Then, minor improvements of the multimodal approach were visible for SALSA.", "This points out that a dataset containing more words linking to implicit knowledge in images (visual synset embeddings) can profit more from visual and IMAGINED embeddings.", "Impact of lexicon: English versus German.", "Even if both lexica define approximately the same number of frames (see Table 1), the number of defined lexical units (distinct predicate-frame combinations) in SALSA is smaller.", "This leads to a lexicon that is a magnitude smaller than the FrameNet lexicon.", "Thus, the initial situation for the German case is more difficult.", "The impact of the lexicon for SALSA is smaller than for FrameNet (best visible in the increase of F1-macro with using the lexicon compared to without), which can be explained by the larger percentage of ambiguous predicates (especially those evoking proto-frames) and the smaller size of the lexicon.", "The evaluation on two different languages highlights the impact of an elaborate, manually created lexicon: it boosts the performance on frame classes that are less present in the training data.", "English FrameId benefits from the large high-quality lexicon, whereas German FrameId currently lacks a high-quality lexicon that is large enough to benefit the FrameId task.", "Dataset properties: English versus German.", "To better understand the influence of the dataset on the prediction errors, we further analyze the errors of our approach (see Table 4) following Palmer and Sporleder (2010).", "Table 4: Error analysis of the best uni- and multimodal systems; each setting (with / without lexicon) reports correct / e_uns / e_unsLab / e_n. FrameNet: our uni 89.35 / 0.40 / 3.04 / 7.22 with lexicon and 80.36 / 1.32 / 7.68 / 10.65 without; our mm (im, synsV) 89.79 / 0.58 / 3.55 / 6.08 and 80.63 / 1.91 / 8.50 / 8.96. SALSA: our uni 80.99 / 0.49 / 0.97 / 17.54 and 80.80 / 0.49 / 1.10 / 17.61; our mm (im) 81.24 / 1.94 / 1.88 / 14.94 and 80.96 / 1.94 / 2.05 / 15.05.", "A wrong prediction can either be a normal classification error, or it can be the result of an instance that was unseen at training time, which means that the error is due to the training set.", "The instance can either be completely unseen or unseen with the target label.", "We observe that FrameNet has larger issues with unseen data compared to SALSA, especially data that was unseen with one specific label but seen with another label.", "This is due to the uneven split of the documents in FrameNet, leading to data from different source documents and domains in the training and test split.", "SALSA does not suffer from this problem as much, since the split was performed differently.", "It would be worth considering the same splitting method for FrameNet.", "As stated previously, FrameId has commonalities with event
prediction.", "Since identifying frames is only one way of capturing events, our approach is transferable to other schemes of event prediction and visual knowledge about participants of situations should be beneficial there, too.", "It would be interesting to evaluate the multimodal architecture on other predicate-argument frameworks, e.g., script knowledge or VerbNet style Semantic Role Labeling.", "In particular the exploration our findings on visual contributions to FrameId in the context of further event prediction tasks forms an interesting next step.", "More precisely, future work should consider using implicit knowledge not only from images of the participants of the situation, but also from the entire scene in order to directly capture relations between the participants.", "This could provide access to a more holistic understanding of the scene.", "The following visual tasks with accompanying datasets could serve as a starting point:", "(a) visual Verb Sense Disambiguation with the VerSe dataset (Gella et al., 2016) and", "(b) visual SRL with several datasets, e.g., imSitu (Yatskar et al., 2016) (linked to FrameNet), V-COCO (Gupta and Malik, 2015) (verbs linked to COCO), VVN (Ronchi and Perona, 2015) (visual VerbNet) or even SRL grounded in video clips for the cooking-domain (Yang et al., 2016) and visual Situation Recognition (Mallya and Lazebnik, 2017).", "Such datasets could be used for extracting visual embeddings for verbs or even complex situations in order to improve the visual component in the embeddings for our FrameId system.", "Vice versa: visual tasks could profit from multimodal approaches (Bal-trusaitis et al., 2017) in a similar sense as our textual task, FrameId, profits from additional information encoded in further modalities.", "Moreover, visual SRL might profit from our multimodal FrameId system to a similar extend as any FrameNet SRL task profits from correctly identified frames (Hartmann et al., 2017).", "Regarding the combination of embeddings from different modalities, we suggest to experiment with different fusion strategies complementing the middle fusion (concatenation) and the mapping (IMAGINED method).", "This could be a late fusion at decision level operating like an ensemble.", "In this work, we investigated multimodal representations for Frame Identification (FrameId) by incorporating implicit knowledge, which is better reflected in the visual domain.", "We presented a flexible FrameId system that is independent of modality and language in its architecture.", "With this flexibility it is possible to include textual and visual knowledge and to evaluate on gold data in different languages.", "We created multimodal representations from textual and visual domains and showed that for English FrameNet data, enriching the textual representations with multimodal ones improves the accuracy toward a new state of the art.", "For German SALSA data, we set a new state of the art with textual representations only and discuss why incorporating multimodal information is more difficult.", "For both datasets, our system is particularly strong with respect to ambiguous and rare classes, considerably outperforming our new Data-Lexicon Baseline and thus addressing a key challenge in FrameId.", "This work has been supported by the DFG-funded research training group Adaptive Preparation of Information form Heterogeneous Sources (AIPHES, GRK 1994/1).", "We also acknowledge the useful comments of the anonymous reviewers." ]
[ "abstain", "objective", "abstain", "objective", "method", "result", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "method", "objective", "objective", "result", "result", "abstain", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "method", "objective", "method", "abstain", "other", "other", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "method", "abstain", "method", "abstain", "objective", "method", "abstain", "objective", "objective", "objective", "other", "other" ]
[ "Grounding events into a precise timeline is important for natural language understanding but has received limited attention in recent work.", "This problem is challenging due to the inherent ambiguity of language and the requirement for information propagation over inter-related events.", "This paper first formulates this problem based on a 4-tuple temporal representation used in entity slot filling, which allows us to represent fuzzy time spans more conveniently.", "We then propose a graph attention network-based approach to propagate temporal information over document-level event graphs constructed by shared entity arguments and temporal relations.", "To better evaluate our approach, we present a challenging new benchmark on the ACE2005 corpus, where more than 78% of events do not have time spans mentioned explicitly in their local contexts.", "The proposed approach yields an absolute gain of 7.0% in match rate over contextualized embedding approaches, and 16.3% higher match rate compared to sentence-level manual event time argument annotation.", "1 1 Introduction Understanding and reasoning about time is a crucial component for comprehensive understanding of evolving situations, events, trends and forecasting event abstractions for the long-term.", "Event time extraction is also useful for many downstream Natural Language Processing (NLP) applications such as event timeline generation (Huang and Huang, 2013; Wang et al., 2015; Ge et al., 2015; Steen and Markert, 2019), temporal event tracking and prediction (Ji et al., 2009; Minard et al., 2015), and temporal question answering (Llorens et al., 2015; Meng et al., 2017).", "*Work done prior to joining Amazon.", "1 The resource for this paper is available at https://gi thub.com/wenhycs/NAACL2021-Event-Time-Extraction-and-Propagation-via-Graph-Attention-Networks .", "In order to ground events into a timeline we need to determine the start time and end time of each event as precisely as possible (Reimers et al., 2016).", "However, the start and end time of an event is often not explicitly expressed in a document.", "For example, among 5,271 annotated event mentions in the Automatic Content Extraction (ACE2005) corpus 2 , only 1,100 of them have explicit time argument annotations.", "To solve the temporal event grounding (TEG) problem, previous efforts focus on its subtasks such as temporal event ordering (Bram-sen et al., 2006; Chambers and Jurafsky, 2008; Yoshikawa et al., 2009; Do et al., 2012; Meng et al., 2017; Meng and Rumshisky, 2018; Ning et al., 2017, 2018, 2019; Han et al., 2019) and duration prediction (Pan et al., 2006, 2011; Vempala et al., 2018; Gusev et al., 2011; Vashishtha et al., 2019; Zhou et al., 2019).", "In this paper we aim to solve TEG directly using the following novel approaches.", "To capture fuzzy time spans expressed in text, we adopt a 4-tuple temporal representation proposed in the TAC-KBP temporal slot filling task (Ji et al., 2011, 2013) to predict an event's earliest possible start date, latest possible start date, earliest possible end date and latest possible end date, given the entire document.", "We choose to work at the day-level and leave time scales smaller than that for future work since, for example, only 0.6% of the time expressions in the newswire documents in ACE contain smaller granularities (e.g., hours or minutes).", "Fortunately, the uncertain time boundaries of an event can often be inferred from its related events in the global context of a document.", "For example, in Table 1, there are 
no explicit time expressions or clear linguistic clues in the local context to infer the time of the appeal event.", "But the earliest possible date of the refuse event is explicitly expressed as 2003-04-18.", "Since the appeal event must happen before the refuse event, we can infer the earliest start and the latest end date of appeal as 2003-04-18.", "2 https://catalog.ldc.upenn.edu/LDC2006T06", "Table 1 (example document): Malaysia's Appeal Court Friday [2003-04-18] refused to overturn the conviction and nine-year jail sentence imposed on ex-deputy prime minister Anwar Ibrahim. Anwar now faces an earliest possible release date of April 14, 2009 [2009-04-14]. The former heir says he was framed for political reasons, after his appeal was rejected ... Mahathir's sacking of Anwar in September 1998 [1998-09] rocked Malaysian politics ... Within weeks he was arrested and charged with ... Anwar was told Monday [2003-04-14] that he had been granted a standard one-third remission of a six-year corruption sentence for good behavior, and immediately began to serve the nine-year sentence ...", "However, there are usually many other irrelevant events in the same document, which requires an effective approach to select related events and perform temporal information propagation.", "We first use event-event relations to construct a document-level event graph for each input document, as illustrated in Figure 1.", "We leverage two types of event-event relations: (1) if two events share the same entity as their arguments, then they are implicitly connected; (2) automatic event-event temporal relation extraction methods such as Ning et al. (2019) provide important clues about which element of one event's 4-tuple can be propagated to which element of another event's 4-tuple.", "We propose a novel time-aware graph propagation framework based on graph attention networks (GAT, Velickovic et al., 2018) to propagate temporal information across events in the constructed event graphs.", "Experimental results on a benchmark, newly created on top of the ACE2005 annotations, show that our proposed cross-event time propagation framework significantly outperforms state-of-the-art event time extraction methods that use contextualized embedding features.", "Our contributions can be summarized as follows.", "This is the first work taking advantage of the flexibility of the 4-tuple representation to formulate absolute event timeline construction.", "We propose a GAT-based approach for timeline construction which effectively propagates temporal information over document-level event graphs without solving large constrained optimization problems (e.g., Integer Linear Programming (ILP)) as previous work did.", "We propose two effective methods to construct the event graphs, based on shared arguments and temporal relations, which allow time information to be propagated across the entire document.", "We build a new benchmark with over 6,000 human-annotated non-infinite time elements, which implements the 4-tuple representation for the first time as a timeline dataset and is intended for future research on absolute timeline construction.", "Grounding events into a timeline necessitates the extraction of the start and end time of each event.", "However, the start and end times of most events are not explicitly expressed in a document.", "To capture such uncertainty, we adopt the 4-tuple representation introduced by the TAC-KBP2011 temporal slot filling task (Ji et al., 2011, 2013).", "We define the 4-tuple event time as
four time elements for an event $e$: $\langle \tau^{-}_{start}, \tau^{+}_{start}, \tau^{-}_{end}, \tau^{+}_{end} \rangle$, which indicate the earliest possible start date, latest possible start date, earliest possible end date, and latest possible end date, respectively.", "These four dates follow hard constraints (Eq. (1)): $\tau^{-}_{start} \le \tau^{+}_{start}$ and $\tau^{-}_{end} \le \tau^{+}_{end}$; $\tau^{-}_{start} \le \tau^{-}_{end}$ and $\tau^{+}_{start} \le \tau^{+}_{end}$.", "Example (Figure 1): 'The enemy have now been flown out and we're treating them, including a man who is almost dead with a gunshot wound to the chest, after we (Royal Marines) sent in one of our companies of about 100 men in here (Umm Kiou) this morning.'", "The above temporal representation was originally designed for entity slot filling, and we regard it as an expressive way of describing events too: (1) it allows for flexible representation of fuzzy time spans, so that events whose accurate dates cannot be determined can still be grounded into a timeline; and (2) it allows for a unified treatment of various types of temporal information, which makes it convenient to propagate over multiple events.", "We choose the Automatic Content Extraction (ACE) 2005 dataset because it includes rich annotations of event types, entity/time/value argument roles, time expressions, and their normalization results.", "In our annotation interface, each document is highlighted with event triggers and time expressions.", "The annotators are required to read the whole document and provide as precise information as possible for each element of the 4-tuple of each event.", "If there is no possible information for a specific time element, the annotators are asked to provide +/-infinity labels.", "Overall, we have annotated 182 documents from this dataset.", "Most of the documents are from the broadcast news or newswire genres.", "Detailed data statistics and data splits are shown in Table 2.", "We annotated all documents with two independent passes.", "Two experts led the final adjudication based on the independent annotations and discussions with the annotators, since a single annotation pass is likely to miss important clues, especially when an event and its associated time expression appear in different paragraphs.",
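As a concrete (hypothetical) rendering of the 4-tuple representation above, the sketch below encodes the four day-level bounds and the hard constraints of Eq. (1); the field names are ours, and Python dates stand in for the normalized day-level values.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FourTupleTime:
    start_early: date  # earliest possible start date
    start_late: date   # latest possible start date
    end_early: date    # earliest possible end date
    end_late: date     # latest possible end date

    def is_consistent(self) -> bool:
        # The hard constraints of Eq. (1).
        return (self.start_early <= self.start_late
                and self.end_early <= self.end_late
                and self.start_early <= self.end_early
                and self.start_late <= self.end_late)

# The 4-tuple used in the labeling example later in this section.
t = FourTupleTime(date(2020, 1, 1), date(2020, 1, 3),
                  date(2020, 1, 1), date(2020, 1, 7))
print(t.is_consistent())  # True
```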
"The input is a document $D = [w_1, \ldots, w_n]$, containing event triggers $E = [e_1, \ldots, e_m]$ and time expressions $T = [t_1, \ldots, t_l]$, and we use gold-standard annotations for the event triggers and time expressions.", "Our goal is to connect the event triggers $E$ and time expressions $T$ scattered in a document and estimate their association scores in order to select the most probable values for the 4-tuple elements.", "At a high level, our approach is composed of: (1) a text encoder to capture semantic and narrative information in the local context, (2) a document-level event graph to facilitate global knowledge, (3) a graph-based time propagation model to propagate time along event-event relations, and (4) an extraction algorithm to generate the 4-tuple output.", "Among these four components, (1) and (4) constitute the minimal requirements of an extractor; they serve as our baseline model and are described in Section 3.2.", "We detail how we utilize event arguments and temporal ordering to construct the document-level event graph, namely component (2), in Section 3.3.", "We present our graph-based time propagation model in Section 3.4, and wrap up with the training objective and other details in Section 3.5.", "We list the notation in Table 3; each symbol is explained when first encountered.", "Our baseline extraction model is an event-time pair classifier based on a pre-trained language model encoder (Devlin et al., 2019; Liu et al., 2019; Beltagy et al., 2020).", "Pre-trained language models provide a contextualized representation for every token in a given text.", "We directly derive the local representations for event triggers and time expressions from these contextualized representations.", "The representations are denoted as $h_{e_i}$ for event trigger $e_i$ and $h_{t_j}$ for time expression $t_j$.", "For events or time expressions containing multiple tokens, we take the average of the token representations.", "Thus, all $h_{e_i}$ and $h_{t_j}$ have the same dimensionality.", "We pair each event and time expression in the document, i.e., $\{(e_i, t_j) \mid e_i \in E, t_j \in T\}$, to form the training examples.", "After obtaining the event and time representations, we concatenate them and feed them into a 2-layer feed-forward neural classifier.", "The classifier estimates the probability of filling $t_j$ into $e_i$'s 4-tuple time elements $\tau_i = \langle \tau^{-}_{i,start}, \tau^{+}_{i,start}, \tau^{-}_{i,end}, \tau^{+}_{i,end} \rangle$.", "The probabilities are: $p_{i,j,k} = \sigma(w_{2,k} \, \mathrm{ReLU}(W_1 [h_{e_i}; h_{t_j}] + b_1) + b_{2,k})$ (2), where $\sigma(\cdot)$ is the sigmoid function and $W_{1,2}$, $b_{1,2}$ are learnable parameters.", "In short, we use $\tau_{i,k}$ to denote the $k$-th element of $\tau_i$ ($k \in \{1, 2, 3, 4\}$), and $p_{i,j,k}$ is the probability that $t_j$ fills the $k$-th element of the 4-tuple $\tau_i$.", "The baseline model thus consists of 4 binary classifiers, one for each element of the 4-tuple.", "When determining the 4-tuple for each event $e_i$, we estimate the probabilities for $t_1$ through $t_l$.", "For each element, we take the time expression with the highest probability as its filler.",
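A minimal PyTorch sketch of the baseline pair classifier of Eq. (2) might look as follows; the hidden sizes and the class name are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class PairClassifier(nn.Module):
    # 2-layer feed-forward classifier over the concatenated event and
    # time-expression representations; one sigmoid output per 4-tuple
    # element, mirroring Eq. (2).
    def __init__(self, hidden_dim=768, ffn_dim=384, n_elements=4):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(2 * hidden_dim, ffn_dim),
            nn.ReLU(),
            nn.Linear(ffn_dim, n_elements),
        )

    def forward(self, h_event, h_time):
        # h_event, h_time: (batch, hidden_dim) averaged token vectors
        return torch.sigmoid(self.ffn(torch.cat([h_event, h_time], dim=-1)))

clf = PairClassifier()
p = clf(torch.randn(8, 768), torch.randn(8, 768))
print(p.shape)  # torch.Size([8, 4]): p_{i,j,k} for k = 1..4
```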
"A practical issue is that the same time is often expressed at different granularity levels, such as 2020-01-01 and 2020-W1, following the most common TIMEX format (Ferro et al., 2005).", "To uniformly represent all time expressions and allow a certain degree of uncertainty, we introduce the following 2-tuple normalized form, which indicates the time range of $t_j$ by two dates: $t_j \mapsto \langle t^{-}_{j}, t^{+}_{j} \rangle$ (3), where $t^{-}$ represents the earliest possible date and $t^{+}$ the latest possible date.", "We also make the simplification that earliest possible values can only fill earliest possible dates, i.e., $T^{-} = \{t^{-}_{1}, \ldots, t^{-}_{l}\} \mapsto \tau^{-}_{start}, \tau^{-}_{end}$, and similarly for the latest dates, $T^{+} = \{t^{+}_{1}, \ldots, t^{+}_{l}\} \mapsto \tau^{+}_{start}, \tau^{+}_{end}$.", "This constraint can be relaxed in future work.", "Here is an example of how we determine the binary labels for event-time pairs.", "If the 4-tuple time for an event is $\langle$2020-01-01, 2020-01-03, 2020-01-01, 2020-01-07$\rangle$ and the 2-tuple for the time expression 2020-W1 is $\langle$2020-01-01, 2020-01-07$\rangle$, then the classification labels of this event-time pair will be $\langle$True, False, True, True$\rangle$.", "Before conducting the global time propagation, we first construct document-level event graphs.", "In this paper, we focus on two types of event-event relations: (1) shared entity arguments and (2) temporal relations.", "We denote the event-argument graph as $G_{arg} = \{(e_i, v_j, r_{i,j})\}$, where $e_i$ represents an event, $v_j$ represents an entity or a time expression, and $r_{i,j}$ represents the bidirected edge between $e_i$ and $v_j$, namely the argument role.", "For example, in Figure 1 there will be two edges between the sent event ($e_1$) and the entity Royal Marines ($v_1$), namely $(e_1, v_1, \mathrm{AGENT})$ and $(v_1, e_1, \mathrm{AGENT})$.", "In addition, we add a self-loop for each node in this graph.", "The graph can be constructed with Information Extraction (IE) techniques; we use the gold-standard event annotations from the ACE 2005 dataset in our experiments.", "Event Temporal Graph.", "Event-event temporal relations provide explicit directions along which to propagate time information.", "If we know that an attack event happened before an injury event, the lower-bound end date of the attack can possibly be the start date of the injury.", "We denote the event temporal graph as $G_{temp} = \{(e_i, e_j, r_{i,j})\}$, where $e_i$ and $e_j$ denote events and $r_{i,j}$ denotes the temporal order between $e_i$ and $e_j$.", "Similar to $G_{arg}$, we also add a self-loop for each node in $G_{temp}$ and edges in both directions.", "For example, for a BEFORE relation from $e_1$ to $e_2$, we add two edges, $(e_1, e_2, \mathrm{BEFORE})$ and $(e_2, e_1, \mathrm{AFTER})$.", "We only consider BEFORE and AFTER relations when constructing the event temporal graph.", "To propagate time information, we also use the local time arguments, as in the event-argument graphs.", "We apply the state-of-the-art event temporal relation extraction model of Ning et al. (2019) to extract temporal relations for event pairs that appear in the same sentence or in two consecutive sentences, and we only keep relations whose confidence score is over 90%.",
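The two graphs can be assembled with ordinary edge lists; the sketch below is a simplified illustration (the function and relation names are ours) of the bidirected edges and self-loops described above.

```python
def build_event_graphs(argument_triples, before_pairs):
    """argument_triples: (event, entity_or_time, role) from IE output.
    before_pairs: (a, b) meaning event a happens BEFORE event b.
    Returns edge lists for G_arg and G_temp with reverse edges
    and self-loops, mirroring the construction described above."""
    g_arg, g_temp, nodes = [], [], set()
    for e, v, role in argument_triples:
        g_arg += [(e, v, role), (v, e, role)]  # bidirected edge
        nodes |= {e, v}
    for a, b in before_pairs:
        g_temp += [(a, b, "BEFORE"), (b, a, "AFTER")]
        nodes |= {a, b}
    for n in nodes:  # self-loop for every node
        g_arg.append((n, n, "SELF"))
        g_temp.append((n, n, "SELF"))
    return g_arg, g_temp

g_arg, g_temp = build_event_graphs(
    [("sent", "Royal Marines", "AGENT")], [("appeal", "refuse")])
print(g_temp)
```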
"After obtaining the document-level graphs $G_{arg}$ and $G_{temp}$, we design a novel time-aware graph neural network to perform document-level 4-tuple propagation.", "Graph neural networks (Dai et al., 2016; Kipf and Welling, 2017; Hamilton et al., 2017; Schlichtkrull et al., 2018; Velickovic et al., 2018) have been shown effective for relational reasoning (Zhang et al., 2018; Marcheggiani et al., 2018).", "We adopt graph attention networks (GAT, Velickovic et al., 2018) to propagate time through event-argument and event-event relations.", "GAT aggregates and updates the information of each node from its neighbors through an attention mechanism.", "Compared to the original GAT, we further include a relational embedding for the edge labels when computing attention, to capture the various types of relations between each event and its neighboring events.", "The graphs $G_{arg}$ and $G_{temp}$, together with the GAT model, are placed in the intermediate layer of our baseline extraction model (Section 3.2), i.e., between the pre-trained language model encoder and the 2-layer feed-forward neural classifier (Eq. (2)).", "For clarity, we denote all events and entities as nodes $V = \{v_1, \ldots, v_n\}$, and we use $r_{i,j}$ to denote their relation types.", "More specifically, we stack several layers of GAT on top of the contextualized node representations $h_{v_i}$, and we follow Vaswani et al. (2017) in using multi-head attention in each layer.", "We use the simplified notation $h_{v_i}$ for one of the attention heads $h^{k}_{v_i}$.", "Each layer computes $\alpha_{ij} = \mathrm{softmax}_j(a_{ij})$ (4) and $h'_{v_i} = \mathrm{ELU}\big(\sum_{j \in N(i)} \alpha_{ij} W_3 h_{v_j}\big)$ (5), where ELU is the exponential linear unit (Clevert et al., 2016), $a_{ij}$ is the attention coefficient of nodes $v_i$ and $v_j$, $\alpha_{ij}$ is the attention weight after the softmax, and $h_{v_i}$ and $h'_{v_i}$ are the hidden states of node $v_i$ before and after one GAT layer, respectively.", "We use $N(i)$ to denote the neighborhood of $v_i$.", "The attention coefficients are calculated through $a_{ij} = \phi\big(w_4 [W_3 h_{v_i}; W_3 h_{v_j}; \mathbf{r}_{i,j}]\big)$ (6), where $\phi$ is the LeakyReLU (Clevert et al., 2016) activation function.", "$\mathbf{r}_{i,j}$ is the learnable relational embedding for the relation type $r_{i,j}$, which we add compared to the original GAT.", "We concatenate $m$ different attention heads to compute the representation of $v_i$ for the next layer after performing attention with each head: $h'_{v_i} = \Vert_{k=1}^{m} h'^{k}_{v_i}$.", "We stack $n_l$ GAT layers to obtain the final representations for events and times.", "These representations are fed into the 2-layer feed-forward neural classifier of Eq. (2) to generate the corresponding probabilities.", "Since we model the 4-tuple extraction task with four binary classifiers, we adopt the log loss as our model objective: $\mathcal{L} = -\sum_{i,j,k} \big[ y_{i,j,k} \log p_{i,j,k} + (1 - y_{i,j,k}) \log (1 - p_{i,j,k}) \big]$, where $y_{i,j,k}$ is the binary label of the pair.", "Since the 4-tuple elements are extracted from time expressions, the model cannot generate +/-inf (infinite) output.", "To address this issue, we adopt another hyperparameter, the inf threshold, and convert predicted time values with scores lower than this threshold into +/-inf values.", "That is, we regard the probability $p_{i,j,k}$ also as a confidence score.", "A low score indicates that the model cannot determine the result for some 4-tuple element.", "Thus it is natural to set those elements to inf.", "When this happens for $\tau^{-}_{start}$ or $\tau^{-}_{end}$, we set the value to -inf, and when it happens for $\tau^{+}_{start}$ or $\tau^{+}_{end}$, we set the value to +inf.", "This threshold, and the search for it, is applied to both the baseline extraction and the GAT-based extraction systems.", "The extraction model may generate 4-tuples that do not satisfy the constraints of Eq. (1); we leave enforcing these constraints for future work.",
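For illustration, here is one possible PyTorch rendering of a single relation-aware attention head in the spirit of Eq. (6); the edge-list format, sizes, and class name are our assumptions rather than the authors' implementation, and the per-neighborhood softmax is written as an explicit loop for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationalGATHead(nn.Module):
    # One attention head that conditions the attention score on a
    # learned relation embedding, in the spirit of Eq. (6).
    def __init__(self, in_dim, out_dim, n_relations, rel_dim=50):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)            # W_3
        self.rel = nn.Embedding(n_relations, rel_dim)              # r_{i,j}
        self.a = nn.Linear(2 * out_dim + rel_dim, 1, bias=False)   # w_4

    def forward(self, h, edges, edge_types):
        # h: (n_nodes, in_dim); edges: (n_edges, 2) pairs (i, j) where
        # j is a neighbor of i; edge_types: (n_edges,) relation ids.
        z = self.W(h)
        src, dst = edges[:, 0], edges[:, 1]
        feat = torch.cat([z[src], z[dst], self.rel(edge_types)], dim=-1)
        score = F.leaky_relu(self.a(feat)).squeeze(-1)   # a_{ij}
        alpha = torch.zeros_like(score)                  # alpha_{ij}
        for i in torch.unique(src):  # softmax over each neighborhood
            mask = src == i
            alpha[mask] = F.softmax(score[mask], dim=0)
        out = torch.zeros_like(z)
        out.index_add_(0, src, alpha.unsqueeze(-1) * z[dst])
        return F.elu(out)                                # h'_{v_i}

head = RelationalGATHead(in_dim=16, out_dim=8, n_relations=3)
h = torch.randn(4, 16)
edges = torch.tensor([[0, 1], [1, 0], [0, 0], [1, 1]])
print(head(h, edges, torch.tensor([0, 1, 2, 2])).shape)  # (4, 8)
```

Stacking several such heads and concatenating their outputs would give the multi-head layer described above.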
Experiment Setup.", "We compare our proposed graph-based time propagation model with the following baselines: Local gold-standard time argument: The gold-standard time argument annotation provides the upperbound of the performance that a local time extraction system can achieve in our document 4-tuple time extraction task.", "We map gold-standard time argument roles to our 4-tuple representation scheme and report its performance for comparison.", "Specifically, if the argument role indicates the start time of an event (e.g., TIME-AFTER , TIME-ATBEGINNING ) we will map the date to start and + start ; if the argument role indicates the end time of an event (e.g., TIME-BEFORE ) we will map the date to end and + end ; if the argument role is TIME-WITHIN , we will map the date to all elements.", "And we will leave all other elements as infinite.", "Document creation time: Document creation time plays an important role in previous absolute timeline construction (Chambers et al., 2014; Reimers et al., 2018).", "We build a baseline that uses document creation time as + start and end for all events.", "Rule-based time propagation: We also build rule-based time propagation method on top of local gold-standard time arguments.", "One strategy is to set 4-tuple time for all events that do not have time arguments as document creation time.", "Another strategy is to set 4-tuple time for events that do not have time arguments as 4-tuple time for their previous events in context.", "Baseline extraction model: We compare our model with the baseline extraction model using contextualized embedding introduced in Section 3.2.", "We use two contextualized embedding methods, RoBERTa (Liu et al., 2019) and Longformer (Beltagy et al., 2020), which provide sentence-level 4 and document-level contextualized embeddings respectively.", "For our proposed graph-based time propagation model, we use contextualized embedding from Longformer and consider two types of event graphs: (1) constructed event arguments, and (2) constructed temporal relations and time arguments.", "We optimize our model with Adam (Kingma and Ba, 2015) for up to 500 epochs with a learning rate of 1e-4.", "We use dropout with a rate of 0.5 for each layer.", "The hidden size of two-layer feed-forward neural networks and GAT heads for all models is 384.", "The size of relation embeddings is 50.", "We use 4 different heads for GAT.", "The number of layers n l is 2 for all GAT models.", "And we use a fixed pretrained model 5 to obtain contextualized representation for each sentence or document.", "We use 10 different random seeds for our experiments and report the averaged scores.", "We evaluate our model at each epoch, and search the best threshold for infinite dates on the development set.", "We use all predicted scores from the development set as candidate thresholds.", "We choose the model with the best performance on accuracy based on the development set and report the performance on test set using the best searched threshold on the development set.", "Evaluation Metrics.", "We evaluate the performance of models based on two different metrics, exact match rate and approximate match rate proposed in TAC-KBP2011 temporal slot filling evaluation (Ji et al., 2011).", "For exact match 4 We use RoBERTa to encode sentences instead of the entire documents because many documents exceed its maximal input length.", "5 We use roberta-base and longformer-base-4096 for RoBERTa and Longformer, respectively.", "rate, credits will only be assigned when the extracted 
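The threshold-based back-off to +/-inf described in Section 3.5, with the threshold searched on the development set as just described, could be implemented along these lines (a sketch under our own naming, not the released code):

```python
import numpy as np

def extract_4tuple(scores, candidates, inf_threshold=0.5):
    # scores: (n_times, 4) array of p_{i,j,k} for one event e_i.
    # candidates: normalized dates, parallel to the rows of scores.
    # Picks the highest-scoring date per 4-tuple element; scores below
    # the searched threshold back off to -inf / +inf as in Section 3.5.
    result = []
    for k in range(4):
        j = int(np.argmax(scores[:, k]))
        if scores[j, k] >= inf_threshold:
            result.append(candidates[j])
        else:
            # elements 0 and 2 are earliest dates, 1 and 3 latest dates
            result.append("-inf" if k in (0, 2) else "+inf")
    return result

scores = np.array([[0.9, 0.2, 0.8, 0.1],
                   [0.3, 0.4, 0.2, 0.3]])
print(extract_4tuple(scores, ["2003-04-18", "2003-04-14"]))
# -> ['2003-04-18', '+inf', '2003-04-18', '+inf']
```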
"Evaluation Metrics.", "We evaluate the models using two metrics, exact match rate and approximate match rate, as proposed in the TAC-KBP2011 temporal slot filling evaluation (Ji et al., 2011).", "For the exact match rate, credit is only assigned when the extracted date for a 4-tuple element exactly matches the ground-truth date.", "The approximate match rate $Q(\cdot)$ compares the predicted 4-tuple $\hat{\tau}_i = \langle \hat{\tau}^{-}_{i,start}, \hat{\tau}^{+}_{i,start}, \hat{\tau}^{-}_{i,end}, \hat{\tau}^{+}_{i,end} \rangle$ with the ground truth $\tau_i = \langle \tau^{-}_{i,start}, \tau^{+}_{i,start}, \tau^{-}_{i,end}, \tau^{+}_{i,end} \rangle$ by the averaged absolute difference between the corresponding dates: $Q(\hat{\tau}_i, \tau_i) = \frac{1}{4} \sum_{s \in \{+,-\},\, p \in \{start, end\}} \frac{1}{1 + |\hat{\tau}^{s}_{i,p} - \tau^{s}_{i,p}|}$.", "In this way, partial credit is assigned based on how close the extracted date is to the ground truth.", "For example, if a gold-standard date is 2001-01-01 and the corresponding extracted date is 2001-01-02, the credit will be $\frac{1}{1 + |2001\text{-}01\text{-}01 - 2001\text{-}01\text{-}02|} = \frac{1}{2}$.", "If a gold-standard date is inf and the corresponding extracted date is 2001-01-02, the credit will be $\frac{1}{1 + |\mathrm{inf} - 2001\text{-}01\text{-}02|} = 0$.", "Our experiment results are shown in Table 4.", "The results of directly converting sentence-level time arguments to the 4-tuple representation show that local time information is not sufficient for our document-level 4-tuple event time extraction task.", "The document creation time baseline does not perform well because a large portion of the document-level 4-tuple event times do not coincide with the document creation time, which was widely used in previous absolute timeline construction.", "By comparing the basic extraction framework with sentence-level versus document-level contextualized embeddings, we find that incorporating document-level information from the embeddings already improves system performance.", "Similarly, we see performance improvements from the rule-based time propagation strategies, which again indicates the importance of document-level information for this task.", "Our GAT-based time propagation methods significantly outperform these baselines, both when using temporal relations and when using arguments to construct the event graphs.", "Specifically, we find that using relation embeddings significantly improves the temporal-relation-based propagation, by 2.01% in exact match rate and 2.03% in approximate match rate.", "This is because temporal labels between events, for example BEFORE and AFTER, are more informative than argument roles in tasks related to time.", "Although our argument-based propagation model does not explicitly resolve conflicts, the violation rate of the 4-tuple constraints is only about 4% in its output.", "Our time propagation framework has also been integrated into the state-of-the-art multimedia multilingual knowledge extraction system GAIA (Li et al., 2020a,b) for the NIST SM-KBP 2020 evaluation and achieved top performance in the intrinsic temporal evaluation.", "Table 5 shows some cases comparing the various methods.", "In the first example, our argument-based time propagation successfully propagates 'Wednesday', which is attached to the 'arrive' event, to the 'talk' event through the shared argument 'Blair'.", "In the second example, 'negotiation' and 'meeting' share the arguments 'Washington' and 'Pyongyang', so the time information of 'negotiation' can be propagated to 'meeting'.", "In contrast, the basic extraction framework extracts wrong dates for these two cases.", "The third example shows the effectiveness of the temporal-relation-based propagation.", "We use the extracted temporal relation that 'rumble' happens before 'secured' to propagate time information.", "The basic extraction model does not know the temporal relation between these two events and thus makes mistakes.",
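A small sketch of the approximate match rate for one event, assuming day-level Python dates and string markers for +/-inf (our own encoding); whether matching infinite values earn full credit is our reading of the metric, so treat that branch as an assumption.

```python
from datetime import date

def approx_match(pred, gold):
    # Approximate match rate Q: mean over the four 4-tuple elements of
    # 1 / (1 + |difference in days|). A concrete date against +/-inf
    # earns 0; we assume matching inf markers earn full credit.
    credits = []
    for p, g in zip(pred, gold):
        if isinstance(p, str) or isinstance(g, str):  # "+inf" / "-inf"
            credits.append(1.0 if p == g else 0.0)
        else:
            credits.append(1.0 / (1 + abs((p - g).days)))
    return sum(credits) / len(credits)

pred = [date(2001, 1, 2), "+inf", date(2001, 1, 2), "+inf"]
gold = [date(2001, 1, 1), "+inf", "+inf",          "+inf"]
print(approx_match(pred, gold))  # (0.5 + 1 + 0 + 1) / 4 = 0.625
```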
"Some temporal boundaries may require the synthesis of multiple temporal clues in the document.", "For example, in Table 1, the latest end date of the 'sentence' event (2012-04-14) needs to be inferred by aggregating two temporal clues in the document, namely its duration ('nine-year') and its start date (2003-04-14).", "Temporal information for many events, especially major events, may be incomplete in a single document.", "Taking the Iraq war as an example, one document may mention its start date and another its end date.", "To tackle this challenge, we need to extend document-level extraction to the corpus level and then aggregate temporal information for coreferential events across multiple documents.", "It is also challenging for the current 4-tuple representation to represent temporal information for recurring events such as paying monthly bills.", "Currently we consider recurring events as different events and fill in their slots separately.", "Besides, this work does not capture more fine-grained information such as hours and minutes, but it is straightforward to extend the 4-tuple representation to these time scales in future work.", "Our current annotations are produced by linguistic experts and are thus expensive to acquire.", "It is worth exploring crowd-sourcing methods in the future to make annotation more scalable and less costly.", "5 Related Work. Event Temporal Anchoring.", "Event temporal anchoring was first introduced by Setzer (2002), who used temporal links (TLINKS) to specify the relations among events and times.", "However, the TimeBank corpus and the TimeBank-Dense corpus using the TimeML scheme (Pustejovsky et al., 2003b,a; Cassidy et al., 2014) are either too vague and sparse, or dense only within a limited scope.", "Recently, Reimers et al. (2016) annotated the start and end time of each event in TimeBank.", "We make several extensions: adding event types, capturing uncertainty with the 4-tuple representation instead of TLINKS so that indirect time can also be considered, and extending event-event relations to the document level.", "Models trained on TimeBank often formulate the problem as pairwise classification of TLINKS.", "Efforts have been made to use Markov logic networks or ILP to propagate relations (Bramsen et al., 2006; Chambers and Jurafsky, 2008; Yoshikawa et al., 2009; Do et al., 2012), sieve-based classification (Chambers et al., 2014), and neural network based methods (Meng et al., 2017; Meng and Rumshisky, 2018; Cheng et al., 2020).", "There are also efforts on event-event temporal relations (Ning et al., 2017, 2018, 2019; Han et al., 2019).", "In particular, Reimers et al. (2018) propose a decision tree that uses a neural network based classifier to find start and end times on the data of Reimers et al. (2016).", "Leeuwenberg and Moens (2018) use event times to construct relative timelines.", "Temporal Slot Filling.", "Earlier work on extracting the 4-tuple representation focused on temporal slot filling (TSF, Ji et al., 2011, 2013) to collect 4-tuple dates as temporal boundaries for entity attributes.", "Attempts on TSF include pattern matching (Byrne and Dunnion, 2011) and distant supervision (Li et al., 2012; Ji et al., 2013; Surdeanu et al., 2011; Sil and Cucerzan, 2014; Reinanda et al., 2013; Reinanda and de Rijke, 2014).", "In our work, we directly adopt the 4-tuple as a fine-grained temporal representation for events instead of entity attributes.", "Temporal Reasoning.
Some early efforts attempt to incorporate event-event relations to perform temporal reasoning (Tatu and Srikanth, 2008) and propagate time information (Gupta and Ji, 2009) based on hard constraints learned from annotated data. Our work is largely inspired from Talukdar et al. (2012) on graph-based label propagation for acquiring temporal constraints for event temporal ordering. We extend the idea by constructing rich event graphs, and proposing a novel GAT based method to assign weights for propagation. The idea of constructing event graph based on sharing arguments is also motivated from Centering Theory (Grosz et al., 1995), which has been applied to many NLP tasks such as modeling local coherence (Barzilay and Lapata, 2008) and event schema induction (Chambers and Jurafsky, 2009). 6 Conclusions and Future Work In this paper, we have created a new benchmark for document-level event time extraction based on 4-tuple representation, which provides rich representation to handle uncertainty. We propose a graph-based time propagation and use event-event relations to construct document-level event graphs. Our experiments and analyses show the effectiveness of our model. In the future, we will focus on improving the fundamental pretraining model for time to represent more fine-grained time information and cross-document temporal aggregation. Acknowledgement This research is based upon work supported in part by U.S. DARPA KAIROS Program No. FA8750-19-2-1004, U.S. DARPA AIDA Program No. FA8750-18-2-0014, and Air Force No. FA8650-17-C-7715. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. References Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics , 34(1):134. Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. CoRR , abs/2004.05150. Philip Bramsen, Pawan Deshpande, Yoong Keok Lee, and Regina Barzilay. 2006. Inducing temporal graphs. In EMNLP 2006, Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, 22-23 July 2006, Sydney, Australia , pages 189198. ACL. Lorna Byrne and John Dunnion. 2011. UCD IIRG at TAC 2011. In Proceedings of Text Analysis Conference (TAC2011) . Taylor Cassidy, Bill McDowell, Nathanael Chambers, and Steven Bethard. 2014. An annotation framework for dense event ordering. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) , pages 501506, Baltimore, Maryland. Association for Computational Linguistics. Nathanael Chambers, Taylor Cassidy, Bill McDowell, and Steven Bethard. 2014. Dense event ordering with a multi-pass architecture. Transactions of the Association for Computational Linguistics , 2:273 284. Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP , pages 602610, Suntec, Singapore. Association for Computational Linguistics. Nathanael Chambers and Daniel Jurafsky. 2008. Jointly combining implicit constraints improves temporal ordering. 
In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing , pages 698706, Honolulu, Hawaii. Association for Computational Linguistics. Fei Cheng, Masayuki Asahara, Ichiro Kobayashi, and Sadao Kurohashi. 2020. Dynamically updating event representations for temporal relation classification with multi-category learning. In Findings of the Association for Computational Linguistics: EMNLP 2020 , pages 13521357, Online. Association for Computational Linguistics. Djork-Arn Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2016. Fast and accurate deep network learning by exponential linear units (elus). In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings . Hanjun Dai, Bo Dai, and Le Song. 2016. Discriminative embeddings of latent variable models for structured data. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016 , volume 48 of JMLR Workshop and Conference Proceedings , pages 27022711. JMLR.org. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) , pages 41714186, Minneapolis, Minnesota. Association for Computational Linguistics. Quang Do, Wei Lu, and Dan Roth. 2012. Joint inference for event timeline construction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning , pages 677687, Jeju Island, Korea. Association for Computational Linguistics. Lisa Ferro, Laurie Gerber, Inderjeet Mani, Beth Sund-heim, and George Wilson. 2005. TIDES2005 standard for the annotation of temporal expressions. MITRE Corporation Technical Report . Tao Ge, Wenzhe Pei, Heng Ji, Sujian Li, Baobao Chang, and Zhifang Sui. 2015. Bring you to the past: Automatic generation of topically relevant event chronicles. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) , pages 575585, Beijing, China. Association for Computational Linguistics. Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics , 21(2):203225. Prashant Gupta and Heng Ji. 2009. Predicting unknown time arguments based on cross-event propagation. In ACL 2009, Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the AFNLP, 2-7 August 2009, Singapore, Short Papers , pages 369372. The Association for Computer Linguistics. Andrey Gusev, Nathanael Chambers, Divye Raj Khilnani, Pranav Khaitan, Steven Bethard, and Dan Jurafsky. 2011. Using query patterns to learn the duration of events. In Proceedings of the Ninth International Conference on Computational Semantics (IWCS 2011) . William L. Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA , pages 10241034. 
Rujun Han, Qiang Ning, and Nanyun Peng. 2019. Joint event and temporal relation extraction with shared representations and structured prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019 , pages 434444. Association for Computational Linguistics. Lifu Huang and Lian'en Huang. 2013. Optimized event storyline generation based on mixture-event-aspect model. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing , pages 726735, Seattle, Washington, USA. Association for Computational Linguistics. Heng Ji, Taylor Cassidy, Qi Li, and Suzanne Tamang. 2013. Tackling representation, annotation and classification challenges for temporal knowledge base population. Knowledge and Information Systems , 41(3):611646. Heng Ji, Ralph Grishman, Zheng Chen, and Prashant Gupta. 2009. Cross-document event extraction and tracking: Task, evaluation, techniques and challenges. In Proceedings of the International Conference RANLP-2009 , pages 166172, Borovets, Bulgaria. Association for Computational Linguistics. Heng Ji, Ralph Grishman, and Hoa Trang Dang. 2011. An overview of the TAC2011 knowledge base population track. In Proceedings of Text Analysis Conference (TAC2011) . Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings . Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings . OpenReview.net. Artuur Leeuwenberg and Marie-Francine Moens. 2018. Temporal information extraction by predicting relative time-lines. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing , pages 12371246, Brussels, Belgium. Association for Computational Linguistics. Manling Li, Ying Lin, Tuan Manh Lai, Xiaoman Pan, Haoyang Wen, Sha Li, Zhenhailong Wang, Pengfei Yu, Lifu Huang, Di Lu, Qingyun Wang, Haoran Zhang, Qi Zeng, Chi Han, Zixuan Zhang, Yujia Qin, Xiaodan Hu, Nikolaus Parulian, Daniel Campos, Heng Ji, Brian Chen, Xudong Lin, Alireza Zareian, Amith Ananthram, Emily Allaway, Shih-Fu Chang, Kathleen McKeown, Yixiang Yao, Michael Spector, Mitchell DeHaven, Daniel Napierski, Marjorie Freedman, Pedro Szekely, Haidong Zhu, Ram Neva-tia, Yang Bai, Yifan Wang, Ali Sadeghian, Haodi Ma, and Daisy Zhe Wang. 2020a. GAIA at SM-KBP 2020 a dockerlized multi-media multi-lingual knowledge extraction, clustering, temporal tracking and hypothesis generation system. In Proceedings of Thirteenth Text Analysis Conference (TAC 2020) . Manling Li, Alireza Zareian, Ying Lin, Xiaoman Pan, Spencer Whitehead, Brian Chen, Bo Wu, Heng Ji, Shih-Fu Chang, Clare Voss, Daniel Napierski, and Marjorie Freedman. 2020b. GAIA: A fine-grained multimedia knowledge extraction system. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations , pages 7786, Online. Association for Computational Linguistics. Qi Li, Javier Artiles, Taylor Cassidy, and Heng Ji. 2012. Combining flat and structured approaches for temporal slot filling or: How much to compress? 
In Computational Linguistics and Intelligent Text Processing 13th International Conference, CICLing 2012, New Delhi, India, March 11-17, 2012, Proceedings, Part II , volume 7182 of Lecture Notes in Computer Science , pages 194205. Springer. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR , abs/1907.11692. Hector Llorens, Nathanael Chambers, Naushad UzZa-man, Nasrin Mostafazadeh, James Allen, and James Pustejovsky. 2015. SemEval-2015 task 5: QA Tem-pEval evaluating temporal information understanding with question answering. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015) , pages 792800, Denver, Colorado. Association for Computational Linguistics. Diego Marcheggiani, Jasmijn Bastings, and Ivan Titov. 2018. Exploiting semantics in neural machine translation with graph convolutional networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers) , pages 486492, New Orleans, Louisiana. Association for Computational Linguistics. Yuanliang Meng and Anna Rumshisky. 2018. Context-aware neural model for temporal information extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers , pages 527536. Association for Computational Linguistics. Yuanliang Meng, Anna Rumshisky, and Alexey Romanov. 2017. Temporal information extraction for question answering using syntactic dependencies in an lstm-based architecture. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017 , pages 887896. Association for Computational Linguistics. Anne-Lyse Minard, Manuela Speranza, Eneko Agirre, Itziar Aldabe, Marieke van Erp, Bernardo Magnini, German Rigau, and Rubn Urizar. 2015. SemEval-2015 task 4: TimeLine: Cross-document event ordering. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015) , pages 778786, Denver, Colorado. Association for Computational Linguistics. Qiang Ning, Zhili Feng, and Dan Roth. 2017. A structured learning approach to temporal relation extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing , pages 10271037, Copenhagen, Denmark. Association for Computational Linguistics. Qiang Ning, Zhili Feng, Hao Wu, and Dan Roth. 2018. Joint reasoning for temporal and causal relations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers , pages 22782288. Association for Computational Linguistics. Qiang Ning, Sanjay Subramanian, and Dan Roth. 2019. An improved neural baseline for temporal relation extraction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019 , pages 62026208. Association for Computational Linguistics. Feng Pan, Rutu Mulkar, and Jerry R. Hobbs. 2006. Learning event durations from event descriptions. 
In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics , pages 393400, Sydney, Australia. Association for Computational Linguistics. Feng Pan, Rutu Mulkar-Mehta, and Jerry R. Hobbs. 2011. Annotating and learning event durations in text. Computational Linguistics , 37(4):727752. James Pustejovsky, Jos M. Castao, Robert Ingria, Roser Saur, Robert J. Gaizauskas, Andrea Setzer, Graham Katz, and Dragomir R. Radev. 2003a. Timeml: Robust specification of event and temporal expressions in text. In New Directions in Question Answering, Papers from 2003 AAAI Spring Symposium, Stanford University, Stanford, CA, USA , pages 2834. AAAI Press. James Pustejovsky, Patrick Hanks, Roser Sauri, Andrew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, et al. 2003b. The timebank corpus. In Corpus linguistics , volume 2003, page 40. Lancaster, UK. Nils Reimers, Nazanin Dehghani, and Iryna Gurevych. 2016. Temporal anchoring of events for the TimeBank corpus. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 2195 2204, Berlin, Germany. Association for Computational Linguistics. Nils Reimers, Nazanin Dehghani, and Iryna Gurevych. 2018. Event time extraction with a decision tree of neural classifiers. Transactions of the Association for Computational Linguistics , 6:7789. Ridho Reinanda and Maarten de Rijke. 2014. Prior-informed distant supervision for temporal evidence classification. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers , pages 9961006, Dublin, Ireland. Dublin City University and Association for Computational Linguistics. Ridho Reinanda, Daan Odijk, and de M Rijke. 2013. Exploring entity associations over time. In SI-GIR2013; Workshop on time-awareiInformation access . TAIA'13. Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In The Semantic Web 15th International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3-7, 2018, Proceedings , volume 10843 of Lecture Notes in Computer Science , pages 593607. Springer. Andrea Setzer. 2002. Temporal information in newswire articles: an annotation scheme and corpus study. Ph.D. thesis, University of Sheffield. Avirup Sil and Silviu-Petru Cucerzan. 2014. Towards temporal scoping of relational facts based on Wikipedia data. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning , pages 109118, Ann Arbor, Michigan. Association for Computational Linguistics. Julius Steen and Katja Markert. 2019. Abstractive timeline summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization , pages 21 31, Hong Kong, China. Association for Computational Linguistics. Mihai Surdeanu, Sonal Gupta, John Bauer, David Mc-Closky, Angel X. Chang, Valentin I. Spitkovsky, and Christopher D. Manning. 2011. Stanford's distantly-supervised slot-filling system. In Proceedings of the Fourth Text Analysis Conference, TAC 2011, Gaithersburg, Maryland, USA, November 14-15, 2011 . NIST. Partha Pratim Talukdar, Derry Wijaya, and Tom M. Mitchell. 2012. Acquiring temporal constraints between relations. In 21st ACM International Conference on Information and Knowledge Management, CIKM'12, Maui, HI, USA, October 29 November 02, 2012 , pages 9921001. 
ACM. Marta Tatu and Munirathnam Srikanth. 2008. Experiments with reasoning for temporal relations between events. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008) , pages 857864, Manchester, UK. Coling 2008 Organizing Committee. Siddharth Vashishtha, Benjamin Van Durme, and Aaron Steven White. 2019. Fine-grained temporal relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics , pages 29062919, Florence, Italy. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA , pages 59986008. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li, and Yoshua Bengio. 2018. Graph attention networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 May 3, 2018, Conference Track Proceedings . OpenRe-view.net. Alakananda Vempala, Eduardo Blanco, and Alexis Palmer. 2018. Determining event durations: Models and error analysis. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers) , pages 164168, New Orleans, Louisiana. Association for Computational Linguistics. Lu Wang, Claire Cardie, and Galen Marchetti. 2015. Socially-informed timeline generation for complex events. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , pages 10551065, Denver, Colorado. Association for Computational Linguistics. Katsumasa Yoshikawa, Sebastian Riedel, Masayuki Asahara, and Yuji Matsumoto. 2009. Jointly identifying temporal relations with Markov Logic. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP , pages 405413, Suntec, Singapore. Association for Computational Linguistics. Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing , pages 22052215, Brussels, Belgium. Association for Computational Linguistics. Ben Zhou, Daniel Khashabi, Qiang Ning, and Dan Roth. 2019. going on a vacation takes longer than going for a walk: A study of temporal commonsense understanding." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "other", "objective", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "other", "objective", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "result", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "Social media platforms are deploying machine learning based offensive language classification systems to combat hateful, racist, and other forms of offensive speech at scale.", "However, despite their real-world deployment, we do not yet comprehensively understand the extent to which offensive language classifiers are robust against adversarial attacks.", "Prior work in this space is limited to studying the robustness of offensive language classifiers against primitive attacks such as misspellings and extraneous spaces.", "To address this gap, we systematically analyze the robustness of state-of-the-art offensive language classifiers against more crafty adversarial attacks that leverage greedy- and attention-based word selection and context-aware embeddings for word replacement.", "Our results on multiple datasets show that these crafty adversarial attacks can degrade the accuracy of offensive language classifiers by more than 50% while also being able to preserve the readability and meaning of the modified text.", "Online social media platforms are dealing with an unprecedented scale of offensive (e.g., hateful, threatening, profane, racist, and xenophobic) language (Twitter; Facebook; Reddit).", "Given the scale of the problem, online social media platforms now increasingly rely on machine learning based systems to proactively and automatically detect offensive language (Rosen, 2020; Gadde and Derella, 2020; Kastrenakes, 2019; Hutchinson, 2020).", "The research community is actively working to improve the quality of offensive language classification (Zampieri et al., 2020, 2019b; Liu et al., 2019; Nikolov and Radivchev, 2019; Mahata et al., 2019; Arango et al., 2020; Agrawal and Awekar, 2018; Fortuna and Nunes, 2018). This paper examines offensive language as a case study; the reader is cautioned that the paper contains unavoidable strong language given the nature of the research.
Our code and data are available at: https://github.com/JonRusert/RobustnessOfOffensiveClassifiers.", "A variety of offensive language classifiers, ranging from traditional shallow models (SVM, Random Forest) and deep learning models (CNN, LSTM, GRU) to transformer-based models (BERT, GPT-2), have been proposed in prior literature (Liu et al., 2019; Nikolov and Radivchev, 2019; Mahata et al., 2019).", "Amongst these approaches, BERT-based transformer models have achieved state-of-the-art performance while ensembles of deep learning models also generally perform well (Zampieri et al., 2019b, 2020).", "It remains unclear whether the state-of-the-art offensive language classifiers are robust to adversarial attacks.", "While adversarial attacks are of broad interest in the ML/NLP community (Hsieh et al., 2019; Behjati et al., 2019), they are of particular interest for offensive language classification because malicious users can make subtle perturbations such that the offensive text is still intelligible to humans but evades detection by machine learning classifiers.", "Prior work on the robustness of text classification is limited to analyzing the impact on classifiers of primitive adversarial changes such as deliberate misspellings (Li et al., 2019), adding extraneous spaces (Gröndahl et al., 2018), or changing words with their synonyms (Jin et al., 2020; Ren et al., 2019; Li et al., 2020).", "However, these primitive attacks can be easily defended against: a spell checker can fix misspellings and a word segmenter can correctly identify word boundaries even with extra spaces (Rojas-Galeano, 2017; Li et al., 2019).", "Additionally, ordinary synonym substitution does not, in theory, hold for offensive language: substituting less offensive words loses the original meaning.", "Crucially, we do not know how effective these text classifiers are against crafty adversarial attacks employing more advanced strategies for text modifications.", "To address this gap, we analyze the robustness of offensive language classifiers against an adversary who uses a novel word embedding to identify word replacements and a surrogate offense classifier in a black-box setting to guide modifications.", "This embedding is purpose-built to evade offensive language classifiers by leveraging an evasion collection that comprises evasive offensive text gathered from online social media.", "Using this embedding, the adversary modifies the offensive text while also being able to preserve text readability and semantics.", "We present a comprehensive evaluation of the state-of-the-art BERT and CNN/LSTM based offensive language classifiers, as well as an offensive lexicon and Google's Perspective API, on two datasets.", "We summarize our key contributions below.", "We systematically study the ability of an adversary who uses a novel, crafty strategy to attack and bypass offensive language classifiers.", "The adversary first builds a new embedding from a special evasion collection, then uses it alongside a surrogate offensive language classifier deployed in black-box mode to launch the attack.", "We explore variations of our adversarial strategy.", "These include greedy versus attention-based selection of text words to replace.", "These also include two different versions of embeddings for word substitutions.", "We evaluate the robustness of state-of-the-art offensive language classifiers, as well as a real-world offensive language classification system, on two datasets from Twitter and Reddit.",
"Our results show that 50% of our attacks cause an accuracy drop of at least 24% and 69% of attacks cause drops of at least 20% against classifiers across datasets.", "Ethics Statement: We acknowledge that our research demonstrating attacks against offensive language classifiers could be used by bad agents.", "Our goal is to highlight the vulnerability within offensive language classifiers.", "We hope our work will inspire further research to improve their robustness against the presented and similar attacks.", "The adversary's goal is to modify his/her offensive post in such a manner as to evade detection by offensive language classifiers while simultaneously preserving semantics and readability for humans.", "To make suitable modifications, the adversary is assumed to have black-box access to a surrogate offensive language classifier that is different from the one used by the online social media platform.", "The adversary leverages feedback from this surrogate classifier to guide modifications using a novel approach that we propose.", "Our goal is to evaluate the extent to which the adversary can evade detection by an unknown offensive language classifier under this threat model.", "We evaluate the following offensive language classifiers under our threat model.", "1. NULI (Liu et al., 2019) is a BERT (Devlin et al., 2019) based system trained on offensive language.", "During preprocessing, emojis are converted into English phrases and hashtags are segmented.", "This was the top-ranked system in OffensEval (Zampieri et al., 2019b).", "2. Vradivchev (Nikolov and Radivchev, 2019) is also a BERT based system trained on offensive language data.", "The preprocessing step includes removing the symbols '@' and '#', tokenization and lowercasing, splitting hashtags, and removing stopwords.", "This was the second best system in OffensEval.", "3. MIDAS (Mahata et al., 2019) is a voting ensemble of three deep learning systems: a CNN, a BLSTM, and a BLSTM fed into a Bidirectional Gated Recurrent Unit (BGRU).", "This was the top non-BERT system in OffensEval.", "4. Offensive Lexicon (Wiegand et al., 2018) is a simple method that classifies a post as offensive if at least one word is in a lexicon of offensive words.", "We use their lexicon.", "5. Perspective API (Perspective) by Google (Jigsaw) provides a toxicity model that classifies whether a post is rude, disrespectful, or unreasonable.", "The production model uses a CNN trained with fine-tuned GloVe word embeddings and provides a toxicity probability.", "We use a 0.5 threshold to classify a post as offensive, as in Pavlopoulos et al. (2019).",
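The last two classifiers above reduce to very small decision rules. A minimal sketch, assuming a placeholder lexicon and an externally obtained toxicity probability (the real resources are the Wiegand et al. (2018) lexicon and the Perspective API, neither of which is reproduced here):

```python
OFFENSIVE_LEXICON = {"slur1", "slur2"}  # placeholder, not the actual lexicon

def lexicon_classify(post: str) -> int:
    """Offensive (1) iff at least one token appears in the offensive lexicon."""
    return int(any(tok.lower() in OFFENSIVE_LEXICON for tok in post.split()))

def threshold_classify(toxicity_probability: float, threshold: float = 0.5) -> int:
    """Binarize a toxicity probability, as done with the Perspective API score."""
    return int(toxicity_probability >= threshold)
```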
"This section describes our adversarial attack method as well as a recent visual adversarial attack (Eger et al., 2019) and a simpler attack (Gröndahl et al., 2018) for baseline comparison.", "The adversary's attack involves selecting words to replace in the input text and deciding on suitable replacements.", "Selection.", "There are several ways to approach word selection for replacement.", "Here we explore a greedy approach (Hsieh et al., 2019) and an approach using attention weights (Xu et al., 2018).", "For the greedy approach, we first remove each word one at a time (retaining the rest in the text) and get the drop in classification probability for the text from the surrogate offensive classifier.", "Words are removed until the offensive label is flipped (according to the classifier).", "The removed words make up the full list of possible replacements.", "The adversary then selects the word that causes the largest drop for replacement.", "If replacing this word is insufficient to bypass the surrogate classifier then the word with the next largest drop is also selected for replacement, and so on.", "For the attention approach, we leverage a BLSTM with attention which is trained on the target classification task.", "Note that this BLSTM is different from the one found in MIDAS.", "To select words, we give the input text to the BLSTM and examine the attention weights estimated during classification.", "The adversary selects the word with the highest attention weight.", "If replacing this word is insufficient to bypass the surrogate classifier then the word with the next largest attention weight is also selected for replacement, and so on.", "The attention approach can potentially find replacements that the greedy approach may not.", "Specifically, the greedy approach may miss instances where a combination of words causes offense rather than single words.", "Replacement.", "Figure 1 depicts our framework for substituting the selected word with another word.", "First, a candidate list of the 20 most similar words (closest vectors) is obtained from an embedding space.", "Next, we replace the selected word with its most similar word and check the modified text against the surrogate classifier.", "If the modified text is declared not offensive, then this word is chosen as the replacement.", "Otherwise, the process continues with the next most similar word.", "If the candidate list is exhausted without misclassification by the surrogate classifier, we choose the replacement word which causes the largest drop in classification probability.", "Embeddings.", "The key idea here is to design a context-aware word embedding for crafty replacements.", "To this end, we first build a text collection of 13 million deleted tweets through retrospective analysis using the Twitter API (Thomas et al., 2011; Le et al., 2019).", "Next we filter out the tweets from this set that are labeled as offensive by any of the offensive language classifiers in Section 2.2.", "The remaining set of 8.5 million deleted tweets contains offensive tweets that were likely flagged by users or human moderators.", "We expect this set of deleted tweets to contain crafty substitutions and expressions that are likely to evade detection by state-of-the-art offensive language classifiers.", "We refer to this set of deleted tweets as the evasion collection; this is the data that the adversary uses to train word embeddings.",
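A minimal sketch of the greedy select-and-replace loop described above, assuming a black-box `surrogate_prob` scorer and a `nearest` neighbour query over the attack embedding; both interfaces and all names are illustrative, not the authors' released code:

```python
def greedy_attack(tokens, surrogate_prob, nearest, k=20):
    """Greedy select-and-replace loop (illustrative interfaces).

    surrogate_prob(tokens) -> offensive-class probability from the black-box
        surrogate classifier.
    nearest(word, k) -> the k most similar words from the attack embedding.
    """
    tokens = list(tokens)
    base = surrogate_prob(tokens)
    # Selection: rank words by the probability drop caused by removing them
    # (base is kept fixed for simplicity in this sketch).
    drops = sorted(
        ((base - surrogate_prob(tokens[:i] + tokens[i + 1:]), i)
         for i in range(len(tokens))),
        reverse=True,
    )
    for _, i in drops:
        # Replacement: take the most similar candidate that flips the surrogate's
        # decision; if none does, keep the candidate with the largest drop.
        fallback, fallback_drop = tokens[i], float("-inf")
        for cand in nearest(tokens[i], k):
            trial = tokens[:i] + [cand] + tokens[i + 1:]
            p = surrogate_prob(trial)
            if p < 0.5:  # the surrogate no longer labels the text offensive
                tokens[i] = cand
                return tokens
            if base - p > fallback_drop:
                fallback, fallback_drop = cand, base - p
        tokens[i] = fallback  # not enough on its own; replace and try the next word
    return tokens
```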
"We explore the following embeddings:", "1. Pretrained GloVe embedding (Pre): These are GloVe embeddings pretrained on 2 billion tweets.", "The vocabulary size of this model is 1,193,514 tokens.", "This represents a baseline off-the-shelf word embedding.", "2. GloVe embedding fine-tuned with the evasion collection (FT): We use the evasion collection to fine-tune the pretrained GloVe embeddings.", "Fine-tuning is done over 10 epochs.", "The resulting vocabulary size is 1,312,106 tokens.", "Figure 2 illustrates this approach.", "Insights into the embeddings.", "Our intuition of crafty substitutions being present in the evasion collection is backed up by examination of the embeddings.", "Using a set of offensive words as probes, we find that on average the position of the first evasive word amongst the 20 most similar words in Pre is 11, while for FT this number is 3, implying that FT is more likely to offer an evasive replacement.", "We expand on these insights and analysis in Section 6.", "Furthermore, as fine-tuning the embeddings may introduce garbage words (non-English, often meaningless words) as replacements, we add a filter to the candidates when using the FT embeddings.", "This filter only allows candidates which have been used in tweets by at least 3 distinct authors in the evasion dataset.", "Finally, as checking every candidate can be time consuming and inefficient, we apply this filter only when we substitute text words that were not in the original Pre embeddings.", "VIPER.", "We implement a recent visual adversarial attack called VIPER (Eger et al., 2019) that aims to generate adversarial text for any classification task.", "VIPER (VIsual PERturber) replaces characters in the text with visually nearest neighbors determined from a visual embedding space.", "Each character present in the text is selected for replacement with a fixed probability p.", "VIPER strategically chooses replacements from non-standard Unicode characters, assuming that systems rarely train outside the standard Unicode space.", "As the main comparison, we choose their description-based character embedding space (DCES) in our experiments since it had the best tradeoff between attack success and readability.", "DCES represents characters by their Unicode textual descriptions.", "The nearest neighbor substitute is the character whose description refers to the same letter in the same case.", "We also compare with their simpler, easy character embedding space (ECES), which contains only the nearest neighbor for character replacement.", "We used VIPER with p = 0.1 and 0.4, the first for better readability and the second for better likelihood of attack success.", "Note that higher p values correspond to more changes in the text.", "Gröndahl.", "Gröndahl et al. (2018) explored rather simple attack methods such as modifying whitespace and misdirection by adding a misleading word.", "We implement several of their adversarial attacks.", "These are: adding a space after every character, removing all spaces between characters, adding the word 'love' to the input text, and finally removing all spaces then adding 'love'. This last attack strategy outperformed others in their evaluation.",
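The Gröndahl et al. (2018) baseline perturbations listed above are simple enough to restate directly; this is a sketch of our reading of that description, not the original implementation:

```python
def add_spaces(text: str) -> str:
    """Add a space after every character."""
    return " ".join(text)

def remove_spaces(text: str) -> str:
    """Remove all spaces between characters."""
    return text.replace(" ", "")

def add_love(text: str) -> str:
    """Append the misleading word 'love'."""
    return text + " love"

def remove_spaces_add_love(text: str) -> str:
    """The combination reported as strongest in their evaluation."""
    return add_love(remove_spaces(text))
```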
"Offensive Language Identification Dataset (OLID).", "OLID was used in SemEval 2019 Task 6 (OffensEval), a shared task on classifying offensive language (Zampieri et al., 2019a).", "This collection is annotated by experienced annotators to ensure high quality.", "OLID contains 14,100 English tweets (text only), split into 13,240 (4,400 offensive, 8,840 non-offensive) training tweets and 860 (240 offensive, 620 non-offensive) test tweets.", "Semi-Supervised Offensive Language Identification Dataset (SOLID).", "SOLID is an expansion of OLID used in SemEval 2020: OffensEval, which continued the task of classifying offensive language (Rosenthal et al., 2020).", "SOLID was constructed from tweets in a semi-supervised manner using democratic co-training with OLID as a seed dataset.", "SOLID contains 9,000,000 tweets as an expansion for training, and 5,993 test tweets (3,002 offensive, 2,991 non-offensive).", "Drop in classification accuracy: $\Delta = \text{Accuracy}_{\text{Original}} - \text{Accuracy}_{\text{Modified}}$, where $\text{Accuracy}_{\text{Original}}$ is the classifier's accuracy on the original text and $\text{Accuracy}_{\text{Modified}}$ is the classifier's accuracy on the modified text.", "Larger drops imply better evasion of offensive language classifiers by the adversary.", "Readability and semantic preservation: We measure the readability of the modified text and its semantic preservation through manual evaluation.", "More specifically, for readability, human reviewers are asked to examine the modified text and rate it as one of: {'The text is easy to read', 'The text can be read with some difficulty', 'The text is hard to read'}.", "For semantic preservation, reviewers are given the original texts alongside the modified versions and are asked whether 'text B' (modified text) conveys the same meaning as 'text A' (original text).", "The choices are {'Yes, Text B conveys the same meaning as Text A', 'Text B partially conveys the meaning of Text A', 'No, Text B does not convey the same meaning as Text A'}.", "We use the OLID and SOLID test sets to assess the success of our attack strategies.", "Amongst the several offensive language classifiers considered in this work (see Section 2.2), we make one classifier available to the adversary as a surrogate black-box classifier to guide adversarial modification of each test tweet.", "Note that we do not use the Lexicon as an internal classifier as it does not provide useful feedback (only returning 0 or 1 for positive class probabilities).", "We then evaluate the drop in classification accuracy ($\Delta$) for each of the remaining classifiers.", "In this section, we first present the results of our proposed adversarial attack approach and then those of existing approaches from prior literature on the OLID dataset.", "Evaluation was also performed on the SOLID dataset and the results followed a similar trend.", "Full results for all attacks are located in the appendix.", "Table 1 presents the results on the OLID dataset.", "Rows specify the attack strategy.", "The first column identifies the surrogate offensive language classifier used by the adversary to guide modifications.", "The remaining columns specify the offensive language classifier whose robustness is being evaluated.", "Cell values are drops in accuracy after adversarial modification.", "Accuracy here refers to the percentage of offensive tweets correctly predicted as offensive.", "Classification accuracy for the original text is given in the first row of the table.", "So, for example, the final accuracy for NULI where the adversary uses GS-Pre and MIDAS is 44 (61 − 17).",
"Blocks of rows labeled with the prefix GS stand for results with the greedy word selection strategy, while AS stands for results with BLSTM-attention based word selection.", "Note that diagonal entries, where the surrogate classifier is the same as the one being tested for robustness, are ignored because the adversary is expected to be quite successful under this condition.", "We indeed find that the accuracy drops close to 0% in these cases.", "Additionally, the Lexicon based method does not perform as well as the other classifiers, so we exclude it from the state-of-the-art (SOTA) category.", "Offensive language classifiers are susceptible to our adversarial attacks.", "Table 1 shows that our crafty adversarial attacks are quite successful against offensive language classifiers.", "For OLID, classifiers see a drop of accuracy in the range of 11–46.", "In fact, 50% of attacks cause a drop of at least 24 and 69% of attacks cause a drop of at least 20.", "This shows the vulnerability of offensive language classifiers under our threat model.", "Greedy select (GS) outperforms Attention select (AS) attacks.", "Greedy Select achieves higher average drops in accuracy across classifiers.", "For example, GS-FT achieves an average drop of 26 against NULI while AS-FT achieves only a drop of 17.", "This holds true for both replacement embeddings.", "Although lower, AS still achieves strong drops against vradivchev (average of 35).", "This indicates the strength of a greedy approach; however, attention selection may be more viable in a setting where the number of queries is limited.", "FT embeddings and Pre embeddings see success against different systems.", "Comparing FT to Pre embeddings, we see different leads in dropped accuracy depending on the classifier.", "FT sees great success against NULI and vradivchev, while Pre sees success against the other three.", "This indicates that the evasion collection can help add power, especially against popular (BERT-based) classifiers.", "NULI and vradivchev are the most and least robust to attacks.", "Focusing on the GS-FT embedding, NULI has a mean drop in accuracy of 26 (range: 18–39), the lowest across SOTA offensive language classifiers.", "In contrast, vradivchev performs the best with an accuracy of 69 but is also the most vulnerable to our attack model, with a mean drop in accuracy of 37 (range: 29–46), the highest drop of any offensive language classifier.", "Note that the BLSTM attention classifier used for attention-based word selection could also be used as an internal classifier.", "However, since this strategy did not perform as well as the SOTA classifiers, we do not include these results in the main analysis.", "This mean is 27 for Perspective and 29 for MIDAS.", "The stark difference between the two BERT systems' robustness most likely stems from the preprocessing step.", "BERT is a context-aware system.", "While NULI's preprocessing helps add context (e.g., converting emojis to text), vradivchev's hinders it.",
"Specifically, vradivchev removes stop words.", "This could be a problem, as removing this additional information causes the system to miss out on context during training.", "Then, as the attack is more likely to focus on changing non-stop words, vradivchev loses both contextual information (via stop-word removal) and offense-indicating tokens (the main information it focused on during training).", "NULI is the most effective surrogate classifier for the adversary while MIDAS is the least effective.", "Again, focusing on GS-FT, NULI helps the adversary as the surrogate classifier the most by causing an average accuracy drop of 32 (range: 19–46), compared to vradivchev (avg: 28, range: 18–39), Perspective (avg: 25, range: 13–37), and MIDAS (avg: 21, range: 13–29).", "This again emphasizes BERT-based methods' ability to understand context and use it effectively in attacks, as also seen in previous research (Li et al., 2020).", "Gröndahl.", "Table 2 shows the results when the methods proposed by Gröndahl et al. (2018) are used to obfuscate.", "Note that this approach does not use a surrogate classifier.", "The simpler whitespace and 'love'-word based attacks proposed by Gröndahl et al. (2018) have little to no effect on offensive language classifiers which contain a word segmentation pre-processing step.", "These classifiers include NULI (average drop: -3), vradivchev (average drop: 11), and Lexicon (average drop: 0).", "MIDAS, being ill-equipped in this regard, sees a drop of 64 when all spaces are removed and 'love' is added to the text.", "However, when we add a simple word segmentation step during pre-processing, the attack loses effectiveness.", "For example, the 'Remove space, Add love' attack is reduced to a drop of 33 with this shielding pre-processing step, compared to 64 without it.", "Similarly, Perspective also sees drops of up to 38 in these settings.", "VIPER.", "Like a whitespace attack, VIPER attacks can be easily prevented using a trivial text preprocessing step.", "To demonstrate this, we added a pre-processing 'shielding' step to each system which replaces any non-standard characters with standard ones.", "The results for shielded VIPER attacks are found in Table 2 (note: the full results against VIPER without shielding are found in the appendix).", "This is, in essence, the reverse of VIPER's obfuscation-by-character-translation process.", "Non-standard characters are those which exist outside 'a-zA-Z', numbers, and punctuation.", "To do this, as do the VIPER authors, we leverage the NamesList from the Unicode database (https://www.unicode.org/Public/UCD/latest/ucd/NamesList.txt).", "For any non-standard character, the description is searched for in the NamesList and the character which appears after LETTER in the description is used for substitution.", "For example, 'ȃ' is described as LATIN SMALL LETTER A WITH INVERTED BREVE, and hence would be replaced with 'a'.", "This simple pre-processing step reduces VIPER's average attack rate from 37 to 7, as shown in the VIPER results.", "In contrast, our proposed attack is not preventable through such simple preprocessing.", "The attack results against SOLID are found in Table 3.",
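A sketch of such a shielding step, assuming Python's unicodedata module (which exposes the same character names as the Unicode NamesList); treating every ASCII character as 'standard' is a simplification of the a-zA-Z/numbers/punctuation rule above:

```python
import re
import unicodedata

def shield(text: str) -> str:
    """Map visually perturbed characters back to plain letters via their
    Unicode names, e.g. LATIN SMALL LETTER A WITH INVERTED BREVE -> 'a'."""
    out = []
    for ch in text:
        if ch.isascii():  # simplification: treat all ASCII as standard
            out.append(ch)
            continue
        name = unicodedata.name(ch, "")
        match = re.search(r"\bLETTER ([A-Z])\b", name)
        if match:
            letter = match.group(1)
            out.append(letter if "CAPITAL" in name else letter.lower())
        else:
            out.append(ch)  # no letter in the description; keep the character
    return "".join(out)
```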
"We see similar attack success as seen on OLID, finding even greater drops.", "Specifically, 75% of attacks cause a drop of at least 40 and 100% of attacks cause a drop of at least 33.", "FT embeddings maintain a majority of the meaning and readability.", "We test the readability of a sample of 50 tweets from the SOLID dataset, all of which were modified by FT.", "We asked three crowdworkers to assess the 50 tweets for readability.", "For comparison, we asked additional crowdworkers to assess the readability of the original texts.", "This helps explore the true drop in readability of a text.", "Additionally, we showed three other crowdworkers the original text as well and asked them to assess if the obfuscated text conveyed the same meaning as the original (see Section 4.2 for details).", "We finally combined the crowdworkers' votes by taking a majority vote for each example.", "Table 4 presents the results.", "We find that FT scored slightly lower in terms of readability than the original texts, but found replacements with similar meaning.", "Specifically, readability drops from 74% to 70% for fully readable, but nearly two-thirds retain the same meaning and 96% retain at least partial meaning.", "These numbers help indicate the strength of the attack, even when leveraging a crafty collection of word substitutions.", "To provide insights into texts which retained full meaning versus partial meaning, Table 5 shows a few examples of tweets in their respective categories as voted by crowdworkers.", "FT is able to find many appropriate, non-traditional replacements.", "For example, 'shit' is replaced with 'shxt' in several instances, which helps maintain meaning while evading classification.", "As another example, 'phising', a misspelling of 'phishing', is substituted for 'fake'.", "In context, this substitution makes sense.", "Note that while some examples are misspellings, these crafty modifications are ones that are mined from our large evasion text collection and not algorithmically generated misspellings.", "However, some errors are found after replacement.", "For example, in the 'Not Similar' instance, FT replaces 'fuck' with 'bruh', and 'shut' with 'walked'.", "These errors demonstrate room for improvement when selecting a candidate.", "As discussed in Section 3.1, the adversary's strategy is to make crafty word replacements using a new embedding generated from an evasion collection (here made of deleted tweets not detected by an offense classifier).", "Results show that these embeddings successfully support the adversary at evading offensive language classifiers while maintaining readability and semantics.", "For further insights, we compare the off-the-shelf pretrained (Pre) embedding with the embedding fine-tuned on the evasion collection (FT).", "We examine the embeddings using as probes the 59 words which are both in the offensive Lexicon (Wiegand et al., 2018) and in the OLID test set.", "For each word we get the 20 most similar words from Pre and from FT for comparison.", "FT places evasive words closer to offensive probe words.", "We calculate the average position of the first evasive word amongst the 20 most similar words (evasion is determined by the Perspective API).", "Pre has an average distance of 11, while FT has an average distance of 3.",
"Thus, on average, FT is more likely to find evasive replacements.", "For example, in Pre, 'dispicable' appears as the 3rd most similar word to 'despicable', but it is the most similar in FT.", "Since FT could contain some unintelligible words, we repeat the experiment and filter out substitute words used by fewer than 3 different users.", "The same overall trend still holds.", "Updated embeddings learn creative replacements.", "We manually compare the entries in the two lists (FT and Pre) of substitute words for each probe word.", "FT learns creative replacements absent in Pre.", "Examples include the word 'azz' being the most similar word to 'ass' in FT, but being absent from the most-similar-word list for Pre.", "Similarly, 'niggah' appears as a replacement for 'bitch' in FT, but not in Pre.", "These examples, along with the previous distance analysis, illustrate the craftiness in our evasion dataset.", "We first review related work on the robustness of text classification in general and then closely related research on evading offensive language classifiers.", "Evading Text Classifiers.", "Prior work has explored ways to evade text classification in general.", "Li et al. (2019) showed that character-level perturbations such as misspellings and word-level perturbations using off-the-shelf GloVe embeddings can evade text classifiers.", "Deri and Knight (2015) proposed an approach to create portmanteaus, which could be extended to adversarial texts.", "Behjati et al. (2019) added a sequence of words to any input to evade text classifiers.", "Zhao et al. (2018) proposed a GAN to generate adversarial attacks on text classification tasks.", "Li et al. (2020) leverage BERT to propose replacement words, Jin et al. (2020) leverage word embeddings, and Ren et al. (2019) leverage WordNet.", "In contrast to prior work on evading text classifiers, our work includes approaches to leverage embeddings built from a special evasion text collection.", "Robustness of Text Classifiers.", "Our work is also relevant to prior studies of the robustness of text classifiers to adversarial inputs.", "Rojas-Galeano (2017) showed that primitive adversarial attacks (e.g., misspellings) can be detected and countered using edit distance.", "Hsieh et al. (2019) evaluated the robustness of self-attentive models in tasks of sentiment analysis, machine translation, and textual entailment.", "We examine the robustness of similar models; however, we fine-tune our embeddings to be task-specific, while they do not, and we also test on state-of-the-art offensive language classifiers.", "Evading Offensive Language Classifiers.", "Gröndahl et al. (2018) examined the robustness of hate speech classifiers against adding typos, whitespace, and non-hate words to text.", "As discussed earlier, prior work has shown that such primitive perturbations can be detected and reversed (Li et al., 2019; Rojas-Galeano, 2017).", "In contrast, we focus on more crafty text perturbations in our work.", "Ji and Knight (2018) surveyed the ways text has been encoded by humans to avoid censorship and explained the challenges which automated systems would have to overcome.", "This work does not propose an automated approach for text perturbation.", "Eger et al. (2019) proposed VIPER for visual adversarial attacks.",
"We implemented VIPER and the attacks of Gröndahl et al. (2018) as baselines and showed that our approach is more successful overall.", "Overall, our work advances the research in this space by investigating the robustness of offensive language classifiers against crafty adversarial attacks.", "In this paper, we showed that state-of-the-art offensive language classifiers are vulnerable to crafty adversarial attacks.", "Our proposed adversarial attacks, which leverage greedy and attention-based word selection and context-aware embeddings for word replacement, were able to evade offensive language classifiers while preserving readability and semantics much better than prior, simpler adversarial attacks.", "We report accuracy drops of up to 46 points, or 67%, against state-of-the-art offensive language classifiers.", "Furthermore, unlike VIPER and simpler attacks, our proposed attack cannot be easily prevented using pre-processing strategies.", "The user study showed that our adversarial attack was able to maintain similar readability with only a slight drop in semantic preservation.", "Our work also suggests ways to improve the robustness of offensive language classifiers through adversarial training (Kurakin et al., 2017; Madry et al., 2018; Tramèr et al., 2018).", "More specifically, our attack relies on the evasion collection, which contains crafty adversarial examples that evade detection by offensive language classifiers but are flagged based on manual feedback by users or human moderators.", "Thus, offensive language classifiers can be adversarially trained on the latest evasion collection from time to time to improve their robustness to ever-evolving adversarial attacks.", "In this context it is noteworthy that the continuous availability of large-scale manual feedback is quite unique to the problem of offensive language classification, where popular online social media platforms employ thousands of human moderators (Barrett, 2020)." ]
[ "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "method", "objective", "objective", "abstain", "objective", "abstain", "abstain", "method", "result", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "abstain", "other", "other", "abstain", "other", "other", "other", "objective", "other", "objective", "other", "abstain", "objective", "result", "objective", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain" ]
[ "In neural machine translation (NMT), monolingual data are usually exploited through a so-called back-translation: sentences in the target language are translated into the source language to synthesize new parallel data.", "While this method provides more training data to better model the target language, on the source side, it only exploits translations that the NMT system is already able to generate using a model trained on existing parallel data.", "In this work, we assume that new translation knowledge can be extracted from monolingual data, without relying at all on existing parallel data.", "We propose a new algorithm for extracting from monolingual data what we call partial translations: pairs of source and target sentences that contain sequences of tokens that are translations of each other.", "Our algorithm is fully unsupervised and takes only source and target monolingual data as input.", "Our empirical evaluation points out that our partial translations can be used in combination with back-translation to further improve NMT models.", "Furthermore, while partial translations are particularly useful for low-resource language pairs, they can also be successfully exploited in resource-rich scenarios to improve translation quality.", "Neural machine translation (NMT) systems usually require a large quantity of high-quality bilingual parallel data for training.", "However, for most language pairs, we do not have such resources, or only in very small quantities, mainly because they are costly to produce.", "On the other hand, monolingual corpora are readily available in large quantity for many languages.", "Previous work has proposed various strategies to integrate monolingual data into NMT systems and has confirmed their usefulness to improve NMT systems.", "The so-called back-translation of monolingual data (Sennrich et al., 2016) is undoubtedly the most prevalent one.", "This approach simply uses a target-to-source MT system to translate monolingual data in the target language into the source language.", "The produced new synthetic parallel corpus can be used together with the original parallel data to increase the size of the training data, and eventually to improve NMT systems significantly and consistently.", "However, on the source side, the synthetic data only contain data that can be generated by the back-translation system trained on some existing parallel data.", "Previous work has also studied the extraction of translation pairs of source and target sentences from monolingual data in their respective languages.", "They have been shown to be useful to train better statistical machine translation (SMT) systems, especially in low-resource conditions.", "Existing methods on sentence pair extraction mainly rely on the availability of comparable corpora as the source of accurate sentence pairs (Abdul Rauf and Schwenk, 2011), or on the robustness of SMT against noise (Goutte et al., 2012) because sentence pairs extracted from unrelated monolingual corpora tend to be noisy (Tillmann and Xu, 2009; Marie and Fujita, 2017).", "Most of them also require pre-trained accurate translation models, those of SMT systems for instance, that we may not have in low-resource conditions.", "Moreover, unlike SMT, NMT has been shown to deal very poorly with noisy training data and still largely underperforms SMT for low-resource language pairs (Koehn and Knowles, 2017) for which comparable corpora are usually not available.", "Even without an accurate translation model, we still have the 
possibility of extracting sentence pairs from unrelated source and target monolingual data.", "However, this is very challenging since we have no guarantee that there are sentence pairs actually retrievable from a given pair of source and target monolingual corpora.", "In this work, we assume", "(i) that a given pair of monolingual corpora contain sentence pairs that are at least partial translations, i.e., pairs of source and target sentences containing phrases (sequences of tokens) that are translations of each other, and", "(ii) that such pairs can help train better NMT systems.", "On these assumptions, we propose a new algorithm that extracts partial translations from trillions of candidate sentence pairs without any supervision.", "Relying on an unsupervised phrase table, our algorithm identifies phrases in a source sentence that have likely translations in a target sentence.", "The extracted partial translations often contain unrelated parts besides aligned phrases.", "Therefore, we also apply a simple but very effective post-processing to make such noisy sentence pairs exploitable for a target-to-source NMT model, as exemplified in Figure 1.", "We report on significant improvements in translation quality for two language pairs and under different experimental conditions when using our extracted partial translations to train NMT systems.", "While our method is especially designed to provide new training data for low-resource language pairs, we also observed significant improvements over a strong NMT system trained on a large quantity of parallel data.", "Furthermore, we demonstrate the complementarity of our approach with back-translation.", "The whole framework for extracting partial translations is presented in Figure 2.",
"To extract partial translations, we first induce a phrase table that contains phrases in the source language paired with their most probable translations in the target language (Section 2.1).", "These phrase pairs are collected from the same monolingual data from which we extract partial translations.", "[Figure 2: The framework for extracting partial translations from monolingual data.]", "Given the induced phrase table, we search for sentence pairs that are the most likely partial translations in the monolingual data (Section 2.2).", "Finally, the extracted sentence pairs are post-processed (Section 2.3).", "Recent methods addressed the task of finding word translations from monolingual data without any supervision (Lample et al., 2018a; Artetxe et al., 2018a).", "On the other hand, Marie and Fujita (2018) presented a method for inducing phrase tables from monolingual data using a weakly-supervised framework.", "To make our approach useful in as many translation tasks as possible, including very low-resource scenarios, we propose a fully unsupervised version of the method in Marie and Fujita (2018).", "Using phrases instead of only single tokens promotes the extraction of partial translations containing longer sequences of tokens, rather than those with potentially more but discontinuously translated tokens.", "Regarding all n-grams of tokens in the monolingual data as phrases and searching for the translations of each phrase can be extremely costly.", "Therefore, we extract meaningful source and target phrases from their respective monolingual data, following the work by Marie and Fujita (2018) (see Section 3.1 of that paper for the details).", "Beside extracting phrases, we also train word embeddings on the same source and target monolingual data, independently.", "Both source and target embedding spaces are aligned in the same space to make word embeddings bilingual, without any supervision (Artetxe et al., 2018a).", "Using these bilingual word embeddings, we compute bilingual phrase embeddings for each of the extracted phrases through the element-wise addition of the embeddings of the constituent words of the phrases.", "Given the source and target phrase sets, we take their Cartesian product to generate all possible pairs of source and target phrases and compute the cosine similarity of each pair using their phrase embeddings.", "Each pair is also associated with a translation probability (Lample et al., 2018b): $p(t_j \mid s_i) = \frac{\exp(\lambda \cos(emb(t_j), emb(s_i)))}{\sum_{k} \exp(\lambda \cos(emb(t_k), emb(s_i)))}$ (1), where $t_j$ is the j-th phrase in the target phrase list, $s_i$ the i-th phrase in the source phrase list, $\lambda$ a parameter to tune the peakiness of the distribution (Smith et al., 2017), which we set to 30 since it gives consistently good results in our preliminary experiments, $\cos(\cdot,\cdot)$ the cosine similarity between two phrase embeddings, and $emb(\cdot)$ a function returning the bilingual embedding of a given phrase.", "In practice, to keep the set of phrase pairs to score manageable, we first filter the 300k most frequent phrases in each of the source and target phrase sets.", "This means that we still have to compute the cosine similarity for 90 billion phrase pairs (300k × 300k); this can be done very efficiently on GPU (https://github.com/facebookresearch/faiss).", "Retrieved phrase pairs may have a very low translation probability, especially when dealing with distant languages and/or noisy monolingual data.", "Therefore, we keep only the n-best target phrases for each source phrase, according to Eq. (1).",
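A small sketch of the phrase scoring in Eq. (1), assuming the word vectors are already aligned in a shared bilingual space; the element-wise addition for phrase embeddings follows the description above, and all names are illustrative:

```python
import numpy as np

def phrase_embedding(phrase, word_vecs):
    """Bilingual phrase embedding: element-wise addition of the embeddings
    of the phrase's constituent words (word_vecs maps word -> np.ndarray)."""
    return np.sum([word_vecs[w] for w in phrase.split()], axis=0)

def translation_probs(src_emb, tgt_embs, lam=30.0):
    """Eq. (1): softmax over cosine similarities, with peakiness parameter lambda.

    src_emb:  embedding of one source phrase, shape (d,)
    tgt_embs: embeddings of the candidate target phrases, shape (n, d)
    Returns p(t_k | s) for every candidate, shape (n,).
    """
    src = src_emb / np.linalg.norm(src_emb)
    tgt = tgt_embs / np.linalg.norm(tgt_embs, axis=1, keepdims=True)
    cos = tgt @ src                           # cosine similarities, shape (n,)
    scores = np.exp(lam * (cos - cos.max()))  # shift by max for numerical stability
    return scores / scores.sum()
```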
"While keeping the coverage of the source monolingual data by our phrase table the same, we ensure that the phrase translations for each source phrase are the most accurate among all the collected target phrases.", "NMT architectures expect parallel sentence pairs as training data; even if we have accurate phrase pairs, they cannot be used directly for training.", "Therefore, we propose an algorithm for extracting sentence pairs from the monolingual data that match the best possible combinations of phrase pairs from the induced phrase table.", "The pseudo-code of this algorithm is presented in Algorithm 1.", "For each source sentence S (l.2), the algorithm first selects from the phrase table pt all the phrase pairs $P_s$ whose source side appears in S (l.3).", "It then creates a bag of target words $B_t$ by collecting all the words from the target phrases of $P_s$ (l.4).", "Subsequently, we keep the m-best target sentences $T_m$ (ll.5–11) according to the following score: $F_t(S, T) = \frac{2 c_s c_t}{c_s + c_t}$ (2), where $c_s = k_t / len(S)$ and $c_t = k_t / len(T)$, with S and T respectively a source and a target sentence, $k_t$ the number of tokens in T that are covered by $B_t$, and $len(\cdot)$ a function that returns the number of tokens in a sentence.", "With the harmonic mean $F_t$ between $c_s$ and $c_t$, the algorithm searches for target sentences containing as many words translating source tokens as possible, while penalizing the retrieval of very long target sentences that may also contain many tokens having no counterparts in the source sentence.", "Then, the algorithm re-ranks the target sentences in $T_m$ with the phrase-based forced decoding (PBFD) algorithm (Zhang et al., 2017) (l.12).", "PBFD searches for the best combination of phrase pairs covering S and each target sentence in $T_m$, using the phrase translation probability computed by Eq. (1).", "However, the original PBFD algorithm penalizes sentence pairs with words that are not covered by any phrase pair.", "It tends to favor very short sentences that are potentially less exploitable for NMT systems, or that may not even be sentences, as can happen when dealing with noisy monolingual data (e.g., equations, rows of a table, titles, etc.).", "Therefore, we use a slightly modified version which does not penalize uncovered words on the target side, in order to favor the extraction of longer target sentences that may contain more translated tokens.", "Finally, we retain only $P_{translations}$: the m top sentence pairs with the highest PBFD scores (l.15).",
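A sketch of the retrieval step built on Eq. (2), under the reconstruction above (both coverage ratios use $k_t$, the number of covered target tokens); the substring-based phrase matching and the data structures are simplifying assumptions, not the paper's Algorithm 1 verbatim:

```python
def f_t(source_tokens, target_tokens, b_t):
    """Eq. (2): harmonic mean of the coverage ratios c_s and c_t, where k_t is
    the number of tokens of T covered by the bag of target words B_t."""
    k_t = sum(1 for tok in target_tokens if tok in b_t)
    if k_t == 0:
        return 0.0
    c_s = k_t / len(source_tokens)
    c_t = k_t / len(target_tokens)
    return 2 * c_s * c_t / (c_s + c_t)

def retrieve_m_best(source_tokens, target_corpus, phrase_table, m=10):
    """Keep the m target sentences with the highest F_t for one source sentence.

    phrase_table: dict mapping a source phrase to its 1-best target phrase.
    target_corpus: iterable of tokenized target sentences.
    """
    source_text = " ".join(source_tokens)
    # B_t: all words of the target phrases whose source side appears in S.
    b_t = {w for src, tgt in phrase_table.items()
           if src in source_text for w in tgt.split()}
    scored = [(f_t(source_tokens, t, b_t), t) for t in target_corpus]
    scored.sort(key=lambda x: x[0], reverse=True)
    return scored[:m]
```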
"Since the extracted sentence pairs are only partial translations, incorporating them as they are into the training data for NMT may mislead the training of the model due to their noisiness.", "Since our PBFD algorithm does not penalize target words not covered by any phrase pair, the target side of our partial translations contains longer sentences than the source side, potentially with many words unaligned with any word in the source sentences.", "Nonetheless, the back-translation approach has proven that NMT can be trained on noisy data at the source side and fluent sentences at the target side.", "Following this, we use partial translations only to train target-to-source NMT systems, as for back-translated data.", "It means that the source and target languages of our extraction algorithm become respectively the target and source languages of the NMT system.", "We want the NMT system to give as much attention as possible to the phrases of the source sentence that have a translation in the target sentence, while ignoring as much as possible the remaining tokens of the source sentence that are likely to be noise, i.e., not translated by the target sentence.", "Dropping these unaligned tokens is not an appropriate solution since it would produce unlikely sequences of translated tokens.", "Then, to make the decoder pay its attention to the translated part of the source sentence, we simply replace all the tokens not covered by the best combination of phrase pairs found by the PBFD algorithm with a made-up token, UNKPP (we made this token different from the usual token reserved for unknown words in the vocabulary, since they are of a different nature).", "We expect this post-processing step to suit particularly well the training of a Transformer NMT model (Vaswani et al., 2017), because it can easily learn to pay no attention to UNKPP and more attention to correctly translated tokens thanks to the multi-head attention mechanism.", "Moreover, the Transformer model does not memorize complete sequences and makes time steps independent, unlike recurrent neural networks (RNNs).", "It instead uses positional encodings and has a better ability in linking important features from the entire sequence (Chen et al., 2018), which may ease learning from noisy sequences, such as the ones created by introducing UNKPP.", "We show in Section 4 that using UNKPP tokens instead of dropping uncovered tokens leads to a better model.", "Nonetheless, a better strategy that we leave for future work could be to apply some forced decoding, with an SMT system for instance, that translates in the source sentence the parts of the target sentence that are not translated, while preserving the partial translations detected by our algorithm in the source sentence.",
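A minimal sketch of the UNKPP post-processing, assuming PBFD returns the token spans of S covered by its best phrase-pair combination (the span format is an assumption made for illustration):

```python
def mask_uncovered(tokens, covered_spans, mask_token="UNKPP"):
    """Replace every source token not covered by the best PBFD phrase-pair
    combination with the made-up token UNKPP, instead of dropping it.

    covered_spans: (start, end) token-index ranges (end exclusive) covered
    by the phrase pairs chosen by PBFD (assumed input format).
    """
    covered = set()
    for start, end in covered_spans:
        covered.update(range(start, end))
    return [tok if i in covered else mask_token for i, tok in enumerate(tokens)]

# Illustrative example (last token assumed uncovered by PBFD):
# mask_uncovered(["der", "Mann", "wurde", "festgenommen", "gestern"], [(0, 4)])
# -> ["der", "Mann", "wurde", "festgenommen", "UNKPP"]
```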
"We experimented on three language pairs with different degrees of relatedness between the languages of each pair: English–German (en–de), English–Turkish (en–tr), and Bengali–Malay (bn–ms).", "While our approach is dedicated to improving translation quality for low-resource language pairs, we included the en–de pair for a detailed analysis of the impact of using partial translations in addition to much more training data.", "bn–ms is expected to be an extremely difficult translation task, because only a small quantity of parallel data is available to start with (Section 3.1) and Bengali and Malay are very distant languages, which makes it very difficult to train useful unsupervised bilingual word embeddings (Søgaard et al., 2018), a key element for inducing the phrase table.", "To train baseline NMT systems, we used for en–de and en–tr 100k parallel sentences randomly extracted from the parallel data provided for the WMT18 News Translation Task (http://www.statmt.org/wmt18/translation-task.html), except the ParaCrawl corpus for en–de.", "For bn–ms, we used the 18k sentence pairs released by the Asian Language Treebank (ALT) project (Riza et al., 2016) (http://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/).", "As validation and test data for en–de and en–tr, we used Newstest2016 and Newstest2017, respectively, provided by WMT18.", "For bn–ms, we used the official development and test data provided by the ALT project.", "We chose this amount of training data for en–de and en–tr, since it suits our need for a low-resource translation task while we can still use the available parallel data for further analysis.", "Moreover, we needed enough parallel data to train an NMT model that can produce useful back-translated data.", "In our preliminary experiment, we found out that 100k sentence pairs satisfy the minimum amount to train NMT models for useful back-translation.", "As such, we did not succeed in training useful models for bn–ms, due to the difficulty of the task, but still decided to report on the results to provide insights and matters for future work.", "As for monolingual data, we used the English (239M lines) and German (237M lines) NewsCrawl corpora provided by WMT18, and the NewsCrawl and Common Crawl corpora for Turkish (104M lines).", "We extracted monolingual data ourselves from the Common Crawl project (http://commoncrawl.org/) for Bengali (5.3M lines) and Malay (4.6M lines).", "To build an unsupervised phrase table, we first extracted the 300k most frequent phrases of up to 6 tokens from the entire monolingual data.", "We also trained 200-dimensional word embeddings on the same data with fastText (https://fasttext.cc/).", "For each source phrase, we retained only the 1-best target phrase in the induced phrase table.", "The search for partial translations was performed only for a random sample of 1M lines from the monolingual data for each target language.", "For each of these lines, we searched for the best partial translation with our algorithm in up to 10M lines randomly extracted from the monolingual data for the source language.", "Then, to maintain a 1:1 ratio with the parallel data, we retained the 100k best partial translations for en–de and en–tr, and the 18k best partial translations for bn–ms (the extraction of 100k partial translations from $10^{13}$ sentence pairs (1M × 10M) required around 26 hours of computation using 100 CPUs).", "All our NMT systems, including baselines, were the Transformer model (Vaswani et al., 2017) trained with Marian (Junczys-Dowmunt et al., 2018).", "Note that we fine-tuned the hyper-parameters for training on our validation data for each language pair in order to get the best possible baseline systems.", "We then apply the same hyper-parameters in all the experiments for the given language pair.", "To train systems with partial translations (partial), we simply mixed them with the original parallel data during training.", "We also evaluated the systems using back-translated (backtr) and copied (copy) (Currey et al., 2017) data, separately mixed with the original parallel data (the copy approach simply copies the target sentences to the source side; it surprisingly offers good results in low-resource conditions, and a good complementarity with back-translation, for languages with some orthographic similarity).", "Note that these data were generated from the same target sentences sampled for extracting partial translations: partial, backtr, and copy had the same target side but different source sides (with some source sentences that may be identical).", "Our NMT systems were evaluated with detokenized cased BLEU.", "The results of our experiments are presented in Table 1.",
"For en–de and en–tr, the baseline systems resulted in a poor translation quality, below 10 BLEU points.", "This highlights how critical it is to get more training data to better train an NMT model for low-resource translation tasks.", "Adding 100k synthetic parallel sentences generated by back-translation (backtr) improved translation quality by 2.0 and 2.1 BLEU points for en–de and en–tr, respectively.", "Surprisingly, the simplest copy method brought improvements similar to backtr.", "Furthermore, we also observed the complementarity of backtr and copy (backtr + copy), with 4.2 and 2.3 BLEU points of improvements for en–de and en–tr, respectively, over the baseline system.", "To verify that this is not the consequence of just giving more weight to the target monolingual data that may be in-domain, we also trained backtr + backtr but did not observe any improvements over backtr.", "For en–de, the system using our extracted partial translations (partial) outperformed backtr and copy by 0.8 and 0.9 BLEU points, respectively, and the baseline system by 2.8 BLEU points.", "For en–tr, partial also significantly outperformed the baseline system, by 1.1 BLEU points.", "However, backtr and copy brought larger improvements.", "We can explain the difference between en–de and en–tr by the fact that Turkish is more distant from English than German.", "This makes unsupervised bilingual word embeddings more difficult to train for en–tr; they are consequently significantly less accurate (Søgaard et al., 2018).", "Extraction of accurate and useful partial translations from monolingual data is a more difficult task for en–tr.", "While these three kinds of synthetic parallel data, backtr, copy, and partial, present the same target sentences, we found out that mixing all of them with the original parallel data led to the best system (backtr + copy + partial).", "This result shows the complementarity of these three datasets, thanks to the diversity of the source sides generated by different means.", "For instance, partial provides new original translations that were not generated by back-translation.", "As expected, bn–ms is a very difficult task for NMT due to the small size of the training data.", "We were not able to train an NMT model that can generate useful back-translations.", "The copy method was also unhelpful since Bengali and Malay have different writing systems.", "Our partial translations did not help either, presumably due to the difficulty in unsupervised learning of bilingual word embeddings.", "Note also that we used much less monolingual data for this language pair to train the word embeddings.", "This last result is disappointing, but it confirmed that unsupervised bilingual word embeddings are still far from being useful for truly low-resource and distant language pairs.", "We induced the phrase table from the 300k most frequent source and target phrases.", "[Table 2: BLEU scores of NMT systems based on a phrase table induced from source and target single words, or using the 1-best (as in our default configuration), 2-best, or 5-best target phrase translations for each source phrase. Training data | en–de | en–tr: baseline | 7.1 | 9.3; 1-best target word | 8.6 | 9.9; 1-best target phrase | 9.9 | 10.4; 2-best target phrases | 7.6 | 9.5; 5-best target phrases | 7.5 | 9.3.]", "An obvious and actually simpler alternative is the use of words instead of phrases.",
instead of phrases.", "Table 2 (BLEU scores of NMT systems based on a phrase table induced from source and target single words, or using the 1-best (as in our default configuration), 2-best, or 5-best target phrase translations for each source phrase; columns en→de / en→tr): baseline 7.1 / 9.3; 1-best target word 8.6 / 9.9; 1-best target phrase 9.9 / 10.4; 2-best target phrases 7.6 / 9.5; 5-best target phrases 7.5 / 9.3.", "Using 300k words instead of 300k phrases, for instance, results in a similar cost for phrase table induction but involves a larger vocabulary, introducing better coverage of the monolingual data.", "However, involving more, and consequently less frequent, words means that it also introduces words for which the word embedding will be noisier.", "Another important decision that we made when inducing the phrase table was to take only the 1-best target phrase for each source phrase.", "Involving noisier translations would have a larger impact on our algorithm by inflating the size of B_t, which would contain more and noisier target tokens, resulting in undeservedly high F_t scores for target sentences containing these tokens even though they are less likely to be the translations of the corresponding source phrase.", "Table 2 presents results obtained when using single words instead of phrases to induce the phrase table, and when using the 1-, 2-, or 5-best target phrases for each source phrase.", "For en→de, using words instead of phrases still led to useful partial translations, with an improvement of 1.5 BLEU points.", "However, it was 1.3 BLEU points lower than when using phrases.", "The use of more than one target phrase for each source phrase in the induced phrase table resulted in the retrieval of noisier partial translations that are much less useful for training better NMT models.", "We observed similar tendencies for en→tr.", "One of the strongest assumptions of this work is that partial translations bring translation knowledge complementary to the manually created parallel data used to train the NMT system.", "This new knowledge is unbiased by the existing parallel data since we induce the phrase table without using any given parallel data.", "On the other hand, we can train a phrase table on the parallel data used to train the NMT system and use it to extract partial translations.", "Table 3 (BLEU scores of NMT systems based on a phrase table induced from monolingual data or based on a standard phrase table trained on the same parallel data also used to train the baseline system; columns en→de / bn→ms): baseline 7.1 / 6.1; induced phrase table 9.9 / 5.5; standard phrase table 9.4 / 6.3.", "Owing to the supervision, we can expect such a phrase table to be much more accurate than the induced phrase table.", "However, it would introduce a strong bias that encourages the retrieval of partial translations similar to the parallel data already used to train the system.", "Consequently, the extracted partial translations may be less useful than those extracted with an induced phrase table.", "To test the above assumption, we trained a standard phrase table on the given parallel data, extracted partial translations using it, and evaluated their impact on translation quality.", "Table 3 shows the results for en→de and bn→ms.",
"The results for en→de support our hypothesis: using a standard phrase table for extracting partial translations led to a drop of 0.5 BLEU points compared to the use of an induced phrase table.", "In contrast, for bn→ms, the standard phrase table achieved significantly better results than the induced phrase table and brought a slight improvement over the baseline system.", "We speculate that the standard phrase table, trained only on 18k sentence pairs, is not strong enough to bias the extraction of partial translations.", "By replacing unaligned tokens, identified by the PBFD algorithm, with a made-up token UNKPP, we aimed to guide the decoder to ignore them during training.", "This section explores the impact of this post-processing by comparing the translation quality of NMT systems trained on partial translations without any post-processing (original), post-processed by removing unaligned source tokens (dropped), and our proposed method that replaces unaligned tokens with UNKPP (partial).", "Table 4 presents our results.", "Table 4 (BLEU scores of NMT systems trained on partial translations with (dropped, partial) and without (original) post-processing; columns en→de / en→tr): baseline 7.1 / 9.3; original 6.2 / 7.7; dropped 8.8 / 10.0; partial 9.9 / 10.4.", "Without any post-processing, using partial translations brought a significant drop in translation quality of 0.9 BLEU points for en→de, and 1.6 BLEU points for en→tr, from the baseline system trained only on parallel data.", "This is expected since the partial translations can be very noisy, with many unaligned tokens for which the Transformer model will still try to learn a translation in the target sentence.", "Removing them helps significantly, with, for instance, a 1.7 BLEU point improvement over the baseline system for en→de.", "However, their removal hides the existence of unaligned tokens and produces sequences of tokens that are unlikely in the source language, presumably misleading the training of the NMT model.", "Indeed, replacing them with a made-up token further improved the translation quality by 1.1 BLEU points.", "The baseline systems evaluated in Section 3 were trained on 100k parallel sentences and augmented with the same amount of back-translated sentences and partial translations.", "Our algorithm retrieved the best partial translation for each one of the 1M source sentences, and then ranked them according to the PBFD score in order to select the most accurate ones.", "In fact, partial translations at lower ranks contained more unaligned tokens, as shown in Figure 3, and also incorrectly aligned tokens.", "Using these noisier partial translations may disturb the training of the NMT system.", "In contrast, we can easily increase the quantity of back-translated data of a similar quality.", "To verify this assumption, we evaluated NMT models for en→de and en→tr trained on different quantities of original parallel data, back-translated data, and partial translations.", "We did not oversample the original parallel data to match the size of original and synthetic parallel data, as commonly performed in multilingual NMT (Johnson et al., 2017).", "The results are presented in Table 5.", "As expected, using more back-translated data was much more helpful than using more partial translations.", "For en→de, in combination with 100k parallel sentences, using 300k back-translated sentences achieved better results than using 300k partial translations, with an advantage of 2.4 BLEU points, in contrast to our
observation with 100k additional data (Table 1).", "The gap was even more significant when using 1M additional sentences: back-translated data achieved 4.9 BLEU points more than partial translations.", "Nonetheless, for en→de, over the configuration using 100k partial translations (9.9 absolute BLEU points), using 300k and 1M partial translations improved by 0.9 and 2.1 BLEU points, respectively.", "We observed a similar tendency for en→tr.", "Searching for partial translations in more monolingual data would result in the extraction of a larger number of more accurate partial translations, and presumably help obtain even better NMT models.", "Mixing partial translations, back-translated data, and parallel data remains our best configuration.", "We consistently obtained better results than when using only either partial translations or back-translated data as training data.", "Even when using the full parallel data of 5.6M sentence pairs and 1M back-translated sentences for en→de, partial translations still brought an improvement of 0.5 BLEU points.", "For this experiment, the NMT system for back-translation was also trained on the full parallel data.", "Our best system reached 28.2 BLEU points.", "This is only 0.1 BLEU points lower than the best reported result at WMT17 for this task.", "The winning systems used much more back-translated data and an ensemble of several models for decoding.", "This last result confirms that using partial translations as additional training data also has the potential to improve a state-of-the-art NMT system, while it is much more effective in low-resource scenarios.", "There are various methods for extracting sentence pairs from monolingual corpora.", "However, most of them rely on the availability of document-level information, in comparable corpora for instance, and usually for one specific domain, to efficiently extract accurate sentence pairs (Abdul Rauf and Schwenk, 2011).", "Other methods extract sentence pairs from completely unrelated monolingual corpora (Tillmann and Xu, 2009; Marie and Fujita, 2017).", "However, they still rely on an existing accurate translation model trained on large parallel data, introducing a strong bias in the retrieval of sentence pairs.", "Unlike existing methods, our algorithm for retrieving partial translations is efficient enough to work on large unrelated monolingual data without relying on any document-level information, and it is also fully unsupervised.", "Without any bias toward some existing parallel data, it is very suitable for low-resource scenarios.", "Previous work has also exploited monolingual data in the target language for improving NMT systems (Sennrich et al., 2016; Currey et al., 2017; Hoang et al., 2018).", "As demonstrated in this paper, our approach is complementary to previous work, since partial translations can introduce novel information into training.", "To the best of our knowledge, this is the first work to propose a method for extracting sentence pairs from source and target unrelated monolingual corpora that can be used to train better NMT systems, without requiring any modification of current NMT model architectures.", "Wang et al.
(2017) proposed a method to train an RNN-based NMT system on partially aligned translations only.", "However, this method cannot straightforwardly be applied to the state-of-the-art Transformer architecture.", "In contrast, our proposed method does not assume a particular NMT architecture, nor does it require any modifications of the NMT implementation.", "In addition, they assume not only that a phrase table is given for their low-resource language pairs, but also that the phrase pairs in the given phrase table are very accurate.", "We rather focus on augmenting the training data, without assuming phrase pairs of high accuracy.", "Training NMT only on our extracted partial translations could also be worth investigating.", "As confirmed in Section 4.1, the quality of the induced phrase table nevertheless affects the usefulness of the resulting partial translations.", "Recent advances in unsupervised MT (Artetxe et al., 2018b; Lample et al., 2018b) have shown that we can obtain phrase tables of better quality by iterating the generation of synthetic parallel data and the (pseudo-)training of a phrase table on such data.", "We plan to evaluate whether better phrase tables result in more useful partial translations.", "We presented a new algorithm for extracting partial translations from unrelated monolingual corpora.", "Our algorithm is fully unsupervised, i.e., it does not rely on any existing human-made bilingual data, making it suitable for low-resource language pairs.", "We demonstrated that very noisy partial translations can be transformed into useful training data for NMT systems with simple post-processing.", "While we designed our method specifically for low-resource scenarios, we also showed that partial translations are useful for further improving a state-of-the-art NMT system trained on large parallel data and back-translated synthetic parallel data.", "In our future work, we will study the impact of using more partial translations of better quality to train NMT systems.", "We assume that we can collect better partial translations by searching in more monolingual data.", "Moreover, we also observed that the top-ranked sentence pairs extracted by our algorithm may be translations of very good quality.", "We will study the possibility of using such sentence pairs as development data to enable the tuning of unsupervised SMT and NMT systems (Lample et al., 2018b).", "We will also analyze whether our partial translations are useful because of their noisy nature, since noisy synthetic data have recently been proven useful in some specific configurations (Edunov et al., 2018).", "We would like to thank the reviewers for their useful comments and suggestions, and Jingyi Zhang for providing the implementation of the phrase-based forced decoding for our experiments.", "A part of this work was conducted under the program Promotion of Global Communications Plan: Research, Development, and Social Demonstration of Multilingual Speech Translation Technology of the Ministry of Internal Affairs and Communications (MIC), Japan." ]
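The unsupervised phrase table induction described above (300k most frequent phrases, 200-dimensional fastText embeddings, only the 1-best target phrase kept per source phrase) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the function and variable names are hypothetical, and it presumes that phrase embeddings (e.g., averaged fastText word vectors) and a cross-lingual linear mapping between the two embedding spaces have already been obtained by an unsupervised method.

```python
import numpy as np

def induce_phrase_table(src_phrases, tgt_phrases, src_vec, tgt_vec, W):
    """Keep only the 1-best target phrase per source phrase.

    src_vec/tgt_vec: dict mapping a phrase to its embedding (e.g., the
    average of its fastText word vectors); W: linear map aligning the
    source embedding space to the target one, assumed to be learned
    without supervision beforehand.
    """
    tgt_list = list(tgt_phrases)
    T = np.stack([tgt_vec[p] for p in tgt_list])
    T /= np.linalg.norm(T, axis=1, keepdims=True)  # unit-norm rows

    table = {}
    for s in src_phrases:
        q = W @ src_vec[s]
        q /= np.linalg.norm(q)
        sims = T @ q                    # cosine similarities
        best = int(np.argmax(sims))     # retain the 1-best target phrase only
        table[s] = (tgt_list[best], float(sims[best]))
    return table
```

Keeping a single target phrase per source phrase is the design choice motivated above: it keeps the bag of candidate target tokens small and limits the noise that would otherwise inflate the coverage scores.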
[ "abstain", "abstain", "objective", "objective", "method", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "objective", "method", "abstain", "method", "result", "objective", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "other", "objective", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "other", "abstain", "method", "method", "method", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "method", "other", "other", "objective", "objective", "other", "other", "other", "other", "other", "objective", "other", "objective", "objective", "other", "abstain", "abstain", "objective", "abstain", "objective", "result", "method", "method", "result", "method", "method", "other", "other" ]
[ "Reranking models enable the integration of rich features to select a better output hypothesis within an n-best list or lattice.", "These models have a long history in NLP, and we revisit discriminative reranking for modern neural machine translation models by training a large transformer architecture.", "This takes as input both the source sentence as well as a list of hypotheses to output a ranked list.", "The reranker is trained to predict the observed distribution of a desired metric, e.g. BLEU, over the n-best list.", "Since such a discriminator contains hundreds of millions of parameters, we improve its generalization using pre-training and data augmentation techniques.", "Experiments on four WMT directions show that our discriminative reranking approach is effective and complementary to existing generative reranking approaches, yielding improvements of up to 4 BLEU over the beam search output.", "Reranking models take a number of different output hypotheses generated by a baseline model and select one hypothesis based on more powerful features.", "Before the recent re-emergence of neural networks, these models have been well studied for several NLP tasks including parsing (Charniak and Johnson, 2005; Collins and Koo, 2005) and statistical machine translation (Och et al., 2004; Shen et al., 2004).", "Traditional statistical models (SMT) based on n-gram counts made very strong independence assumptions where features would only capture very local context information to avoid sparsity and poor generalization.", "A large n-best list produced by these models would then be passed to a discriminatively trained reranker which leverages features engineered to capture more global context (Och et al., 2004) yielding significant improvements to the quality of the translations.", "On the other hand, modern neural models (NMT) make much weaker independence assumptions because predictions of standard sequence-to-sequence models depend on the entire source sentence as well as the target prefix generated.", "However, reranking may still be beneficial for two reasons: First, NMT systems are subject to exposure bias (Ranzato et al., 2016), i.e., models are never exposed to their own generations at training time, while a reranking model has been trained on model outputs.", "Second, beam search with autoregressive models uses the chain rule to sum individual token-level probabilities to obtain a target sequence probability.", "However, individual probabilities are based on a limited amount of target context, while a reranking model can condition on the entire target context.", "Indeed, recent generative reranking approaches applied to NMT, such as Noisy-Channel Decoding (NCD, Yee et al. 
2019), which leverages a pre-trained language model and a backward model, show strong improvements over beam search outputs, as demonstrated in recent WMT evaluations (Ng et al., 2019).", "In this paper, we explore whether training large transformer models using the reranking objective can further improve performance.", "Our model, dubbed DrNMT, takes as input the entire source sentence and an n-best list of output hypotheses to predict a distribution of sentence-level evaluation scores, such as BLEU.", "(Our approach is general and enables optimizing any user-specified metric, or combinations thereof.)", "This setup is similar to earlier work with SMT, except that the baseline model is an NMT model and the reranker is a big transformer architecture as opposed to a log-linear model on top of discrete or human-engineered features.", "Unfortunately, optimizing for the task of interest does not always lead to better performance.", "Overfitting to the training set is a potential concern, as the reranker has hundreds of millions of parameters yet it receives only one gradient and weight update per source/target sentence pair, as opposed to one per token for standard NMT models.", "In our work, we mitigate overfitting in two ways.", "First, we leverage the success of pre-training by fine-tuning masked language models (MLM; Devlin et al. 2019), which initializes the model with features trained on much more training data.", "Second, we augment the original dataset with back-translated data (BT; Sennrich et al. 2016).", "Experiments show that DrNMT can match the performance of a strong NCD baseline and that their combination leads to further improvements as measured by BLEU, TER and also human evaluation.", "Our method is inspired by the seminal work of Shen et al. (2004) and Och et al. (2004), who introduced and popularized discriminative reranking for SMT.", "Besides using a weaker MT system to generate the n-best list, these works relied on a linear discriminator trained on human-designed features, as opposed to a transformer taking the raw source sentence and hypothesis.", "Most work using NMT has focused on generative reranking methods (Liu et al., 2018; Imamura and Sumita, 2017; Wang et al., 2017), where the reranker's parameters are optimized using a criterion which is different from the metric of interest.", "For instance, Yu et al. (2017); Yee et al. (2019) perform noisy-channel decoding where hypotheses are scored by linearly combining the output of the forward model, a target-side language model and a backward model which scores the source sentence given the hypothesis.", "These methods have shown remarkable improvements over the output of beam decoding, despite not being trained for the reranking task (except for the two or three hyper-parameters of the linear combination of scores, which are tuned on a validation set).", "Another approach belonging to this class of methods is the one proposed by Salazar et al. (2019), which employs the scores from a masked language model (MLM).", "While this method employs a transformer architecture, it is still not trained for the task of interest.", "To the best of our knowledge, there is only concurrent work by Naskar et al.
(2020), which attempts to train a reranker for NMT discriminatively.", "They use a pair-wise margin loss on hypotheses sampled from the NMT model, while we learn to rank the full n-best list produced by beam search.", "Their experiments also show that the reranker performs better when directly conditioned on the source sentence.", "However, they do not compare nor combine their method with NCD like we do.", "Both their work and ours are, however, extensions of Deng et al. (2020), who proposed to train a discriminator to improve neural language modeling.", "There is also a large body of literature on different ways to combine SMT and NMT by using one to rerank the other, since SMT is generally better at adequacy while NMT is better at fluency.", "For instance, Auli and Gao (2014) use an RNN discriminator to rerank the n-best list produced by a phrase-based SMT system.", "Instead, Ehara (2017) does the opposite, using an SMT discriminator to rerank an n-best list produced by an NMT system.", "Finally, our work is also related to recent attempts at using adversarial training to improve MT (Wu et al., 2018; Zhang et al., 2018).", "Unlike these approaches, our method is much simpler because we do not update the parameters of the MT system generating the hypotheses.", "Moreover, our discriminator is trained to predict the distribution of the desired metric and it is used at decoding time to rerank, while GAN-based MT would only retain the generator.", "Given a source sentence x, an NMT model generates a set of hypotheses U(x) = {u_1, u_2, ..., u_n} in the target language.", "The goal of this work is to learn a reranker that produces higher scores for hypotheses of better quality, as defined in terms of a user-specified metric Δ(u, r) such as BLEU (Papineni et al., 2002a), where quality is measured with respect to a reference r.", "As illustrated in Figure 1, our reranker is a transformer architecture which takes as input the concatenation of the source sentence x and a hypothesis u ∈ U(x).", "The architecture also includes position embeddings and language embeddings, to help the model represent tokens that are shared between the two languages (Conneau and Lample, 2019).", "The final hidden state corresponding to the start-of-sentence token (⟨s⟩) serves as the joint representation for (x, u); let us denote this feature vector as z ∈ R^d.", "Figure 1: Illustration of DrNMT, a pre-trained transformer architecture which takes as input both the source sentence as well as a hypothesis and outputs a scalar score.", "The reranker associates a scalar score o ∈ R to (x, u) by applying a one-hidden-layer neural network with d tanh hidden units to z, as is the default in the design of the classification head of RoBERTa (Liu et al., 2019).", "The parameters of the reranker are denoted by θ and include the parameters of the transformer, all the embeddings, and also the top projection block mapping the feature vector to the scalar score.", "Each hypothesis u_i in the set U(x) is therefore processed independently and yields a score o_i.", "We train the reranker discriminatively, hence the name DrNMT for Discriminative Reranker for NMT, by minimizing the KL-divergence between the target distribution and the model output distribution, D_KL(p_T || p_M) (Cao et al., 2007).", "For each x, the
model output distribution is a softmax over all n hypotheses in the n-best list: p_M(u_i | x; θ) = exp(o_i(u_i | x; θ)) / Σ_{j=1..n} exp(o_j(u_j | x; θ)), (1) where we made explicit that the score o_j is conditioned on the input x and the parameter vector θ.", "Notice that we do not enforce any additional factorization.", "In particular, we do not assume that the score is computed auto-regressively.", "The target distribution is defined as a normalized distribution of the end metric Δ(u_i, r), which we assume to improve as it takes on larger values: p_T(u_i) = exp(Δ(u_i, r)/T) / Σ_{j=1..n} exp(Δ(u_j, r)/T), (2) where T is a temperature that controls the smoothness of the distribution.", "In practice, we apply a min-max normalization on Δ.", "We subtract each value by the minimum in the hypothesis set, and divide the result by the difference between the maximum and the minimum value, so that the best hypothesis scores 1 and the worst 0.", "This helps the optimization as it reduces the variance of the gradients, as pointed out by Edunov et al. (2018).", "The parameters of DrNMT are then learned by minimizing the KL divergence over the training dataset.", "For a given training example, we have: L(θ) = −Σ_{j=1..n} p_T(u_j) log p_M(u_j | x; θ). (3)", "We minimize this loss over the training set by stochastic gradient descent using standard back-propagation of the error, since all terms are differentiable.", "In order to alleviate overfitting, we employ dropout regularization (Srivastava et al., 2014), we pre-train the model (Conneau et al., 2019) and we also perform data augmentation by training on back-translated data (BT) (Sennrich et al., 2016).", "See Section 5.3 for details.", "At test time, generation proceeds by first having the NMT model generate the n-best list, and then applying the reranker to select the best hypothesis.", "Since the score of the forward model is also available, unless otherwise specified we rerank using a weighted combination of both; this is dubbed DrNMT.", "In the experiments we also report results obtained by adding all the other scores from NCD, namely the backward model score and the language model score.", "We denote this variant by DrNMT + NCD.", "Whenever we combine scores from various models, we tune the additional hyper-parameters controlling the weighted combination by random search on the validation set (Yee et al., 2019).", "In this section we describe the datasets, baselines and model details.", "We experiment on four language pairs: German-English (De-En), English-German (En-De), English-Tamil (En-Ta) and Russian-English (Ru-En).", "For training on De-En and En-De, we use NewsCommentary from WMT'19 (Barrault et al., 2019) and NewsCrawl2018 for the parallel dataset and target-side monolingual data, respectively.", "We validate on newstest2014 and newstest2015, and test on newstest2016, 2017, 2018 and 2019.", "For En-Ta, we use all bitext and monolingual data shared by the WMT'20 news translation task for training, and the officially released development and test sets for validation and testing purposes.", "For Ru-En, we use all the parallel data from WMT'19 (Barrault et al., 2019) and NewsCrawl2018 as the monolingual dataset for training, validate on newstest2015 and 2016, and test on newstest2017, 2018 and 2019.", "We follow the steps in Ng et al.
(2019) for data preprocessing, including sentence deduplication, language identification filtering on all bitext and monolingual data (Joulin et al., 2017) and in-domain filtering (Moore and Lewis, 2010) on Tamil CommonCrawl data.", "Table 1 shows the resulting size of each dataset.", "For the base NMT models, we learn 30K byte-pair encoding (BPE) units for De-En and En-De, 20K BPE units for En-Ta and 24K BPE units for Ru-En separately, using the sentencepiece toolkit (Kudo and Richardson, 2018).", "All systems are evaluated using SACREBLEU (Post, 2018).", "We use the Transformer (Vaswani et al., 2017) architecture and train MT models using bitext data only.", "These are the models that generate the n-best list, and they also serve as a lower bound for the performance of DrNMT.", "BT data is generated from beam decoding with beam size equal to 5.", "Since the bitext data of En-Ta originates from seven different sources, we prepend dataset tags to each source sentence to indicate the origin (Kobus et al., 2017).", "We do not prepend any tags on the validation and test sets when decoding, as this choice worked best during cross-validation.", "In general and for each language pair, we tune the model architecture and hyper-parameters on the validation set.", "Table 1 (number of sentences in each dataset used in the experiments after pre-processing; columns De-En / En-De / En-Ta / Ru-En): bitext training 326K / 326K / 621K / 28.9M; validation 5.2K / 5.2K / 2K / 5.8K; test 11K / 11K / 1K / 8K; monolingual 17M / 37M / 27M / 17M.", "In addition to beam decoding, we consider two reranking baselines.", "First, we consider the method recently introduced by Salazar et al. (2019).", "In its simplest formulation, this takes a pre-trained masked language model (MLM) on the target side, iteratively masks one word of the hypothesis at a time, and aggregates the corresponding scores to yield a score for the whole hypothesis.", "Then, this score is combined with the score of the forward model to rerank the n-best list; this is dubbed fw + MLM.", "We also have a version of the MLM which is fine-tuned on our target-side monolingual dataset; we dub this fw + MLM-ft.", "Finally, we consider reranking using noisy channel decoding (NCD; Yee et al.
2019).", "NCD reranks by taking a weighted combination of three scores: the forward model score, the score of a target-side language model (LM), and the score of a backward model.", "A length penalty is then applied on the combined score.", "The weights and the length penalty are tuned on the validation set via random search.", "All LMs are transformers with 16 blocks, 16 attention heads and embedding size 1024.", "They are trained on the target side monolingual data only.", "We use XLM-R Base2 (Conneau et al., 2019), a transformer-based multilingual MLM trained on more than 2.5T of of filtered CommonCrawl data in 100 languages, including En, De, Ta and Ru, as the pre-trained model for DrNMT .", "The same model is also used in the MLM baseline described in 5.2.", "The XLM-R Base model consists of 12 transformer blocks, 12 attention heads, embedding size 768 (270M params) and has a vocabulary size of 250K BPE units.", "As each training sample of XLM-R only contained one single language, we further enhance the model with two language embeddings, 2 https://github.com/pytorch/fairseq/ tree/master/examples/xlmr De-En En-De En-Ta Ru-En BLEU valid test valid test valid test valid test beam (fw) 24.7 27.7 23.1 26.6 8.8 6.0 33.5 34.3 + MLM (Salazar et al., 2019) 25.7 28.7 23.5 27.1 8.8 5.8 33.8 34.8 + MLM-ft (Salazar et al., 2019) 25.8 28.8 23.7 27.5 8.8 5.8 33.9 35.0 + LM 26.3 29.2 24.3 28.5 9.4 6.2 34.6 35.8 NCD (Yee et al., 2019) 27.2 30.9 24.8 29.1 9.7 6.3 35.3 36.8 DrNMT 27.6 31.5 24.7 29.0 9.7 6.4 35.3 37.1 + NCD 27.9 31.8 25.1 29.7 10.0 6.5 35.7 37.3 oracle BLEU 33.3 37.4 31.4 35.9 13.6 9.5 45.3 47.0 Table 2: Validation and test BLEU with beam size 50.", "initialized from random, to indicate the source and target languages for the reranker.", "We perform beam decoding on both bitext and BT data using the baseline MT models to generate n-best lists with 50 hypotheses.", "We combine n-best lists from both bitext and BT as training data for the rerankers for De-En, En-De and En-Ta, and use only BT data for Ru-En.", "We train DrNMT with batch size 512, use Adam (Kingma and Ba, 2015) and early-stop when the validation performance does not improve after 12K parameter updates.", "All hyper-parameters, including learning rate, number of warmup steps, dropout rate, etc., are tuned on the validation set.", "All models are implemented and trained using fairseq (Ott et al., 2019) 3 .", "In this section we report the main findings of our work.", "When optimizing for BLEU as metric, the performance of DrNMT and baselines for De-En, En-De, En-Ta and Ru-En is summarized in Table 2.", "The findings are similar across the four language directions.", "We therefore focus the discussion on the De-En test set results.", "First, we notice that all methods improve over the beam search output with gains ranging from 1 .", "0 to 4 .", "1 BLEU.", "However, there may be still room for improvement as the oracle performance suggests.", "The oracle is computed by selecting the best hypotheses based on BLEU with respect to the human reference.", "Of course, the oracle may be not achievable because of uncertainty in the translation task.", "3 Code for reproducing the results can be found at: https://github.com/pytorch/fairseq/ tree/master/examples/discriminative_reranking_nmt Second, Salazar et al. 
(2019)'s method, particularly the version fine-tuned on the in-domain training dataset, improves upon beam by 1.1 BLEU points.", "However, the improvement over beam is not as large as with NCD, which improves upon beam by 3.2 BLEU points, suggesting that among the non-discriminative reranking methods, NCD performs best.", "Third, DrNMT performs on par with (En-Ta, En-De and Ru-En) or better than (De-En) NCD, showing that discriminative reranking can be very competitive.", "Note that the reranker requires only one additional forward pass through the hypotheses generated by beam, while NCD requires two additional forward passes (one for the LM and one for the backward MT model).", "Therefore, our reranker works at least as well as NCD while requiring roughly half of the compute.", "Fourth, the discriminative reranker and NCD are complementary to each other, since combining both achieves the best performance overall across the language directions, with gains between 0.9 BLEU (De-En) and 0.2 (En-Ta) compared to NCD, and an overall gain between 4.1 BLEU (De-En) and 0.5 (En-Ta) compared to the beam baseline.", "Fifth, the gain brought by discriminative reranking can be better appreciated by comparing fw + LM and DrNMT, as the major difference between the two approaches is the objective function used for training them (generative language modeling instead of prediction of the distribution of BLEU scores).", "We can see that in all cases, discriminative reranking yields better translations, with gains between 0.2 and 2.3 BLEU points depending on the language direction.", "Finally, we notice that En-Ta is a difficult language pair, in which the baseline NMT is weak and none of the reranking approaches work nearly as well as in the other language directions.", "The difference between validation and test BLEU scores also suggests a certain degree of overfitting to the validation set.", "Despite this, our reranker still yields the largest improvement over beam.", "Appendix B shows similar trends when test performance is measured in terms of translation error rate (TER) (Snover et al., 2006), showing that DrNMT is not particularly overfitting to the training metric.", "Human evaluation: We randomly sample 750 sentences from the De-En test sets and collect human ratings.", "We perform A/B testing, where a rater can see the source sentence together with translated sentences from two systems.", "We conduct two rounds of human evaluation by comparing the proposed DrNMT + NCD vs. beam, and DrNMT + NCD vs.
NCD.", "For each sentence, we collect three ratings (between 0 to 100) and average the scores, treating sentences with a score difference less than 5 as equally good.", "Out of the 750 sentences, our proposed method generates better translation than beam on 149 sentences and is worse on 82 sentences, and it performs better than NCD on 123 sentences and worse on 108 sentences, corroborating the gains observed when measuring with BLEU.", "Next we show that DrNMT works with other user-specified metrics, study how performance varies with the number of hypotheses and perform several ablation studies to better understand its critical components.", "In order to validate the generality of DrNMT , we consider as metric the opposite of TER, so that larger values indicate better translation quality.", "Table 3 shows validation and test performance in terms of both BLEU and TER when optimizing for either one of the two metrics.", "While the two metrics are correlated, the best results are achieved when optimizing for the metric used at test time.", "We examine the effect of training the reranker with different sizes of the n-best list, U ( x ) .", "Even though we fix the n-best list size at training time, we can apply the reranker on n-best lists of different sizes at test time.", "Figure 2 shows the performance of DrNMT on De-En validation sets from four rerankers trained with 5, 10, 20 and 50 hypotheses, respectively.", "As the size of the n-best list during test time increases, the performance of all rerankers and NCD improve.", "On the other hand, the performance of beam decoding starts to saturate early at beam size 10.", "A reranker trained with 50 hypotheses gives a 1.4 BLEU improvement over beam decoding when beam size is only 5 at test time, and the improvement increases to 3.4 BLEU as we increase the beam size to 200 at test time.", "DrNMT consistently perform better than or equally well as NCD in all training and testing scenarios.", "Interestingly, a reranker trained with more hypotheses performs better than one trained with fewer hypotheses, regardless of the beam size used at test time.", "For instance, when the beam size is 20 at test time, the reranker trained with beam 50 improves over beam by 2.3 BLEU points, while the one which was trained with 20 like at test time, improves by 2.2 BLEU points.", "To our surprise, a reranker trained with only 5 hypotheses can still yield a 3.2 BLEU gain compared with beam decoding when used to rerank 200 hypotheses during test time, indicating that the reranker suffers little from the mismatch between training and testing conditions.", "As a result, depending on available compute resources, one can decide to set the number of hypotheses to the largest value possible to get better test time performance with larger n-best lists, while being robust to the particular choice used at training time.", "We report an ablation study by probing all major design choices made.", "We train DrNMT by optimizing BLEU and evaluate it on the validation set of the De-En task using 50 hypotheses both at training and test time.", "Table 4 summarizes all the results.", "Pre-training: We investigate the importance of pre-training by comparing with a reranker of the same size initialized with random weights.", "Table 4 shows that a randomly initialized reranker performs significantly less well, with a decrease of 0.8 BLEU.", "In addition to lower performance, a randomly initialized reranker also trains more slowly, by requiring 1 .", "6 more weight updates compared to the 
pre-trained reranker to converge.", "This corroborates our choice to pre-train, as the reranking task is fairly related to the pre-training task and we lack sufficient labeled data to train such a large model from scratch.", "Notice that our pre-trained reranker trains for at most two passes over the data before starting to overfit to its training set.", "Source sentence: When comparing fw + LM against DrNMT to assess the impact of training discriminatively, we did not take into account a confounding factor: the LM does not attend over the source sentence.", "Indeed, Salazar et al. (2019) score hypotheses without taking into account the source sentence.", "What is the gain brought by also considering the source sentence?", "To answer this question we compare our reranker with a reranker that takes as input only the hypotheses.", "As shown in Table 4, including the source sentences achieves a small gain of 0.2 BLEU.", "Normalization: We apply min-max normalization and set T = 0.5 when computing the target distribution in the training objective, so that for every source sentence, the range of the BLEU scores of its hypotheses is between 0 and 2.", "This choice yields a 0.4 BLEU improvement compared to a reranker trained with the raw BLEU scores.", "Training data: So far we've been training the reranker with both bitext and BT data.", "In Table 4, we see that training the reranker with only bitext data deteriorates the model's performance by 2 BLEU points.", "The model starts overfitting after 15 passes over the small bitext (around 9,000 parameter updates).", "Incorporating the BT data helps alleviate this issue.", "The model achieves the best validation performance after 1.9 passes over the combination of bitext and BT data (around 63,000 parameter updates).", "Model size: We explore building the reranker using only the first few layers of the XLM-R Base model.", "Since beam hypotheses often differ only locally, on isolated phrases, one may wonder whether more local features, such as those produced by a shallower reranker, may work better.", "Moreover, reducing the model capacity may help prevent overfitting.", "Compared with using either only three or six transformer blocks, Table 4 shows that deeper and bigger models work better, despite being more prone to overfitting and despite capturing more global information about their input.", "We conclude our empirical evaluation by investigating how reranking works on top of baseline NMT models trained with back-translation, and by reporting two variations of model architectures.", "As before, we report results on the validation set of the De-En task with an n-best list of size 50, using BLEU as the metric.", "How does reranking work when applied on the n-best list produced by a baseline NMT model trained with back-translation?", "As shown in Table 2, the beam baseline on validation was at 24.7 BLEU, while if we train the NMT model by adding back-translated data, BLEU increases to 31.6 (Table 5).", "In this case, we train the reranker using hypotheses generated by the more powerful NMT model trained with back-translated data.", "From Table 5, we can see that DrNMT gives a 1.5 BLEU improvement over the beam decoding baseline, and combining NCD and the reranker gives an additional gain of 0.5 BLEU, which is less than what we reported in Table 2 but still confirms the overall finding of the discriminative reranker and NCD performing similarly while being complementary to each other.", "Causal vs.
bidirectional: As the complete hypothesis is available during reranking, the architecture of our reranker is bidirectional, as it conditions on the whole sentence.", "This contrasts with how the baseline NMT model generates hypotheses and how it scores them with beam search, which leverages an auto-regressive decomposition.", "Here we explore the importance of joint modeling and consider an alternative reranker which consists of an encoder and a causal decoder, and which is therefore initialized from the base NMT model generating the n-best list.", "Given a source sentence and a hypothesis as input, the output of the decoder is a T × d matrix (notice that hidden states are causal), where T is the number of tokens of the hypothesis, and d is the hidden dimension.", "We average the output across positions to obtain a d-dimensional representation and apply the same one-hidden-layer neural network to obtain a reranking score.", "Table 6 shows that our bidirectional architecture outperforms the causal architecture by 0.8 BLEU.", "Table 6 (effect of a causal vs. non-causal reranker; validation BLEU): encoder + causal decoder 26.8; bi-directional (proposed) 27.6.", "Set reranker: While our training objective considers the full set of hypotheses of each source sentence, the reranker scores each pair (x, u_i) in isolation; it never compares hypotheses directly.", "We therefore explore an architecture that computes cross-hypothesis features.", "In the original reranker architecture, the model produces a d-dimensional representation for each (x, u_i).", "We add another transformer block that computes self-attention across the set of n representations for {(x, u) | u ∈ U(x)}.", "We then apply the one-hidden-layer projection block to map each d-dimensional vector to a single score as before, yielding n scores for reranking.", "This design enables the model to have set-level information during reranking, and thus the scoring has to be performed on the full set at once.", "Table 7 shows that these two model variants perform the same, suggesting that set-level representations may need to be captured at a lower layer of the transformer.", "We leave this avenue of exploration for future work.", "Reranking is effective for both SMT and NMT.", "Inspired by work done almost two decades ago (Shen et al., 2004; Och, 2003), we studied discriminative reranking for NMT and found that it performs at least as well as the strongest generative reranking method we are aware of, namely noisy channel decoding (NCD) (Yee et al., 2019), as long as care is taken to alleviate overfitting.", "There is a subtle trade-off between improvements stemming from optimizing the end metric and addressing exposure bias on the one hand, and poor generalization and sample inefficiency of discriminative training on the other hand.", "In this study we regularize the reranker by using dropout, by pre-training on large corpora and by performing data augmentation.", "Empirically, we found that NCD and our discriminative reranker are complementary to each other, yielding sizeable improvements over each other and the beam baseline.", "Our reranker is computationally less demanding than NCD, since it consists of a single model while NCD requires scoring using two additional models.", "Our reranker is also robust to the choice of the size of the n-best list and other hyper-parameter settings.", "In the future we plan to investigate better ways to alleviate sample inefficiency, as well as to design more effective architectures to score at the set level." ]
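Equations (1)-(3), together with the min-max normalization and the temperature T = 0.5 discussed in the ablations, translate into only a few lines of code. The following is a minimal PyTorch sketch under stated assumptions (batched tensors of reranker scores and per-hypothesis metric values), not the released fairseq implementation:

```python
import torch
import torch.nn.functional as F

def drnmt_loss(scores, metric, T=0.5):
    """KL/cross-entropy reranking loss of Eqs. (1)-(3).

    scores: (batch, n) reranker outputs o_i for each source's n-best list.
    metric: (batch, n) sentence-level metric values Delta(u_i, r), e.g. BLEU.
    """
    # Min-max normalize Delta per n-best list so the best hypothesis gets 1
    # and the worst 0, then apply the temperature (Eq. 2).
    mn = metric.min(dim=1, keepdim=True).values
    mx = metric.max(dim=1, keepdim=True).values
    norm = (metric - mn) / (mx - mn).clamp_min(1e-8)
    p_target = F.softmax(norm / T, dim=1)

    # Model distribution over the n-best list (Eq. 1) and the resulting
    # cross-entropy, which equals KL(p_T || p_M) up to a constant (Eq. 3).
    log_p_model = F.log_softmax(scores, dim=1)
    return -(p_target * log_p_model).sum(dim=1).mean()
```

With T = 0.5 the normalized metric values span [0, 2] before the softmax, matching the range mentioned in the normalization ablation.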
[ "abstain", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "method", "method", "other", "method", "abstain", "other", "other", "other", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "result", "method", "abstain", "objective" ]
[ "Crowdsourcing is widely used to create data for common natural language understanding tasks.", "Despite the importance of these datasets for measuring and refining model understanding of language, there has been little focus on the crowdsourcing methods used for collecting the datasets.", "In this paper, we compare the efficacy of interventions that have been proposed in prior work as ways of improving data quality.", "We use multiple-choice question answering as a testbed and run a randomized trial by assigning crowdworkers to write questions under one of four different data collection protocols.", "We find that asking workers to write explanations for their examples is an ineffective stand-alone strategy for boosting NLU example difficulty.", "However, we find that training crowdworkers, and then using an iterative process of collecting data, sending feedback, and qualifying workers based on expert judgments is an effective means of collecting challenging data.", "But using crowdsourced, instead of expert judgments, to qualify workers and send feedback does not prove to be effective.", "We observe that the data from the iterative protocol with expert assessments is more challenging by several measures.", "Notably, the human model gap on the unanimous agreement portion of this data is, on average, twice as large as the gap for the baseline protocol data.", "Crowdsourcing is a scalable method for constructing examples for many natural language processing tasks.", "Platforms like Amazon's Mechanical Turk give researchers access to a large, diverse pool of people to employ (Howe, 2006; Snow et al., 2008; Callison-Burch, 2009).", "Given the ease of data collection with crowdsourcing, it has been frequently Equal contribution.", "used for collecting datasets for natural language understanding (NLU) tasks like question answering (Mihaylov et al., 2018), reading comprehension (Rajpurkar et al., 2016; Huang et al., 2019), natural language inference (Dagan et al., 2005; Bowman et al., 2015; Williams et al., 2018; Nie et al., 2020a), and commonsense reasoning (Talmor et al., 2019).", "There has been substantial research devoted to studying crowdsourcing methods, especially in the human-computer interaction literature (Kittur et al., 2008, 2011; Bernstein et al., 2012).", "However, most prior research investigates methods for collecting accurate annotations for existing data, for example labeling objects in images or labeling the sentiment of sentences (Hsueh et al., 2009; Liu et al., 2019a; Sun et al., 2020).", "There are some small-scale studies that use writing tasks, like writing product reviews, to compare crowdsourcing methodologies (Dow et al., 2012).", "However, we are unaware of any prior work that directly evaluates the effects of crowdsourcing protocol design choices on the quality of the resulting data for NLU tasks.", "Decisions around methodology and task design used to collect datasets dictate the quality of the data collected.", "As models become stronger and are able to solve existing NLU datasets, we have an increasing need for difficult, high-quality datasets that are still reliably solvable by humans.", "As a result, our thresholds for what makes a dataset acceptable become stricter: The data needs to be challenging, have high human-agreement, and avoid serious annotation artifacts (Gururangan et al., 2018).", "To make collecting such large-scale datasets feasible, making well-informed crowdsourcing design decisions becomes crucial.", "Existing NLP datasets have been 
crowdsourced with varying methods.", "The prevailing standard is to experiment with task design during pilots that are run before the main data collection (Vaughan, 2018).", "This piloting process is essential to design-Figure 1: The initial pool of crowdworkers are randomly assigned to one of four protocols and the datasets are collected in parallel.", "ing good crowdsourcing tasks with clear instructions, but the findings from these pilots are rarely discussed in published corpus papers, and the pilots are usually not large enough or systematic enough to yield definitive conclusions.", "In this paper, we use a randomized trial to directly compare crowdsourcing methodologies to establish general best practices for NLU data collection.", "We compare the efficacy of three types of crowdsourcing interventions that have been used in previous work.", "We use multiple-choice question answering in English as a testbed for our study and collect four small datasets in parallel including a baseline dataset with no interventions.", "We choose QA as our test-bed over the similarly popular testbed task of natural language inference (NLI) because of our focus on very high human-agreement examples which calls for minimizing label ambiguity.", "In multiple-choice QA, the correct label is the answer choice that is most likely to be correct , even if there is some ambiguity in whether that choice is genuinely true .", "In NLI however, if more than one label is plausible, then resolving the disagreement by ranking labels may not be possible (Pavlick and Kwiatkowski, 2019).", "In the trial, crowdworkers are randomly assigned to one of four protocols: BASELINE , JUSTIFICATION , CROWD , or EXPERT .", "1 In BASELINE , crowdworkers are simply asked to write question-answering examples.", "In JUSTIFICATION they are tasked with also writing explanations for their examples, prompting self-assessment.", "For the EXPERT and CROWD protocols, we train work-1 All the data is available at https://github.com/nyu-mll/crowdsourcing-protocol-comparison.", "ers using an iterative process of collecting data, sending feedback, and qualifying high performing workers to subsequent rounds.", "We use expert-curated evaluations in EXPERT , and crowdsourced evaluations in CROWD for generating feedback and assigning qualifications.", "We use a a standard of high pay and strict qualifications for all protocols.", "We also validate the data to discard ambiguous and unanswerable examples.", "The experimental pipeline is sketched in Figure 1. 
"To quantify the dataset difficulty, we collect additional label annotations to establish human performance on each dataset and compare these to model performance.", "We also evaluate the difficulty of the datasets for typical machine learning models using item response theory (IRT; Baker and Kim, 1993; Lalor et al., 2016).", "We find that the EXPERT protocol dataset is the most challenging.", "The human-model gap with RoBERTa Large (Liu et al., 2019b) on the unanimous agreement portion of EXPERT is 13.9 percentage points, compared to 7.0 on the BASELINE protocol.", "The gap with UnifiedQA (Khashabi et al., 2020) is 6.7 on EXPERT, compared to 2.9 on BASELINE.", "However, the CROWD evaluation data is far less challenging than EXPERT, suggesting that expert evaluations are more reliable than crowdsourced evaluations for sending feedback and assigning qualifications.", "We also find that the JUSTIFICATION intervention is ineffective as a stand-alone method for increasing NLU data quality.", "A substantial proportion of the explanations submitted are duplicates, reused for multiple examples, or give trivial reasoning that is not specific to the example.", "Lastly, to evaluate the datasets for serious annotation artifacts, we test the guessability of answers by omitting the questions from the model input.", "This partial-input baseline achieves the lowest accuracy on EXPERT, showing that the interventions used to successfully boost example difficulty may also reduce annotation artifacts.", "Creating NLU Corpora: Existing NLU datasets have been collected using a multitude of methods, ranging from expert-designed, to crowdsourced, to automatically scraped.", "The widely used Winograd schema dataset by Levesque et al. (2012) is constructed manually by specialists and has 273 examples.", "Larger NLU datasets, more appropriate for training neural networks, are often crowdsourced, though the crowdsourcing methods used vary widely.", "Popular datasets, such as SQuAD (Rajpurkar et al., 2016) for question answering and SNLI (Bowman et al., 2015) for natural language inference, are collected by providing crowdworkers with a context passage and instructing workers to write an example given the context.", "Rogers et al. (2020) crowdsource QuAIL, a QA dataset, by using a more constrained data collection protocol where they require workers to write nine specific types of question for each passage.", "QuAC (Choi et al., 2018) is crowdsourced by pairing crowdworkers, providing one worker with a Wikipedia article, and instructing the second worker to ask questions about the hidden article.", "Recently, there has been a flurry of corpora collected using adversarial models in the crowdsourcing pipeline.", "Dua et al. (2019), Nie et al. (2020a), and Bartolo et al.
"However, such datasets can be biased towards quirks of the model used during data collection (Zellers et al., 2019; Gardner et al., 2020).", "Crowdsourcing Methods While crowdsourcing makes it easy to collect large datasets quickly, there are some clear pitfalls: crowdworkers are generally less knowledgeable than field experts about the requirements the data needs to meet, crowdwork can be monotonous, resulting in repetitive and noisy data, and crowdsourcing platforms can create a market for lemons, where fast work is incentivized over careful, creative work because of poor-quality requesters (Akerlof, 1978; Chandler et al., 2013).", "Daniel et al. (2018) give a broad overview of the variables at play when trying to crowdsource high-quality data, discussing many strategies available to requesters.", "Motivated by the use of self-assessment in teaching (Boud, 1995), Dow et al. (2012) study the effectiveness of self-assessment and external assessment when collecting data for product reviews.", "They find that both strategies are effective for improving the quality of submitted work.", "However, Gadiraju et al. (2017) find that crowdworker self-assessment can be unreliable, since poor-performing workers overestimate their ability.", "Drapeau et al. (2016) test a justify-reconsider strategy: crowdworkers justify their annotations in a relation extraction task, are shown a justification written by a different crowdworker or an expert, and are asked to reconsider their annotation.", "They find that this method significantly boosts the accuracy of annotations.", "Another commonly used strategy when crowdsourcing NLP datasets is to only qualify workers who pass an initial quiz or perform well in preliminary crowdsourcing batches (Wang et al., 2013; Cotterell and Callison-Burch, 2014; Ning et al., 2020; Shapira et al., 2020; Roit et al., 2020).", "In addition to using careful qualifications, Roit et al. (2020) send workers feedback detailing errors they made in their QA-SRL annotation.", "Writing such feedback is labor-intensive and can become untenable as the number of workers grows.", "Dow et al. (2011) design a framework of promoting crowdworkers into shepherding roles to crowdsource such feedback.", "We compare expert and crowdsourced feedback in our EXPERT and CROWD protocols.", "We run our study on Amazon Mechanical Turk.", "At launch, crowdworkers are randomly assigned to one of four data collection protocols, illustrated in Figure 1.", "To be included in the initial pool, workers need to have an approval rating of 98% or higher, have at least 1,000 approved tasks, and be located in the US, the UK, or Canada.", "Writing Task This task is used for collecting question-answer pairs in the crowdsourcing pipeline for all four protocols.", "Crowdworkers assigned to the BASELINE protocol are presented with only this task.", "In this writing task, we provide a context passage drawn from the Open American National Corpus (Ide and Suderman, 2006); following MultiNLI (Williams et al., 2018), we select the ten OANC genres that are accessible to non-experts: face-to-face, telephone, 911, travel, letters, slate, verbatim, government, OUP, and fiction.", "Inspired by Hu et al. (2020), we ask workers to write two questions per passage with four answer choices each.",
"We direct workers to ensure that the questions are answerable given the passage and that there is only one correct answer for each question.", "We instruct them to limit word overlap between their answer choices and the passage, and to write distracting answer choices that will seem plausibly correct to someone who hasn't carefully read the passage.", "To clarify these criteria, we provide examples of good and bad questions.", "Workers assigned to the JUSTIFICATION protocol are given the writing task described above (Section 3.1) and are also tasked with writing a 1-3 sentence explanation for each question.", "They are asked to explain the reasoning needed to select the correct answer choice, mentioning what they think makes the question they wrote challenging.", "Tutorial Workers assigned to the CROWD and EXPERT protocols are directed to a tutorial upon assignment.", "The tutorial consists of two quizzes and writing tasks.", "The quizzes have four steps.", "In each step, workers are shown a passage and two question candidates, and are asked to select which candidate (i) is less ambiguous, (ii) is more difficult, (iii) is more creative, or (iv) has better distracting answer choices.", "These concepts are informally described in the writing task instructions, but the tutorial makes the rubric explicit, giving crowdworkers a clearer understanding of our desiderata.", "We give workers immediate feedback on their performance during the first quiz and not the second, so that we can use it for evaluation.", "Lastly, for the tutorial writing tasks, we provide two passages and ask workers to write two questions (with answer choices) for each passage.", "These questions are graded by three experts (the expert annotators are authors of this paper and Dhara Mungra) using a rubric with the same metrics described in the quiz, shown in Figure 2.",

Figure 2: The grading rubric used to evaluate examples submitted during the intermediate writing rounds in the EXPERT and CROWD protocols.
1. Is the question answerable and unambiguous? (Yes / No / Yes, but the label is wrong)
2. How closely do you think someone would need to read the passage to correctly answer the question? (Wouldn't need to read it / Quickly skim a few words or one sentence / Quickly skim a few sentences / Read the whole passage / May need to read the passage more than once)
3. How creative do you think the question is? (Not creative / A little creative / Fairly creative / Very creative)
4. Does the example have distracting answer choices? (Yes / No)

"We give the qualification to continue onto the writing tasks to the top 60% of crowdworkers who complete the tutorial.", "We only qualify the workers who wrote answerable, unambiguous questions, and we qualify enough workers to ensure that we have a large pool of people in our final writing round.", "Intermediate Writing Rounds After passing the tutorial, workers go through three small rounds of writing tasks.", "At the end of each round, we send them feedback and qualify a smaller pool of workers for the next round.", "We only collect 400-500 examples in these intermediate rounds.", "At the end of each round, we evaluate the submitted work using the same rubric defined in the tutorial.", "In the EXPERT protocol, three experts grade worker submissions, evaluating at least four questions per worker.", "The evaluation annotations are averaged, and workers are qualified for the next round based on their performance.", "The qualifying workers are sent a message with feedback on their performance and a bonus for qualifying.", "Appendix A gives details on the feedback sent.", "Evaluating the examples in each round is labor-intensive and challenging to scale (avg. 30 expert-min. per worker).", "In the CROWD protocol we experiment with crowdsourcing these evaluations.", "After the first intermediate writing round in CROWD, experts evaluate the submitted work.", "The evaluations are used to qualify workers for the second writing round and to promote the top 20% of workers into a feedback role.", "After intermediate writing rounds 2 and 3, the promoted workers are tasked with evaluating all the examples (no one evaluates their own work).", "We collect five evaluations per example and use the averaged scores to send feedback and qualify workers for the subsequent round.", "For both the CROWD and EXPERT protocols, the top 80% of workers are requalified at the end of each round, as sketched in the code below.", "Of the 150 workers who complete the tutorial, 20% qualify for the final writing round.", "Our qualification rate is partly dictated by a desire to have a large enough pool of people in the final writing task, to ensure that no dataset is skewed by only a few people (Geva et al., 2019).", "Cost We aim to ensure that our pay rate is at least US $15/hr for all tasks.", "The total cost per question, excluding platform fees, is $1.75 for the BASELINE protocol and $2 for JUSTIFICATION.", "If we discard all the data collected in the intermediate writing rounds, the cost is $3.76 per question for EXPERT (the discarded data collected during training was annotated by experts; if we account for the cost of the expert time used, estimated from the approximate hourly cost of paying a US PhD student, including benefits and tuition, the cost for EXPERT increases to $4.23/question), and $5 for CROWD.", "The average pay given during training to workers that qualify for the final writing task in EXPERT is about $120/worker (with an estimated 6-7 hours spent in training).", "In CROWD, there is an additional cost of $85/worker for collecting crowdsourced evaluations.", "The cost per example after training is $1.75 per question for both protocols, and the total training cost does not scale linearly with dataset size, as one may not need twice as many writers for double the dataset size.", "More details on our payment and incentive structure can be found in Appendix B.",
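"As a concrete reference for the requalification step referred to above, a minimal sketch (the data layout and scoring scale are illustrative assumptions, not the released pipeline):"

```python
from statistics import mean

def requalify(evaluations: dict[str, list[float]], keep_fraction: float = 0.8) -> set[str]:
    """Average rubric scores per worker and requalify the top fraction.

    `evaluations` maps a worker ID to the rubric scores (already averaged
    over graders) that the worker's submitted questions received this round.
    """
    avg_scores = {worker: mean(scores) for worker, scores in evaluations.items()}
    ranked = sorted(avg_scores, key=avg_scores.get, reverse=True)
    n_keep = max(1, int(len(ranked) * keep_fraction))
    return set(ranked[:n_keep])

# Example round: three workers, rubric scores on a 1-5 scale.
round_scores = {"w1": [4.3, 3.8], "w2": [2.1, 2.5], "w3": [3.9, 4.1]}
qualified = requalify(round_scores)  # top 80% of three workers -> {"w1", "w3"}
```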
"Data Validation We collect label annotations by asking crowdworkers to pick the correct answer choice for a question, given the context passage.", "In addition to the answer choices written by the writer, we add an Invalid question / No answer option.", "We validate the data from each protocol.", "For CROWD and EXPERT, we only validate the data from the final large writing rounds.", "Data from all four protocols is shuffled, and we run a single validation task, collecting either two or ten annotations per example.", "We use the same minimum qualifications as the writing task (Section 3), and require that workers first pass a qualification task.", "The qualification task consists of 5 multiple-choice QA examples that have been annotated by experts.", "People who answer at least 3 out of 5 questions correctly receive the qualification to work on the validation tasks.", "Of the 200 crowdworkers who complete the qualification task, 60% qualify for the main validation task.", "Following Ho et al. (2015), to incentivize higher-quality annotations, we include expert-labeled examples in the validation task, constituting 10% of all examples.", "If a worker's annotation accuracy on these labeled examples falls below 50%, we remove their qualification (7 workers are disqualified through this process); conversely, workers who label these examples correctly receive a bonus.",
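"A sketch of this catch-trial check, assuming per-worker lists of (worker label, expert label) pairs; the thresholds follow the numbers in the text, but the exact accounting window is an assumption:"

```python
def check_validator(catch_pairs: list[tuple[str, str]], floor: float = 0.5) -> str:
    """Monitor a validator's accuracy on embedded expert-labeled catch trials.

    `catch_pairs` holds (worker_label, expert_label) for the ~10% of
    validation items that carry a known gold label. Falling below the floor
    revokes the qualification; sustained accuracy earns the bonus.
    """
    correct = sum(label == gold for label, gold in catch_pairs)
    accuracy = correct / len(catch_pairs)
    if accuracy < floor:
        return "disqualify"
    return "bonus" if accuracy >= 0.75 else "ok"  # 3-out-of-4 bonus criterion
```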
"10-Way Validation Pavlick and Kwiatkowski (2019) show that annotation disagreement may not be noise, but could be a signal of true ambiguity.", "Nie et al. (2020b) recommend using high-human-agreement data for model evaluation to avoid such ambiguity.", "To have enough annotations to filter the data for high human agreement and to estimate human performance, we collect ten annotations for 500 randomly sampled examples per protocol.", "Cost We pay $2.50 for the qualification task and $0.75 per pair of questions for the main validation task.", "For every 3 out of 4 expert-labeled examples a worker annotates correctly, we send a $0.50 bonus.", "We collect around 1,500 question-answer pairs from each protocol design: 1,558 for BASELINE, 1,534 for JUSTIFICATION, 1,600 for CROWD, and 1,580 for EXPERT.", "We use the validation annotations to determine the gold labels and to filter out examples: if there is no majority agreement on the answer choice, or if the majority selects invalid question, the example is discarded (about 5% of examples).", "For the 2-way annotated data, we take a majority vote over the two annotations plus the original writer's label.", "For the 10-way annotated data, we sample four annotations and take a majority vote over those four plus the writer's vote, reserving the remainder to compute an independent estimate of human performance.", "For the 10-way annotated subsets of the data, we take a majority vote over the six annotations that are not used when determining the gold answer, and compare the result to the gold answer to estimate human performance.", "Table 1 shows the result for each dataset.", "The EXPERT and CROWD datasets have lower human performance numbers than BASELINE and JUSTIFICATION.", "This is also mirrored in the inter-annotator agreement for validation, where Krippendorff's alpha (Krippendorff, 1980) is 0.67 and 0.71 for EXPERT and CROWD, compared to 0.81 and 0.77 for BASELINE and JUSTIFICATION (Table 3 in Appendix C).", "The lower agreement may reflect the fact that, while these examples are still clearly human-solvable, they are more challenging than those in BASELINE and JUSTIFICATION.", "As a result, annotators are prone to higher error rates, motivating us to look at the higher-agreement portions of the data to determine true dataset difficulty.", "And while the agreement rate is lower for EXPERT and CROWD, more than 80% of the data still has high human agreement on the gold label, where at least 4 out of 5 annotators agree on the label.", "The remaining low-agreement examples may have more ambiguous questions, and we follow Nie et al.'s (2020b) recommendation and focus our analysis on the high-agreement portions of the dataset.",
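"A minimal sketch of this label aggregation, assuming the 10-way annotation scheme described above; tie-breaking details in the released data may differ:"

```python
from collections import Counter

def gold_label(writer_label: str, annotations: list[str], n_vote: int = 4):
    """Aggregate validation annotations into a gold label.

    For 10-way annotated examples, the first `n_vote` annotations plus the
    writer's label determine the gold answer; the remaining annotations are
    reserved for an independent human-performance estimate. Returns None
    when there is no majority or the majority flags the question as invalid.
    """
    votes = Counter(annotations[:n_vote] + [writer_label])
    label, count = votes.most_common(1)[0]
    if count <= (n_vote + 1) // 2 or label == "INVALID":
        return None  # discard the example
    return label

def human_estimate(gold: str, held_out: list[str]) -> bool:
    """Majority vote over the held-out annotations, compared to the gold label."""
    return Counter(held_out).most_common(1)[0][0] == gold
```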
us.", "headroom is left in the datasets.", "Instead, we look at the difference between the estimated human performance and model performance.", "HumanModel gap The trends in the human model gap on the 10-way annotated sample are inconsistent across models.", "For a more conclusive analysis, we focus on the higher-agreement portions of the data where label ambiguity is minimal.", "On the high agreement section of the datasets, both models' performance is weakest on EXPERT .", "RoBERTa LARGE shows the second largest human model gap on CROWD , however for UnifiedQA JUSTIFICATION is the next hardest dataset.", "This discrepancy between the two types of iterative feedback protocols is even more apparent in the unanimous agreement portion of the data.", "On the unanimous agreement examples, both models show the lowest performance on EXPERT but UnifiedQA achieves near perfect performance on CROWD .", "This suggests that while the CROWD protocol used nearly the same crowdsourcing pipeline as EXPERT , the evaluations done by experts are a much more reliable metric for selecting workers to qualify and for generating feedback, at the cost of greater difficulty with scaling to larger worker pools.", "This is confirmed by inter-annotator agreement: Expert agreement on the rubric-based evaluations has a Krippendorf's of 0.65, while agreement between crowdworker evaluations is 0.33.", "Self-Justification Model performance on the unanimous agreement examples of JUSTIFICATION is comparable to, or better than, performance on BASELINE .", "To estimate the quality of justifications, we manually annotate a random sample of 100 justifications.", "About 48% (95% CI: [38%, 58%]) are duplicates or near-duplicates of other justifications, and of this group, nearly all are trivial (e.g. Good and deep knowledge is needed to answer this question ) and over half are in non-fluent English (e.g. To read the complete passage to understand the question to answer. ).", "On the other hand, non-duplicate justifications are generally of much higher quality, mentioning distractors, giving specific reasoning, and rewording phrases from the passage (e.g. Only #1 is discussed in that last paragraph. The rest of the parts are from the book, not the essay. Also the answer is paraphrased from zero-sum to one's gain is another's loss ).", "While we find that JUSTIFICATION does not work as a stand-alone strategy, we cannot conclude that self-justification would Partial input P + A Q + A ABASELINE 69.9 (4.7) 41.9 (2.9) 34.9 (2.4) JUSTIFICATION 57.9 (1.3) 38.3 (2.2) 33.9 (6.3) CROWD 57.7 (3.1) 43.9 (2.0) 35.2 (1.9) EXPERT 52.0 (1.5) 42.8 (1.8) 35.7 (1.4) Table 2: Accuracy (std.) of partial input baselines.", "be equally ineffective if combined with more aggressive screening to exclude crowdworkers who author trivial or duplicate justifications.", "Gadiraju et al. 
"Cross-Protocol Transfer Since the datasets from some protocols are clearly more challenging than others, this prompts the question: are these datasets also better for training models?", "To test cross-protocol transfer, we fine-tune RoBERTa LARGE on one dataset and evaluate on the other three.", "We find that model accuracy is not substantively better from fine-tuning on any one dataset (Table 5, Appendix E).", "The benefit of EXPERT being a more challenging evaluation dataset does not clearly translate to training.", "However, these datasets may be too small to offer clear and distinguishable value in this setting.", "Annotation Artifacts To test for undesirable artifacts, we evaluate partial-input baselines (Kaushik and Lipton, 2018; Poliak et al., 2018).", "We take a RoBERTa LARGE model, pretrained on RACE, and fine-tune it using five-fold cross-validation, providing only part of the example input.", "We evaluate three baselines: providing the model with the passage and answer choices only (P + A), the question and answer choices only (Q + A), and the answer choices alone (A).", "Results are shown in Table 2.",

Table 2: Accuracy (std.) of partial-input baselines.

                P + A        Q + A        A
BASELINE        69.9 (4.7)   41.9 (2.9)   34.9 (2.4)
JUSTIFICATION   57.9 (1.3)   38.3 (2.2)   33.9 (6.3)
CROWD           57.7 (3.1)   43.9 (2.0)   35.2 (1.9)
EXPERT          52.0 (1.5)   42.8 (1.8)   35.7 (1.4)

"The passage+answer baseline has significantly lower performance on the EXPERT dataset in comparison to the others.", "This indicates that the iterative feedback and qualification method using expert assessments not only increases overall example difficulty but may also lower the prevalence of simple artifacts that can reveal the answer.", "Performance of the question+answer and answer-only baselines is comparably low on all four datasets.", "Answer Length We find that the difficulty of the datasets is correlated with average answer length (Figure 3: distribution of answer lengths for correct and incorrect answers across the four protocols).", "The hardest dataset, EXPERT, also has the longest answer options, with an average of 9.1 words, compared to 3.7 for BASELINE, 4.1 for JUSTIFICATION, and 6.9 for CROWD.", "This reflects the tendency of the 1- and 2-word answers common in the BASELINE and JUSTIFICATION datasets to be extracted directly from the passage, while sentence-length answers, more common in EXPERT and CROWD, tend to be more abstractive.", "Figure 3 also shows that incorrect answer options tend to be shorter than correct ones.", "This pattern holds across all datasets, suggesting a weak surface cue that models could exploit, as sketched below.", "Using an answer-length-based heuristic alone, accuracy is similar to the answer-only model baseline: 34.2% for BASELINE, 31.7% for JUSTIFICATION, 31.5% for CROWD, and 34.3% for EXPERT.",
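"A minimal sketch of this length heuristic:"

```python
def length_heuristic(answer_options: list[str]) -> int:
    """Pick the longest answer option, exploiting the length cue noted above.

    Since incorrect options tend to be shorter than correct ones, always
    choosing the longest option does somewhat better than chance (25% for
    four options) on these datasets.
    """
    lengths = [len(option.split()) for option in answer_options]
    return lengths.index(max(lengths))

# Example: the heuristic picks index 2, the longest option.
options = ["Norse", "pirates", "raiders who settled in northern France", "Denmark"]
assert length_heuristic(options) == 2
```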
"Wh-words We find that the questions in the EXPERT and CROWD protocols have similar distributions of wh-words, with many why questions and few who or when questions compared to the BASELINE and JUSTIFICATION protocols, seemingly indicating that this additional feedback prompts workers to write more complex questions.", "Non-Passage-Specific Questions We also observe that many questions in the datasets are formulaic and include no passage-specific content, for instance Which of the following is true?, What is the main point of the passage?, and Which of the following is not mentioned in the passage?.", "We manually annotate 200 questions from each protocol for questions of this kind.", "We find that there is no clear association between a dataset's difficulty and the frequency of such questions: 15% of questions in EXPERT are generic, compared to 4% for CROWD, 10% for JUSTIFICATION, and 3% for BASELINE.", "We might expect that higher-quality examples that require reading a passage closely would ask questions that are specific rather than generic.", "But our results suggest that difficulty may be due more to the subtlety of the answer options, and the presence of distracting options, than to the complexity or originality of the questions.", "Order of Questions We elicit two questions per passage in all four protocols, with the hypothesis that the second question may be more difficult on aggregate.", "However, we find that there is only a slight drop in model accuracy from the first to the second question on the CROWD and EXPERT datasets (1.0 and 0.7 percentage points).", "And model accuracy on BASELINE remains stable, while it increases by 2.7 percentage points on JUSTIFICATION.", "A task design with minimal constraints, like ours, does not prompt workers to write an easier question followed by a more difficult one, or vice versa.", "Individual examples within any dataset can have different levels of difficulty.", "To better understand the distribution of difficult examples in each protocol, we turn to Item Response Theory (IRT; Baker and Kim, 1993), which has been used to estimate individual example difficulty based on model responses (Lalor et al., 2019; Martinez-Plumed et al., 2019).", "Specifically, we use the three-parameter logistic (3PL) IRT model, where an example is characterized by discrimination, difficulty, and guessing parameters.", "Discrimination defines how effective an example is at distinguishing between weak and strong models, difficulty defines the minimum ability a model needs to obtain high performance, and the guessing parameter defines the probability of a correct answer by random guessing.", "Following Vania et al. (2021), we use 90 Transformer-based models fine-tuned on RACE, with varying ability levels, and use their predictions on our four datasets as responses.", "For comparison, we also use model predictions on QuAIL and CosmosQA.", "Refer to Appendix F for more details.",
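"For reference, the 3PL model's response function takes the standard form"

\[
P(\text{correct} \mid \theta) \;=\; c + \frac{1 - c}{1 + e^{-a(\theta - b)}}
\]

"where theta is the latent ability of a model and a, b, and c are the discrimination, difficulty, and guessing parameters of an example; the fitting procedure itself is described in Appendix F and is not reproduced here."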
"Figure 4 shows the distribution of example difficulty for each protocol.", "Also plotted are the difficulty parameters for the intermediate rounds of data that are collected in the iterative feedback protocols (the IRT discrimination parameters range from 0.6 to 2.1, and the guessing parameters from 0.03 to 0.74; the distributions of both parameters across the four datasets are similar).", "We see that EXPERT examples have the highest median and 75th-percentile difficulty scores, while BASELINE scores the lowest.", "We also note that the greatest gain in difficulty for CROWD examples happens between rounds 1 and 2, the only feedback and qualification stage that is conducted by experts.", "This offers further evidence that expert assessments are more reliable, and that crowdsourcing such assessments poses a significant challenge.", "While the examples in EXPERT have higher difficulty scores than the other protocols, the scores are significantly lower than those for CosmosQA and QuAIL (all four datasets show similar discrimination scores to CosmosQA and QuAIL).", "The data collection methods used for both CosmosQA and QuAIL differ substantially from the methods we tested.", "Rogers et al. (2020) constrain the task design for QuAIL and require workers to write questions of specific types, like those targeting temporal reasoning.", "Similarly, in CosmosQA workers are encouraged to write questions that require causal or deductive commonsense reasoning.", "In contrast, we avoid dictating question type in our instructions.", "The IRT results here suggest that using prior knowledge to slightly constrain the task design can be effective for boosting example difficulty.", "In addition to differing task design, CosmosQA and QuAIL also use qualitatively different sources for passages.", "Both datasets use blogs and personal stories, and QuAIL also uses texts from published fiction and news.", "Exploring the effect of source text genre on crowdsourced data quality is left to future work.", "We present a study to determine effective protocols for crowdsourcing difficult NLU data.", "We run a randomized trial to compare interventions in the crowdsourcing pipeline and task design.", "Our results suggest that asking workers to write justifications is not a helpful stand-alone strategy for improving NLU dataset difficulty, at least in the absence of explicit incentives for workers to write high-quality justifications.", "However, we find that training workers using an iterative feedback and requalification protocol is an effective strategy for collecting high-quality QA data.", "The benefit of this method is most evident in the high-agreement subset of the data, where label noise is low.", "We find that using expert assessments to conduct this iterative protocol is fruitful; in contrast, crowdsourced assessments have much lower inter-annotator agreement, and the noisy signal from these assessments does not boost example difficulty.", "We thank Dhara Mungra for her early contributions to this project, and for being one of the expert graders during data collection.", "We also thank Daniel Khashabi for giving us access to UnifiedQA-v2 for our experiments.", "This work has benefited from financial support to SB by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program), Apple, and Intuit, and from in-kind support by the NYU High-Performance Computing Center and by NVIDIA Corporation (with the donation of a Titan V GPU).", "SS was supported by JST PRESTO Grant No. JPMJPR20C4.", "This material is based upon work supported by the National Science Foundation under Grant No. 1922658.",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.", "We are cognizant of the asymmetrical relationship between requesters and workers in crowdsourcing, and we take care to be responsive employers and to pay a wage commensurate with the high-quality work we're looking for.", "So, in addition to the ethical reasons for paying fair wages, our successes with collecting high-quality NLU data offer weak evidence that others should also follow this practice.", "However, the mere existence of more research on NLU crowdsourcing with positive results could arguably encourage more people to do crowdsourcing under a conventional model, with low pay and little worker recourse against employer malpractice.", "The only personal information we collect from workers is their Mechanical Turk worker IDs, which we keep secure and will not release.", "However, we do not engage with issues of bias during data collection, and we expect that the data collected under all our protocols will, at least indirectly, reinforce stereotypes.", "We confirmed with New York University's IRB that crowdsourced NLP dataset construction work, including experimental work on data collection methods, is exempt from their oversight." ]
[ "abstain", "abstain", "objective", "method", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "result", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "method", "method", "method", "method" ]
[ "Recent work on unsupervised question answering has shown that models can be trained with procedurally generated question-answer pairs and can achieve performance competitive with supervised methods.", "In this work, we consider the task of unsupervised reading comprehension and present a method that performs test-time learning (TTL) on a given context (text passage), without requiring training on large-scale human-authored datasets containing context-question-answer triplets.", "This method operates directly on a single test context, uses self-supervision to train models on synthetically generated question-answer pairs, and then infers answers to unseen human-authored questions for this context.", "Our method achieves accuracies competitive with fully supervised methods and significantly outperforms current unsupervised methods.", "TTL methods with a smaller model are also competitive with the current state-of-the-art in unsupervised reading comprehension.", "Reading comprehension is the task in which systems attempt to answer questions about a passage of text.", "Answers are typically found in the passage as text-spans or can be inferred through various forms of reasoning (Rajpurkar et al., 2016).", "The answer to the following question: Who is the President of the United States? depends on the timeframe and context of the passage provided, and will be different for news articles written in 2001 vs. 2021.", "If the context is the script of the TV series The West Wing, the answer is Jed Bartlet, and even in this fictional setting, it will later change to Matt Santos.", "Knowledge sources such as Wikipedia get updated when new events occur (such as the outcome of elections), or new facts about the world are revealed (such as scientific discoveries), with contributors adding new information and removing information that is no longer valid (Almeida et al., 2007).", "With such context-dependent answers and continual changes in knowledge, it is hard to justify training models over fixed corpora for tasks such as question answering (QA).", "We would like models to answer questions based on the given context and not to learn biases from datasets or historical news articles.", "Moreover, supervised learning has been shown to perform poorly in QA tasks with adversarial examples (Jia and Liang, 2017), domain shift (Jia and Liang, 2017; Yogatama et al., 2019; Kamath et al., 2020), and biased or imbalanced data (Agrawal et al., 2018; McCoy et al., 2019).", "For example, QA systems trained on Wikipedia fail to generalize to newer domains such as Natural Questions (Ren-nie et al., 2020) or biomedical data (Wiese et al., 2017), and suffer a significant drop in accuracy.", "Even small semantics-preserving changes to input sentences, such as the substitution of words by synonyms, have been shown to degrade performance in NLP tasks (Alzantot et al., 2018; Jia et al., 2019).", "Continual changes in text corpora are inevitable, thus calling for the development of robust methods that can reliably perform inference without being subject to biases.", "Supervised Question Answering faces challenges such as the need for large-scale (usually human-authored) training corpora to train models.", "Such corpora typically require significant postprocessing and filtering to remove annotation artifacts (Sakaguchi et al., 2020).", "To address these challenges, some recent methods (Lewis et al., 2019; Li et al., 2020) approach question answering as an unsupervised learning task.", "A significant advantage of this approach 
"A significant advantage of this approach is that it can be extended to domains and languages for which collecting a large-sized human-authored training corpus is challenging.", "Methods for unsupervised QA procedurally generate a large corpus of (context, question, answer) triples and train large neural language models, such as BERT (Devlin et al., 2019), on them.", "In this work, we focus on unsupervised reading comprehension (RC) under evolving contexts and present the \"Test-Time Learning\" paradigm for this task.", "RC, the task of answering questions about a passage of text, acts as the perfect setting for robust question-answering systems that do not overfit to training data.", "While large-scale language models trained on large datasets may contain global information, the answer needs to be extracted from the given context.", "Thus, our work seeks to learn unsupervised reading comprehension without access to human-authored training data, instead operating independently on each test context.", "This makes our method 'distribution-blind', where each new context is assumed to be a novel distribution.", "The test-time learning (TTL) framework enables smaller models to achieve improved performance with small procedurally generated sets of question-answer pairs, and is summarized below: (1) a single context (text passage) c_i is given, from which we procedurally generate QA pairs; (2) these QA pairs are used to train models to answer questions about c_i; (3) inference is performed on previously unseen questions for c_i (a sketch of this loop follows below).", "This framework makes the simple assumption that every context comes from a distinct distribution.", "Hence, parameters learned for the previous context might not be useful for generalizing to other contexts.", "This assumption holds where contexts evolve over time and rote memorization of answers might lead to wrong predictions.", "As such, the above process is repeated for each new context c_i.", "For question-answer generation, we use simple methods such as cloze translation (Lewis et al., 2019), template-based question-answer generation (Fabbri et al., 2020), and question-answer semantic role labeling (QA-SRL) (He et al., 2015).", "We use two neural transformer-based language models, BERT-Large (Devlin et al., 2019) and DistilBERT (Sanh et al., 2019), to study the efficacy of our framework with large and small transformer models.", "We evaluate our method on two reading comprehension datasets, SQuAD (Rajpurkar et al., 2016) and NewsQA (Trischler et al., 2017).",
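"The per-context loop can be summarized in a short skeleton; this is a sketch of the control flow only (the injected helpers stand in for the generators of Section 3 and for standard span-prediction training, and are not the authors' released code):"

```python
from typing import Callable, Iterable

def ttl_episode(
    context: str,
    test_questions: Iterable[str],
    make_model: Callable[[], object],
    generate_qa_pairs: Callable[[str], list[tuple[str, str]]],
    train: Callable[[object, str, list[tuple[str, str]]], None],
    predict: Callable[[object, str, str], str],
) -> list[str]:
    """One test-time-learning episode, following the three steps in the text.

    make_model returns freshly initialized parameters (pre-trained encoder,
    randomly initialized QA head); generate_qa_pairs is any of the synthetic
    generators; train runs the span cross-entropy updates; predict extracts
    an answer span from the context for a given question.
    """
    model = make_model()                # fresh parameters for every context
    pairs = generate_qa_pairs(context)  # step 1: synthetic (q, a) supervision
    train(model, context, pairs)        # step 2: self-supervised fine-tuning
    return [predict(model, context, q) for q in test_questions]  # step 3
```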
"We investigate test-time training under multiple learning settings: (1) single-context learning, the standard setting; (2) K-neighbor learning, retrieving the top-K related contexts for each test context; (3) curriculum learning, progressively training on question types in increasing order of complexity; and (4) online learning, sequentially fine-tuning models on each incoming test sample.", "Our experimental findings are summarized below: test-time learning methods are effective for the task of reading comprehension and surpass the current state-of-the-art on two benchmarks, SQuAD and NewsQA.", "Online TTL trained over the K-neighboring contexts of the test context is the best version, with EM/F1 gains of 7.3%/7.8% on SQuAD 1.1 and 5.3%/6.9% on NewsQA.", "DistilBERT, which has less than 1/5th the number of model parameters of BERT-Large, is competitive with current SOTA methods that use BERT-Large.", "Consider a reading comprehension test dataset D_test = {(c_i, q_i, a_i)}_{i=1}^{n}, with context text passages c_i, human-authored questions q_i, and true answers a_i.", "The QA model g(.) is parameterized by theta = (theta_f, theta_h), where theta_f are the parameters of the feature extractor and theta_h those of the answering head.", "The answer is predicted as a text span, given by the start and stop positions [y_start, y_stop].", "Contemporary unsupervised RC models (Lewis et al., 2019; Li et al., 2020) are trained on a large dataset D_train = {(c_i, q_i, a_i)}_{i=1}^{n}, where the QA pairs are synthetically generated from the context.", "In our setting, we do not use such large training datasets, but instead directly operate on individual test contexts c_i in D_test.", "Given c_i, M synthetic question-answer pairs {(q_i^j, a_i^j)}_{j=1}^{M} are procedurally generated as described in Section 3.", "The QA model parameters are trained over the synthetic data to predict the answer span [y_start, y_stop] by optimizing the loss:

\[ \min_{\theta} \; \sum_{j=1}^{M} \ell_{\mathrm{ans}}(c_i^j, q_i^j, \theta) \tag{1} \]

\[ \ell_{\mathrm{ans}} = \ell_{\mathrm{CE}}(y_{\mathrm{start}}, a_{\mathrm{start}}) + \ell_{\mathrm{CE}}(y_{\mathrm{stop}}, a_{\mathrm{stop}}) \tag{2} \]

where \(\ell_{\mathrm{CE}}\) is the cross-entropy loss.", "Single-Context Test-Time RC.", "This is the standard formulation of test-time learning in this paper, with Equation 1 optimized over theta; i.e., for each context c_i, the feature extractor theta_f is re-initialized with pre-trained BERT, and the answering head theta_h is randomly initialized.", "K-neighbor Test-Time RC.", "In this version, K contexts similar to the test context c_i are grouped together, and Equation 1 is optimized over each set of similar contexts, as opposed to single contexts in the standard setting.", "We index contexts in a Lucene-based information retrieval system (Gormley and Tong, 2015) and retrieve the top-K similar contexts given c_i, which we call Context Expansion with IR, described in Section 3.",
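"A minimal sketch of this retrieval step, assuming the Python Elasticsearch client (8.x keyword-argument style) and an index named wiki_paragraphs with a text field; the index layout and the abbreviated stopword list are illustrative assumptions:"

```python
from elasticsearch import Elasticsearch

STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "is"}  # abbreviated

def expand_context(es: Elasticsearch, context: str, k: int = 500) -> list[str]:
    """Retrieve the top-K indexed paragraphs most similar to the test context.

    The context itself, minus frequent stopwords, serves as the seed query,
    mirroring the Context Expansion with IR step described above.
    """
    query = " ".join(w for w in context.split() if w.lower() not in STOPWORDS)
    resp = es.search(index="wiki_paragraphs",
                     query={"match": {"text": query}}, size=k)
    return [hit["_source"]["text"] for hit in resp["hits"]["hits"]]
```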
"Curriculum Test-Time RC.", "In the curriculum learning version, questions are ordered in increasing order of complexity.", "We generate different types of questions, such as semantic role labeling, cloze completion, template-based, and dependency-tree-based translation of cloze questions to natural questions.", "This provides an ordering of complexity, and we study the effect of test-time training under such increasing complexity.", "Online Test-Time RC.", "In online test-time learning (TTLO), test samples are considered to be encountered in sequence.", "As such, the answering head parameters theta_h are updated sequentially, without being randomly re-initialized as in the standard single-context setting.", "For each new test context c_i, theta_h is initialized with the optimal parameters from the previous test context c_{i-1} to optimize Equation 1.", "Self-Supervised QA Generation In this section, we detail our framework for procedurally generating QA pairs from a given context.", "We use named-entity recognition from spaCy (Honnibal and Montani, 2017), dependency parsing from the Berkeley Neural Parser (Stern et al., 2017), and semantic role labeling (He et al., 2015) as our core methods to extract plausible answers and generate natural questions.", "As described in our task formulation, we create a set of M question-answer pairs {(q_i^j, a_i^j)}_{j=1}^{M} for the given context c_i.", "Cloze Generation.", "Statements in which the answer is replaced with a mask or blank token are called cloze questions.", "We follow the steps provided in Lewis et al. (2019), in which answers are replaced with a special token depending on the answer category.", "For example, in the sentence They were descended from Norse raiders and pirates from Denmark, the answer Denmark is replaced by [LOCATION], resulting in the cloze question They were descended from Norse raiders and pirates from [LOCATION].", "Cloze Translation.", "Cloze questions are rephrased into more natural questions by using rule-based methods from Lewis et al. (2019).", "Template-based Question Generation utilizes simple template-based rules to generate questions.", "Given a context of the format [Fragment A] [Answer] [Fragment B], a template of the format Wh + B + A + ? replaces the answer with a wh-word (e.g., who, what, where), as described in Fabbri et al. (2020); a sketch of this generator follows below.", "Dependency Parsing-based Question Generation.", "In this method, we use dependency reconstruction to translate clozes to natural questions, as described in Li et al. (2020), according to the following steps: 1. Right child nodes of the answer are retained and left children are pruned.", "2. For each node of the parse tree, if the child node's subtree contains the answer, the child node is moved to the first child position.", "3. An in-order traversal is performed on the reconstructed tree.", "A rule-based mapping is applied to replace the special mask token of the cloze with an appropriate wh-word.",
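"As a concrete illustration, a minimal sketch of the template-based generator (assuming spaCy's en_core_web_sm model is installed; the wh-word mapping here is abbreviated and illustrative):"

```python
import spacy  # the framework uses spaCy NER to find answer candidates

nlp = spacy.load("en_core_web_sm")
WH = {"PERSON": "Who", "GPE": "Where", "LOC": "Where", "DATE": "When"}

def template_questions(sentence: str) -> list[tuple[str, str]]:
    """Generate (question, answer) pairs with the [Fragment A][Answer][Fragment B]
    -> 'Wh + B + A + ?' template described above."""
    pairs = []
    for ent in nlp(sentence).ents:
        if ent.label_ not in WH:
            continue
        frag_a = sentence[: ent.start_char].strip(" ,.")
        frag_b = sentence[ent.end_char :].strip(" ,.")
        question = f"{WH[ent.label_]} {frag_b} {frag_a}?".replace("  ", " ")
        pairs.append((question, ent.text))
    return pairs

# "They were descended from Norse raiders and pirates from Denmark."
# -> ("Where They were descended from Norse raiders and pirates from?", "Denmark")
```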
"QA-Semantic Role Labeling (QA-SRL) was proposed by He et al. (2015) as a method to annotate NLP data, using QA pairs to specify textual arguments and their roles.", "As seen in Figure 1, for the context sentences They were descended from Norse raiders and pirates from Denmark. and The distinct cultural and ethnic identity of the Normans emerged initially in the first half of the 10th century and it continued to evolve., the following QA pairs are generated: (What was someone descended from?, Norse) and (What evolved?, distinct cultural and ethnic identity).", "We can observe that the questions are short and use generic descriptors and pronouns, such as something and someone, instead of specific references, calling for the model to have greater semantic understanding of the given context.", "Context Expansion using IR is used in the K-neighbor version of TTL.", "For context expansion, we index all paragraphs present in a Wikipedia dump in ElasticSearch.", "During test-time learning, we preprocess the context c_i by removing the most frequent stop-words and use it as a seed query to search and retrieve the top-K similar contexts.", "This provides us with related paragraphs that describe similar topics, and consequently a more diverse and slightly larger set of QA pairs to train on, compared to only c_i.", "We then generate QA pairs using the methods described above.", "We study the effect of varying the number of most similar contexts (K) on downstream QA performance.", "Datasets.", "We evaluate our learning framework on two well-known reading comprehension datasets: SQuAD 1.1 (Rajpurkar et al., 2016) and NewsQA (Trischler et al., 2017).", "QA Model.", "We focus on training two transformer-encoder-based models: BERT-Large (Devlin et al., 2019), trained with whole-word masking, and DistilBERT (Sanh et al., 2019).", "BERT-Large is used by current state-of-the-art methods on unsupervised extractive QA tasks and has 345 million trainable parameters.", "On the other hand, DistilBERT is a knowledge-distilled transformer-encoder-based model and only has 66 million parameters (about 5x smaller than BERT-Large), allowing us to study the efficacy of TTL with respect to model size.", "Metrics.", "We use the standard metrics for extractive QA: macro Exact Match, where the predicted answer span is directly matched with the ground truth, and macro F1, which measures the overlap between the predicted and ground-truth spans.", "For comparisons with existing unsupervised methods, since TTL operates directly on test instances, we report validation set performance only for SQuAD 1.1, as the test set is hidden.", "Training Setup.", "For all test-time learning variants, we limit the maximum number of questions generated per context to 4,000 and the maximum number of training steps to 1,500.", "The number of training steps is linearly dependent on the selected batch size [16, 64].", "For our K-neighbor TTL setup that uses Context Expansion, we limit the number of retrieved contexts to 500.", "In Curriculum Test-Time RC, we ensure that all variants have an equal number (1,000) of generated QA pairs per context.", "We evaluate multiple learning rates within the range 1e-5 to 5e-5.", "We use the Adam (Kingma and Ba, 2014) optimizer and truncate the paragraphs to a maximum sequence length of 384.", "The number 384 was chosen by evaluating the 99th percentile of the combined length of questions and contexts, to reduce training overhead and GPU memory usage.", "Long documents are split into multiple windows with a stride of 128.",

Table 1: Results (EM / F1) from supervised methods on SQuAD 1.1 and NewsQA.

                    SQuAD 1.1 Dev   SQuAD 1.1 Test   NewsQA Dev    NewsQA Test
DCR (2016)          62.5 / 71.2     62.5 / 71.0      -             -
mLSTM (2016)        64.1 / 73.9     64.7 / 73.7      34.4 / 49.6   34.9 / 50.0
FastQAExt (2017)    70.3 / 78.5     70.8 / 78.9      43.7 / 56.1   42.8 / 56.1
R-NET (2017)        71.1 / 79.5     71.3 / 79.7      -             -
BERT-Large (2019)   84.2 / 91.1     85.1 / 91.8      -             -
SpanBERT (2020)     -               88.8 / 94.6      -             - / 73.6
DistilBERT (2019)   77.7 / 85.8     -                57.2 / 64.8   56.1 / 63.5
"All experiments were conducted on two Nvidia RTX-8000 GPUs.", "We use ten percent of the training data to perform three hyper-parameter trials for each variant.", "We train models with three random seeds and report the mean F1 and EM scores.", "Baselines.", "As we generate our own data using QA-SRL, we use the following strong baselines.", "First, we train BERT-Large with generated data from the previous methods described in Section 3 and from our method (which contains additional QA-SRL samples).", "Second, we replicate the baselines using the low-parameter-count model DistilBERT (66 million parameters vs. 345 million for BERT-Large).", "Third, for a fair comparison to Single-Context and K-neighbor test-time learning, where we train models for each context independently, we propose a baseline where we train on all the test contexts together, referred to as All test contexts.", "We also evaluate all TTL variants on two initializations of feature-extractor parameters: 1. the default initialization of BERT-Large, i.e., theta_f pre-trained on masked language modeling and next-sentence prediction, with theta_h randomly initialized for each context and trained from scratch; or 2. theta_f and theta_h further pre-trained on 100K synthetic QA pairs generated procedurally using the methods described in Section 3, with contexts taken from the Wikipedia corpus.", "We compare our results with current state-of-the-art supervised methods (Table 1) and unsupervised methods (Table 2) on SQuAD 1.1 and NewsQA.", "The previous best unsupervised method with both BERT-Large and DistilBERT is Li et al. (2020).", "Our best TTL method is the Online version (TTLO), with a pre-training phase and a randomly shuffled ordering of QA pairs, with an average of 3,000 QA pairs per context, trained with only 100 steps.", "With this setup, we are able to improve the state-of-the-art for the SQuAD benchmark with BERT-Large by 7.3% exact-match accuracy and 7.8% F1 score.", "With DistilBERT, the best TTL method shows an improvement of 15.5% EM and 20.6% F1 over the DistilBERT-based baseline, as shown in Table 2.", "On NewsQA, TTL improves BERT-Large performance by 5.3% EM and 6.9% F1 score, and with DistilBERT shows an improvement of 7.2% EM and 7.2% F1 score.", "Training BERT-Large and DistilBERT with our data, i.e., with a combined synthetic corpus created via all four QA-pair generation methods, marginally improves the F1 score.", "This shows that our QA generation methods lead to an improvement over existing unsupervised QA generation methods, as shown in Table 2.", "However, the TTL framework leads to even larger gains (about 20% for SQuAD and 10% for NewsQA), indicating the benefits of test-time learning.", "(Figure 2: Comparison of F1 scores of TTL models when trained with an increasing number of labeled training samples on SQuAD; TTLO denotes Online TTL.)",
"This result also points to the limits of training with a large number of contexts compared to training on individual contexts.", "This limitation is especially profound in lower-parameter models, such as DistilBERT.", "In reading comprehension, since the answer comes from the context, understanding the context is much more relevant.", "This provides a higher inductive bias than learning to comprehend a significantly large number of contexts during training.", "For instance, there are multiple contexts about Normans in the SQuAD dataset, one of which is shown in Figure 1.", "But each context may have different historical persons referred to as the leaders or rulers of the Normans.", "Answers to questions such as Who was the leader of the Normans? are better learned for each context separately than from all contexts.", "Pre-training on several contexts is indeed beneficial for obtaining better parameter initializations, as observed in Table 2, which can then be independently fine-tuned for each context during TTL.", "We evaluate our best method under the few-shot setting, i.e., when models are trained with a limited number of human-authored QA pairs from the training datasets.", "Figure 2 shows a comparison with an increasing number of labeled training samples for SQuAD.", "TTL-Online is consistently better than existing methods and achieves an 81.6% F1 score with just 100 labeled samples.", "This indicates that this learning framework can reduce the number of in-domain human-authored samples required for training.", "TTL-Online is also consistently better than Li et al. (2020), which is the previous best unsupervised method for SQuAD.", "All methods (which use BERT-Large as the backbone) converge to similar performance with an increasing number of additional human-authored samples.",
test-time learning in Figure 4 and Table", "3. While F1 scores achieved without pre-training are comparable to prior methods, pre-training leads to improved performance and also faster convergence, as shown in Figure", "4. This can be attributed to better initial weights, which are further finetuned during the test-time learning phase.", "We studied pretraining with 50 k , 100 k , and 200 k QA pairs and observed the best performance with 100 k samples.", "Curriculum Test-time learning.", "In Table 4 we study the effect of curriculum TTL, compared to the baseline of the default random-shuffled QA pairs.", "Interestingly, using a random ordering rather than a defined curriculum begets the best performance.", "Among the three curriculum ordering that we utilized, [QA-SRL, TEMPLATE-BASED (T), DP (DEPENDENCYPARSING-BASED )] was effective but slightly lower than the performance with random ordering.", "However, training with QA-SRL at the end has a distinctly negative effect.", "We hypothesize that the model starts to overfit to the shorter vague questions from QA-SRL and for-gets\" more natural questions.", "Hence, it loses generalizability to the human-authored questions.", "and evaluated on a continuous stream of contexts and QA-pairs.", "From Table 3 and Figures 3, 4 and 5, we can observe that TTL-Online consistently outperforms the single-context variant.", "One key observation is that the model achieves its best performance within 100 training steps (batch size of 48 ), whereas the base version needs around 300 to 500 steps.", "This fast adaptation enables a faster inference time, compared to h being trained from scratch.", "We studied the effect of different random orderings of the test samples and observed the deviation as 1 .", "6 % in F1 scores, which indicates ordering of test samples has a minor effect.", "Effect of Batch Size and Learning Rate.", "Batch-size and learning rate have strong effects on online test-time learning.", "We observe that resuming with the learning rate of the last epoch of the pre-training with synthetic QA pairs achieves the best F1 scores.", "We do not use any weight decay.", "A persistent optimizer state between contexts is critical.", "Similarly, we hypothesize that the batch-layer normalization statistics pre-computed in transformer encoder layers get updated in further pre-training with QA pairs, leading to a better estimation during TTL.", "For the base variant of TTL, a higher, fixed learning rate of 3e-5 with a batch size of 32-48 achieves the best F1 scores.", "Effect of number of Training steps and QA pairs is studied in Figures 4 and", "5. 
"Effect of Number of Training Steps and QA Pairs.", "These effects are studied in Figures 4 and 5.", "To limit inference time per test context, we observe that TTL variants initialized with pre-trained theta achieve their top performance within 150 training steps, whereas those trained with the default initialization need 200-300 steps.", "In Figure 5, we can observe that the variants achieve their best F1 scores around 3K QA pairs.", "This appears consistent with 100 training steps at a batch size of 24-32.", "Surprisingly, DistilBERT with pre-trained theta performs equally well compared to BERT-Large with no pre-training on synthetic question-answer pairs.", "Effect of TTL on Inference Time.", "TTL and its variants all increase inference time as compared to traditional inference.", "For the best variant of TTL-Online with BERT-Large, we train for 100 steps with a batch size of 48 samples, which leads to an inference time of about 5 minutes per context.", "Each context contains, on average, 6-7 questions in SQuAD 1.1 and NewsQA.", "The best variant of DistilBERT has a lower average inference time of 1.6 minutes per context, achieved by employing several engineering tricks, such as saving models to RAM instead of disk by using tmpfs (Snyder, 1990), and using mixed-precision training (Micikevicius et al., 2018).", "In comparison, non-TTL methods have inference times in the range of 10K samples/sec on an Nvidia V100 16GB GPU.", "TTL inference time is limited by the current computational power of GPUs, but this is potentially remediable.", "With increases in CUDA cores and RAM size in GPUs, we estimate the inference time can be further improved.", "Moreover, with newer, more efficient transformer architectures, such as Linformer (Wang et al., 2020) and Big Bird (Zaheer et al., 2020), this inference time could be further reduced.", "It will be interesting future work to increase TTL's efficiency further while retaining its strength of generalizing to evolving distributions.", "Error Analysis.", "We analyzed 100 wrongly answered samples from the SQuAD validation split and observed that the model is biased towards answering with named entities.", "This is not unexpected, as most of our QA-pair generation methods focus on named-entity answers.", "For example, for the question Is it easier or harder to change EU law than stay the same?, the TTL DistilBERT model generates EU, whereas the ground-truth answer is harder.", "Although QA-SRL generates more diverse answers, the corresponding questions are vague and much more synthetic, leaving scope for improving QA-pair generation to include a greater variety of question and answer types in the future.", "Another source of errors is the alternate plausible answers generated by our models, shown in Table 5.",
6 Related Work. Extractive QA.", "The goal of extractive question answering (EQA) is to predict a span of text in a context document as the answer to a question.", "Various benchmarks have been established to evaluate the capability of EQA models on corpora from different domains, such as Wikipedia-based question answering in SQuAD (Rajpurkar et al., 2016) and the Natural Questions dataset (Kwiatkowski et al., 2019), as well as questions requiring complex reasoning to extract answers in HotPotQA (Yang et al., 2018); questions about news articles in NewsQA (Trischler et al., 2017); and about trivia facts in TriviaQA (Joshi et al., 2017).", "Unsupervised QA.", "For many of the aforementioned extractive QA benchmarks, human-like performance has been reached via supervised methods.", "Unfortunately, these methods do not transfer well to new domains, and the collection of training data in new domains and new languages may not always be feasible.", "To address this, unsupervised EQA has been proposed as a challenge (Lewis et al., 2019), in which aligned (context, question, answer) triplets are not available.", "Self-supervised data-synthesis methods (Lewis et al., 2019; Banerjee and Baral, 2020; Rennie et al., 2020; Fabbri et al., 2020; Li et al., 2020; Banerjee et al., 2020) have been used for question answering by procedurally generating QA pairs and training models on these synthetic data.", "Self-Supervised Learning.", "The key idea in self-supervision is to design auxiliary tasks that extract semantic features from unlabeled samples, for which input-output data samples can be created from unlabeled datasets.", "Self-supervision has been used to train large transformer-based language models such as BERT (Devlin et al., 2019) and T5 (Raffel et al., 2020) for the auxiliary task of masked token prediction, and XLNet (Yang et al., 2019) for token prediction given any combination of other tokens in the sequence.", "ELECTRA (Clark et al., 2019), instead of masking tokens, jointly trains a generator to substitute input tokens with plausible alternatives and a discriminator to predict the presence or absence of substitution.", "MARGE (Lewis et al., 2020) is trained to retrieve a set of related multilingual texts for a target document, and to reconstruct the target document from the retrieved documents.", "The goal of self-supervised pretext task design is to come up with tasks that are as close as possible to the main task, in order to learn better representations.", "In NLP, the QA format provides such an opportunity, where we can leverage NER, SRL, and Cloze completion as auxiliary tasks for complex QA.", "Similar ideas appear in methods for single-image super-resolution (Glasner et al., 2009; Freedman and Fattal, 2011; Shocher et al., 2018) that do not require access to external training datasets but instead formulate a self-supervised task for upsampling natural image patches recurring at different scales in the image.", "Test-time training (TTT) (Sun et al., 2020) for image classification makes use of rotation prediction (Gidaris et al., 
2018) as an auxiliary task to implicitly learn image classification at test time, and shows improved robustness.", "While we can directly synthesize main-task data (QA pairs) from the context and do not require an auxiliary task, our work is closely related to TTT.", "Domain Adaptation.", "Pre-training on tasks such as masked language modeling or other synthetic tasks over unlabeled corpora from a new domain has been evaluated for commonsense reasoning (Mitra et al., 2019) and classification tasks (Gururangan et al., 2020).", "On the other hand, our work can be viewed as task-specific self-supervision with each new context as a new domain.", "In this work, we propose test-time learning (TTL) as a new framework for unsupervised extractive question answering (EQA).", "We present four variants of TTL with a simple but effective context expansion method.", "We utilize four question-answer pair generation methods for EQA and propose using QA-SRL as an additional source of QA pairs, to supplement prior methods.", "We show that TTL enables understanding of contexts at test time, without human-authored annotations, and significantly improves EQA, including for low-parameter models.", "We envision TTL as a framework that can direct work in reading comprehension to be viewed as a problem of ever-evolving datasets instead of a static corpus.", "Natural language itself undergoes continuous evolution (Gentner and France, 1988; Traugott and Dasher, 2001; Hamilton et al., 2016) via changes in preference for syntactic structures; creation of new words and phrases; and changing usage frequencies and semantics for existing words.", "TTL can potentially be applied to such scenarios with semantic drift or domain shift.", "Further improvements w.r.t. the selection of similar contexts for K-neighbor TTL could be explored by leveraging hard sample selection, hard negative mining, bootstrapping, and contrastive learning, along with improved curriculum strategies.", "The authors acknowledge support from the DARPA SAIL-ON program W911NF2020006, ONR award N00014-20-1-2332 and NSF grant 1816039; and thank the reviewers for their feedback.", "Our test-time learning method treats every new test instance as a new distribution, and does not rely on a human-authored training dataset.", "We believe that this is a possible way to avoid learning spurious correlations or linguistic priors, especially when it comes to socio-cultural and historical biases that have been shown to percolate into models for various NLP tasks (Hendricks et al., 2018; Kurita et al., 2019; Sheng et al., 2019).", "On the other hand, if the test context itself contains biased, false, or propaganda statements, our model will use those statements to extract answers.", "We would not want models trained on such data to be deployed in the real world.", "However, because model parameters are randomly initialized for each new context in the standard version of our framework, if contexts are fact-checked by reliable sources, then we believe our model will be relatively bias-free, as compared to pre-trained language models, for which it is hard to trace why a certain prediction was made.", "Test-time learning allows us to disentangle biases learned from single contexts from biases learned by language models from large corpora." ]
[ "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain" ]
[ "In encoder-decoder neural models, multiple encoders are in general used to represent the contextual information in addition to the individual sentence.", "In this paper, we investigate multi-encoder approaches in document-level neural machine translation (NMT).", "Surprisingly, we find that the context encoder does not only encode the surrounding sentences but also behaves as a noise generator.", "This makes us rethink the real benefits of multi-encoder in context-aware translation some of the improvements come from robust training.", "We compare several methods that introduce noise and/or well-tuned dropout setup into the training of these encoders.", "Experimental results show that noisy training plays an important role in multi-encoder-based NMT, especially when the training data is small.", "Also, we establish a new state-of-the-art on IWSLT FrEn task by careful use of noise generation and dropout methods.", "Sentence-level neural machine translation (NMT) systems ignore the discourse phenomena and encode the individual source sentences with no use of contexts.", "In recent years, the context-aware models which learn contextual information from surrounding sentences have shown promising results in generating consistent and coherent translations (Zhang et al., 2018; Voita et al., 2018; Kim et al., 2019; Voita et al., 2019; Bawden et al., 2018; Miculicich et al., 2018; Maruf and Haffari, 2018; Maruf et al., 2019).", "to form a context-aware input sequence (Agrawal et al., 2018; Tiedemann and Scherrer, 2017), whereas a more widely-used approach utilizes additional neural networks to encode context sentences (Jean et al., 2017; Voita et al., 2018; Zhang et al., 2018).", "Here we name the former as the single-encoder approach and name the latter as the multi-encoder approach.", "However, large-scale document corpora are not easily available.", "Most context-aware NMT systems are evaluated on small datasets and sig-nificant BLEU improvements are reported (Wang et al., 2017; Zhang et al., 2018; Tu et al., 2018).", "In our experiments, we find that the improvement persists if we feed pseudo sentences into the context encoder, especially when we train the system on small-scale data.", "A natural question here is: How much does the improvement come from the leverage of contextual information in multi-encoder ?", "In this work, we aim to investigate what kinds of information that the context-aware model captures.", "We re-implement several widely used context-aware architectures based on the multi-encoder paradigm, and do an in-depth analysis to study whether the context encoder captures the contextual information.", "By conducting extensive experiments on several document-level translation benchmarks, we observe that: The BLEU gaps between sentence-level and context-aware models decrease when the sentence baselines are carefully tuned, e.g., proper use of dropout.", "The multi-encoder systems are insensitive to the context input.", "Even randomly sampled sentences can bring substantial improvements.", "The model trained with the correct context can achieve better performance during inference without the context input.", "Our contribution is two folds:", "(i) We find that the benefit of the multi-encoder context-aware approach is not from the leverage of contextual information.", "Instead, the context encoder acts more like a noise generator to provide richer training signals.", "(ii) The finding here inspires us to develop a simple yet effective training strategy: we add a Gaussian-noise to 
the encoder output, which can effectively alleviate overfitting, especially on small datasets.", "Here we describe two ways of introducing contextual information into NMT systems.", "The input of the single-encoder system is the concatenation of the context sentences and the current sentence, with a special symbol inserted to distinguish them (Tiedemann and Scherrer, 2017; Agrawal et al., 2018).", "Then the extended sentence is fed into the standard Transformer.", "These systems may face the challenge of encoding extremely long inputs, resulting in inefficient computation.", "The multi-encoder models take the surrounding sentences as the context and employ an additional neural network to encode the context; that is, we have a source-sentence encoder and a context encoder.", "Figure 1 shows two methods of integrating the context into NMT in the multi-encoder paradigm.", "Next we show that most of the multi-encoder approaches (Voita et al., 2018; Zhang et al., 2018) are instances of the models described below.", "Outside integration.", "As shown in Figure 1(a),", "the representations of the context and the current sentence are first transformed into a new representation by an attention network.", "Then the attention output and the source sentence representation are fused by a gated sum.", "Inside integration.", "Alternatively, the decoder can attend to the two encoders separately", "(Figure 1(b)).", "Then, the gating mechanism inside the decoder is employed to obtain the fusion vector.", "We evaluated the document-level approaches on several publicly available datasets.", "For Chinese-English (Zh-En) and French-English (Fr-En), we used TED talks from the IWSLT15 and IWSLT16 (Cettolo et al., 2012) evaluation campaigns as the training data.", "We validated on dev2010, and tested on tst2010-2013 (Zh-En) and tst2010 (Fr-En), respectively.", "For English-German (En-De), we evaluated on the WMT18 task, using News-Commentary v14 as the training set.", "For more convincing results, we also randomly sampled 500k/1M/2M/5M sentence pairs from the Chinese-English corpus provided by WMT (http://www.statmt.org/wmt19/translation-task.html) and tested on newstest2017.", "We preprocessed the sentences, except the Chinese ones, with the Moses tokenizer (http://www.statmt.org/moses) and used byte pair encoding (Sennrich et al., 2016) with 32K merge operations to", "segment words into sub-word units.", "The Chinese sentences were word-segmented by the tool provided within NiuTrans (Xiao et al., 2012).", "For the Fr-En and Zh-En tasks, we lowercased all sentences to obtain results comparable with previous work.", "We also conducted experiments on a larger English-Russian (En-Ru) dataset provided by Voita et al. (2018), consisting of 2M sentence pairs selected from the publicly available OpenSubtitles2018 corpus.", "The data statistics of each language pair can be seen in Table 1.", "We chose the Transformer-base model as the sentence-level baseline.", "The context encoder also used the same setting as the sentence-level baseline.", "We used Adam (Kingma and Ba, 2014) for optimization, and trained the systems on a single Titan V GPU (for En-Ru and Zh-En, we trained models on 4 GPUs).", "The learning rate strategy was the same as that used in Vaswani et al. 
(2017).", "Our implementation was based on Fairseq (Ott et al., 2019).", "More details can be found in our repository 5 .", "To study whether the context-encoder network captures contextual information in training, we present three types of context as the input of the context-encoder:", "Context : the previous sentence of the current sentence.", "Random : a sentence consisting of words randomly sampled from the source vocabulary.", "Fixed : a fixed sentence input for context-encoder.", "Weight sharing (Voita et al., 2018) and two-stage training (Zhang et al., 2018) strategies have been proven essential to build strong context-aware systems.", "The former shared the first N-1 blocks of 4 For En-Ru and Zh-En we trained models on 4 GPUs 5 The source code is available at https://github.", "context encoder with the source encoder, and the latter first trained a standard sentence-level Transformer and finetuned the document-level Transformer with an extra context-encoder.", "We first evaluated the importance of two training strategies for multi-encoder systems.", "We selected the multi-encoder with Outside integration (see Section 2) as the context-aware model and trained systems with two training strategies on the En-De task respectively.", "As shown in Table 2, we find that both two strategies outperform the sentence-level baseline by a large margin.", "The model with two-stage training performs slightly better than the weight-sharing system in terms of BLEU.", "To our surprise, the context-encoder with a single-layer can compete with a six-layers model.", "We suspect that this is because the training data is limited and we do not need a sophisticated model to fit it.", "Therefore, we choose the two-stage training and single-layer context-encoder for all experiments in the remainder of this paper.", "Table 3 shows the results of several context-aware models on different datasets.", "We see, first of all, that all multi-encoder models, including both Inside and Outside approaches outperform the sentence-level baselines by a large margin on the Zh-En and En-De datasets with a small p value of dropout .", "Also, there are modest BLEU improvements on the Fr-En and En-Ru tasks.", "When the models are regularized by a larger dropout , all systems obtain substantial improvements but the gaps between sentence-level and multi-encoder systems decrease significantly.", "We deduce that if the context-aware systems rely on the contextual information from the preceding sentence, the performance of Random and Fixed should dramatically decrease due to the incorrect context.", "Surprisingly, both Random and Fixed systems achieve comparable performance or even System Zh-En Fr-En En-De En-Ru p = 0 .", "higher BLEU scores than Context in most cases (See Table 3).", "A possible explanation is that the context encoder does not only model the context.", "Instead, it acts more like a noise generator to provide additional supervised signals to train the sentence-level model.", "To verify the assumption of robust training, we followed the work (Srivastava et al., 2014; Berger et al., 1996).", "We turned off the context-encoder during the inference process, and made the inference system perform as the sentence-level baseline.", "Table 4 shows that both Context and Random inference without context-encoder obtain modest BLEU improvements.", "This confirms that the information extracted by context-encoder just plays a role like introducing randomness into training (e.g., dropout ), which is a popular method used in robust 
statistics.", "We argue that three types of context provide noise signals to disturb the distribution of the sentence-level encoder output.", "The BLEU improvements of both Outside and Inside are mainly due to the richer noise signals which can effectively alleviate the overfitting.", "signed a simple yet effective method to regularize the training process: A Gaussian noise is added to the encoder output instead of the embedding (Cheng et al., 2018).", "We sample a vector (cid:15) N (cid:0) 0 , 2 I (cid:1) from a Gaussian distribution with variance 2 , where is a hyper-parameter.", "As seen in Table 5, the systems with Gaussian-noise significantly outperform the sentence-level baselines, and are slightly better than the Outside-context counterpart.", "Moreover, a natural question is whether further improvement can be achieved by combining the Context with the Gaussian-noise method.", "From the last line in Table 5, we observe no more improvement at all.", "The observation here convinced the assumption again that the context-encoder plays a similar role with the noise generator.", "Most previous results are reported on small training datasets.", "Here we examine the effects of the noise-based method on different sized datasets.", "We trained the Inside-Random model and the Gaussian-noise model on different datasets consisting of 500K to 5M sentence pairs.", "Seen from Figure 2, the baseline model achieves better translation performance when we increase the data size.", "More interestingly, it is observed that InsideRandom and Gaussian-noise perform slightly better than 500k 1M 2M 5M 18 20 22 24 Data Volume BLEU Base Inside Gaussian Figure 2: BLEU scores vs. different data volume on Zh-En sentence-level dataset.", "the baseline, and the gaps gradually decrease with the volume increasing.", "This is reasonable that models trained on large-scale data may suffer less from the overfitting problem.", "Context-aware NMT systems incorporating the contextual information generate more consistent and coherent translations than sentence-level NMT systems.", "Most of the current context-aware NMT models can be classified into two main categories, single-encoder systems (Tiedemann and Scherrer, 2017) and multi-encoder systems (Jean et al., 2017; Voita et al., 2018; Zhang et al., 2018).", "Voita et al. (2018) and Zhang et al. (2018) integrated an additional encoder to leverage the contextual information into Transformer-based NMT systems.", "Miculicich et al. (2018) employed a hierarchical attention network to model the contextual information.", "Maruf and Haffari (2018) built a context-aware NMT system using a memory network, and Maruf et al. (2019) encoded the whole document with selective attention network.", "However, most of the work mentioned above utilized more complex modules to capture the contextual information, which can be approximately regarded as multi-encoder systems.", "For a fair evaluation of context-aware NMT methods, we argue that one should build a strong enough sentence-level baseline with carefully regularized methods, especially on small datasets (Kim et al., 2019; Sennrich and Zhang, 2019).", "Beyond this, Bawden et al. (2018) and Voita et al. 
(2019) acknowledged that the BLEU score is insufficient to evaluate context-aware models, and they emphasized that multi-encoder architectures alone had a limited capacity to exploit discourse-level context.", "In this work, we take a further step to explore the main cause, showing that the context encoder acts more like a noise generator, and that the BLEU improvements mainly come from robust training rather than from the leverage of contextual information.", "Additionally, Cheng et al. (2018) added Gaussian noise to word embeddings to simulate lexical-level perturbations for more robust training.", "In contrast, we added Gaussian noise to the encoder output, which plays a role similar to the context encoder in that it provides additional training signals.", "We have shown that, in multi-encoder context-aware NMT, the BLEU improvement is not attributable to the leverage of contextual information.", "Even when we feed incorrect context into training, the NMT system can still obtain substantial BLEU improvements on several small datasets.", "Another observation is that the NMT models can even achieve better translation quality without the context encoder.", "This leads to the interesting finding that the context encoder acts more like a noise generator, which provides rich supervised training signals for robust training.", "Motivated by this, we significantly improve the sentence-level systems with Gaussian noise imposed on the encoder output.", "Experiments on large-scale training data demonstrate the effectiveness of this method.", "This work was supported in part by the National Science Foundation of China (Nos. 61876035 and 61732005), the National Key R&D Program of China (No. 2019QY1801) and the Opening Project of Beijing Key Laboratory of Internet Culture and Digital Dissemination Research.", "The authors would like to thank the anonymous reviewers for their comments." ]
[ "abstain", "objective", "result", "objective", "method", "abstain", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "method", "result", "objective", "objective", "objective", "objective", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "method", "other", "objective", "other", "method", "result", "result", "abstain", "method", "objective", "abstain", "other", "other" ]
[ "Semantic representations in the form of directed acyclic graphs (DAGs) have been introduced in recent years, and to model them, we need probabilistic models of DAGs.", "One model that has attracted some attention is the DAG automaton, but it has not been studied as a probabilistic model.", "We show that some DAG automata cannot be made into useful probabilistic models by the nearly universal strategy of assigning weights to transitions.", "The problem affects single-rooted, multi-rooted, and unbounded-degree variants of DAG automata, and appears to be pervasive.", "It does not affect planar variants, but these are problematic for other reasons.", "Abstract Meaning Representation (AMR; Ba-narescu et al. 2013) has prompted a flurry of interest in probabilistic models for semantic parsing.", "AMR annotations are directed acyclic graphs (DAGs), but most probabilistic models view them as strings (e.g. van Noord and Bos, 2017) or trees (e.g. Flanigan et al., 2016), removing their ability to represent coreferenceone of the very aspects of meaning that motivates AMR.", "Could we we instead use probabilistic models of DAGs?", "To answer this question, we must define probability distributions over sets of DAGs.", "For inspiration, consider probability distributions over sets of strings or trees, which can be defined by weighted finite automata (e.g. Mohri et al., 2008; May et al., 2010): a finite automaton generates a set of strings or treescalled a languageand if we assume that probabilities factor over its transitions, then any finite automaton can be weighted to define a probability distribution over this language.", "This assumption underlies powerful dyEqual contribution.", "namic programming algorithms like the Viterbi,", "forward-backward, and inside-outside algorithms.", "What is the equivalent of weighted finite automata for DAGs?", "There are several candidates (Chiang et al., 2013; Bjorklund et al., 2016; Gilroy et al., 2017), but one appealing contender is the DAG automaton (Quernheim and Knight, 2012) which generalises finite tree automata to DAGs explicitly for modeling semantic graphs.", "These DAG automata generalise an older formalism called planar DAG automata (Kamimura and Slutzki, 1981) by adding weights and removing the planarity constraint, and have attracted further study (Blum and Drewes, 2016; Drewes, 2017), in particular by Chiang et al. (2018), who generalised classic dynamic programming algorithms to DAG automata.", "But while Quernheim and Knight (2012) clearly intend for their weights to define probabilities, they stop short of claiming that they do, instead ending their paper with an open problem: Investigate a reasonable probabilistic model. 
We investigate probabilistic DAG automata and prove a surprising result: for some DAG automata, it is impossible to assign weights that define non-trivial probability distributions.", "We exhibit a very simple DAG automaton that generates an infinite language of graphs, and for which the only valid probability distribution that can be defined by weighting transitions is one in which the support is a single DAG, with all other graphs receiving a probability of zero.", "Our proof relies on the fact that a non-planar DAG automaton generates DAGs so prolifically that their number grows factorially in their size, rather than exponentially as in other automata.", "It holds for DAG automata that allow multiple roots or nodes of unbounded degree.", "But it breaks down when applied to the planar DAGs of Kamimura and Slutzki (1981), which are nevertheless too restrictive to model semantic graphs.", "Our result does not mean that it is impossible to define a probability distribution for the language that a DAG automaton generates.", "But it does mean that this distribution does not factor over the automaton's transitions, so crucial dynamic programming algorithms do not generalise to DAG automata that are expressive enough to model semantic graphs.", "We are interested in AMR graphs like the one below for \"Rahul bakes his cake\" (Figure 1, left), which represents entities and events as nodes, and relationships between them as edges.", "Both nodes and edges have labels, representing the type of an entity, event, or relationship.", "But the graphs we model will only have labels on nodes.", "These node-labeled graphs can simulate edge labels using a node with one incoming and one outgoing edge, as in the graph on the right of Figure 1. (Figure 1: A graph with both node and edge labels (left) and an equivalent graph with only node labels (right).)", "Definition 1. A node-labeled directed graph over a label set $\Sigma$ is a tuple $G = (V, E, lab, src, tar)$ where $V$ is a finite set of nodes, $E$ is a finite set of edges, $lab: V \to \Sigma$ is a function assigning labels to nodes, $src: E \to V$ is a function assigning a source node to every edge, and $tar: E \to V$ is a function assigning a target node to every edge.", "A node with no incoming edges is called a root, and a node with no outgoing edges is called a leaf.", "The degree of a node is the number of edges connected to it, so the degree of $v$ is $|IN(v) \cup OUT(v)|$, where $IN(v)$ and $OUT(v)$ are its sets of incoming and outgoing edges.", "A path in a directed graph from node $v$ to node $v'$ is a sequence of edges $(e_1, \ldots, 
e_n)$ where $src(e_1) = v$, $tar(e_n) = v'$, and $src(e_{i+1}) = tar(e_i)$ for all $i$ from 1 to $n-1$.", "A cycle in a directed graph is any path in which the first and last nodes are the same (i.e., $v = v'$).", "A DAG is connected if every pair of its nodes is connected by a sequence of edges, not necessarily directed.", "Because DAGs do not contain cycles, they must always have at least one root and one leaf, but they can have multiple roots and multiple leaves.", "However, our results apply in different ways to single-rooted and multi-rooted DAG languages, so, given a label set $\Sigma$, we distinguish between the set of all connected DAGs with a single root, $\mathcal{G}_1$; and those with one or more roots, $\mathcal{G}$.", "Finite automata generate strings by transitioning from state to state.", "Top-down tree automata generalise string finite automata by transitioning from a state to an ordered sequence of states, generating trees top-down from root to leaves; while bottom-up tree automata transition from an ordered sequence of states to a single state, generating trees bottom-up from leaves to root.", "The planar DAG automata of Kamimura and Slutzki (1981) generalise tree automata, transitioning from one ordered sequence of states to another ordered sequence of states (Section 4).", "Finally, the DAG automata of Quernheim and Knight (2012) transition from multisets of states to multisets of states, rather than from sequences to sequences, and this allows them to generate non-planar DAGs.", "We summarise the differences in Table 1 below.", "For the remainder of this section and the next, we will focus only on non-planar DAG automata, and when we refer to DAG automata, we mean this type.", "To formally define them, we need a notation for multisets: sets that can contain repeated elements.", "A multiset is a pair $(S, m)$ where $S$ is a finite set and $m: S \to \mathbb{N}$ is a count function; that is, $m(x)$ counts the number of times $x$ appears in the multiset.", "The set of all finite multisets over $S$ is $M(S)$.", "When we write multisets, we will often simply enumerate their elements.", "For example, $\{p, q, q\}$ is the multiset containing one $p$ and two 
Let A = ( Q, , T ) be a DAG automaton where Q = { p, p (cid:48) , q } , = { a, b, c, d, e } and the transitions in T are as follows: a { p } ( t 1 ) { p } b { p, q } ( t 2 ) { p } c { p (cid:48) } ( t 3 ) { p (cid:48) , q } d { p (cid:48) } ( t 4 ) { p (cid:48) } e ( t 5 ) 2.1.1 Generating Single-rooted DAGs A DAG automaton generates a graph from root to leaves.", "To illustrate this, we'll focus on the case where a DAG is allowed to have only a single root, and return to the multi-rooted case in Section 3.1.", "To generate the root, the DAG automaton can choose any transition with on its left-hand side these transitions behave like transitions from the start state in a finite automaton on strings, and always generate roots.", "In our example, the only available transition is t 1 , which generates a node labeled a with a dangling outgoing edge in state p , as in Figure", "2(i).", "The set of all such dangling edges is the frontier of a partially-generated DAG.", "While there are edges on the frontier, the DAG automation must choose and apply a transition whose left-hand side matches some subset of them.", "In our example, the automaton can choose either t 2 or t 3 , each matching the available p edge.", "The edges associated with the matched states are attached to a new node with new outgoing frontier edges specified by the transition, and the matched states are removed from the frontier.", "If our automaton chooses t 2 , it arrives at the configuration in Figure", "2(ii), with a new node labeled b , new edges on the frontier labeled p and q , and the incoming p state forgotten.", "Once again, it must choose between t 2 and t 3 it cannot use the q state because that state can only be used by t 4 , which also requires a p (cid:48) on the frontier.", "So each time it applies t 2 , the choice between t 2 and t 3 repeats.", "If the automaton applies t 2 again and then t 3 , as it has done in Figure", "2(iii), it will face a new set of choices, between t 4 and t 5 .", "But notice that choosing t 5 will leave the q states stranded, leaving a partially derived DAG.", "We consider a run of the automaton successful only when the frontier is empty, so this choice leads to a dead end.", "If the automaton chooses t 4 , it has an additional choice: it can combine p (cid:48) with either of the available q states.", "If it combines with the lowermost q , it arrives at the graph in Figure", "2(iv), and it can then apply t 4 to consume the remaining q , followed by t 5 , which has on its right-hand side.", "Transitions to behave like transitions to a final state in a finite automaton, and generate leaf nodes, so we arrive at the complete graph in Figure", "2(v).", "If the p (cid:48) state in Figure", "2(iii) had instead combined with the upper q , a different DAG would result, as shown in Figure 2(vi-vii).", "The DAGs in Figure", "2(v) and Figure", "2(vii) are planar, which means they can be drawn without crossing edges.", "1 But this DAG automaton can also produce non-planar DAGs like the one in Figure 3. 
To see that it is non-planar, we first contract each dotted edge by removing it and fusing its endpoints into a single node.", "This gives us the minor", "subgraph $K_{3,3}$,", "and any graph with a $K_{3,3}$ minor is non-planar (Wagner, 1937).", "We define the language generated by a DAG automaton in terms of recognition, which asks if an input DAG could have been generated by an input automaton.", "We recognise a DAG by finding a run of the automaton that could have generated it.", "To guess a run on a DAG, we guess a state for each of its edges, and then ask whether those states simulate a valid sequence of transitions.", "A run of a DAG automaton $A = (Q, \Sigma, T)$ on a DAG $G = (V, E, lab, src, tar)$ is a mapping $\rho: E \to Q$ from edges of $G$ to automaton states $Q$.", "We extend $\rho$ to multisets by saying $\rho(\{e_1, \ldots, e_n\}) = \{\rho(e_1), \ldots, \rho(e_n)\}$, and we call a run accepting if for all $v \in V$ there is a corresponding transition $\rho(IN(v)) \xrightarrow{lab(v)} \rho(OUT(v))$ in $T$.", "DAG $G$ is recognised by automaton $A$ if there is an accepting run of $A$ on $G$.", "Example 2. The DAGs in", "Figure 2(v) and", "Figure 2(vii) are recognised by the automaton in Example 1. The only accepting run for each DAG is denoted by the blue edge labels.", "2.2 Probability and Weighted DAG Automata. Definition 3. Given a language $L$ of DAGs, a probability distribution over $L$ is any function $p: L \to \mathbb{R}$ meeting two requirements: (R1) Every DAG must have a probability between 0 and 1, inclusive.", "Formally, we require that for all $G \in L$, $p(G) \in [0, 1]$.", "(R2)", "The probabilities of all DAGs must sum to one.", "Formally, we require $\sum_{G \in L} p(G) = 1$.", "R1 and R2 suffice to define a probability distribution, but in practice we need something stronger than R1, which we call R1': all DAGs must receive a non-zero weight, since in practical applications, objects with probability zero are effectively not in the language.", "While there are many ways to define a function that meets requirements R1' and R2, probability distributions in natural language processing are widely defined in terms of weighted automata or grammars, so we adapt a common definition of weighted grammars (Booth and Thompson, 1973) to DAG automata.", "Definition 5. A weighted DAG automaton is a pair $(A, w)$ where $A = (Q, \Sigma, T)$ is a DAG automaton and $w: T \to \mathbb{R}$ is a function that assigns real-valued weights to the transitions of $A$.", "Since weights are functions of transitions, we will write them on transitions following the node label and a slash (/).", "For example, if $\{p\} \xrightarrow{a} \{q\}$ is a transition and 2 is its weight, we write $\{p\} \xrightarrow{a/2} \{q\}$.", "Example 3. Let $(A, w)$ be a weighted DAG automaton with $A = (Q, \Sigma, T)$, where $Q = \{p, q\}$, $\Sigma = \{a, b, c\}$, and the weighted transitions of $T$ are as follows: $\emptyset \xrightarrow{a/0.5} \{p\}$ ($t'_1$), $\{p\} \xrightarrow{b/0.5} \{p\}$ ($t'_2$), and $\{p\} \xrightarrow{c/1} \emptyset$ ($t'_3$).", "Definition 6. Given a weighted DAG automaton $(A, w)$ and a DAG $G = (V, E, lab, src, tar)$ with an accepting run $\rho$, we extend $w$ to compute the weight of the run $w(\rho)$ by multiplying the weights of all of its transitions: $w(\rho) = \prod_{v \in V} w(\rho(IN(v)) \xrightarrow{lab(v)} \rho(OUT(v)))$. Example 4. 
The DAG automaton of Example 3 generates the DAG in Figure 4, shown with its only accepting run in blue and the weighted transitions that generated it in grey.", "The weight of the accepting run is", "0.5 ×", "0.5 ×", "0.5 × 1", "= 0.125.", "Let $R_A(G)$ be the set of all accepting runs of a DAG $G$ using the automaton $A$.", "We extend $w$ to calculate the weight of a DAG $G$ as the sum of the weights of all the runs that produce it: $w(G) = \sum_{\rho \in R_A(G)} w(\rho)$.", "While all weighted DAG automata assign real values to DAGs, not all weighted DAG automata define probability distributions.", "To do so, they must also satisfy requirements R1 and R2.", "Example 5. Consider the weighted automaton in Example 3. Every DAG generated by this automaton must use $t'_1$ and $t'_3$ exactly once, and can use $t'_2$ any number of times.", "If we let $G_n$ be the DAG that uses $t'_2$ exactly $n$ times, then the language $L$ defined by this automaton is $\bigcup_{n \in \mathbb{N}} G_n$.", "Since $w(G_n) = w(t'_1) w(t'_2)^n w(t'_3)$ and $w(t'_1)$, $w(t'_2)$ and $w(t'_3)$ are positive, $w$ satisfies R1 and: $\sum_{G \in L} w(G) = \sum_{n=0}^{\infty} w(G_n) = \sum_{n=0}^{\infty} w(t'_1) w(t'_2)^n w(t'_3) = \sum_{n=0}^{\infty} 0.5^{n+1} = 1$.", "Thus $w$ also satisfies R2, and the weighted automaton in Example 3 is probabilistic.", "Definition 8.", "A probabilistic automaton $(A, w)$ over language $L(A)$ is probabilistic with full support if and only if $w$ has full support of $L(A)$.", "For every finite automaton over strings or trees, there is a weighting of its transitions that makes it probabilistic (Booth and Thompson, 1973), and it is easy to show that it can be made probabilistic with full support.", "For example, string finite automata have full support if for every state the sum of weights on its outgoing transitions is 1 and each weight is greater than 0 (assuming no epsilon transitions; in our notation for DAG automata restricted to strings, this would include transitions to $\emptyset$, which correspond to states with a final probability of 1 (Mohri et al., 2008)). But as we will show, this is not always possible for DAG automata.", "We will exhibit a DAG automaton that generates factorially many DAGs for a given number of nodes, and we will show that for any nontrivial assignment of weights, this factorial growth rate causes the weight of all DAGs to sum to infinity.", "Theorem 1. Let $A$ be the automaton defined in Example 1. 
There is no $w$ that makes $(A, w)$ probabilistic with full support over $L_s(A)$.", "Proof.", "In any run of the automaton, transition $t_1$ is applied exactly once to generate the single root, placing a $p$ on the frontier.", "This gives a choice between $t_2$ and $t_3$.", "If the automaton chooses $t_2$, it keeps one $p$ on the frontier and adds a $q$, and must then repeat the same choice.", "Suppose it chooses $t_2$ exactly $n$ times in succession, and then chooses $t_3$.", "Then the frontier will contain $n$ edges in state $q$ and one in state $p'$.", "The only way to consume all of the frontier states is to apply transition $t_4$ exactly $n$ times, consuming a $q$ at each step, and then apply $t_5$ to consume $p'$ and complete the derivation.", "Hence in any accepting run, $t_1$, $t_3$ and $t_5$ are each applied once, and $t_2$ and $t_4$ are each applied $n$ times, for some $n \geq 0$.", "Since transitions map uniquely to node labels, it follows that every DAG in $L_s(A)$ will have exactly one node each labeled $a$, $c$, and $e$; and $n$ nodes each labeled $b$ and $d$.", "When the automaton applies $t_4$ for the first time, it has $n$ choices of $q$ states to consume, each distinguished by its unique path from the root.", "The second application of $t_4$ has $n-1$ choices of $q$, and the $i$-th application of $t_4$ has $n-(i-1)$ choices.", "Therefore, there are $n!$ different ways to consume the $q$ states, each producing a unique DAG.", "Let $f(n)$ be the weight of a run where $t_2$ has been applied $n$ times, and to simplify our notation, let $B = w(t_1) w(t_3) w(t_5)$, and $C = w(t_2) w(t_4)$.", "Let $c(n)$ be the number of unique runs where $t_2$ has been applied $n$ times.", "By the above: $f(n) = w(t_1) w(t_2)^n w(t_3) w(t_4)^n w(t_5) = BC^n$ and $c(n) = n!$.", "Now we claim that any DAG in $L_s(A)$ has exactly one accepting run, because the mapping of", "node labels to transitions also uniquely determines the state of each edge in an accepting run.", "For example, a $b$ node must result from a $t_2$ transition and a $d$ node from a $t_4$ transition, and since the output states of $t_2$ and input states of $t_4$ share only a $q$, any edge from a $b$ node to a $d$ node must be labeled $q$ in any accepting run.", "Now let $G \in L_s(A)$ be a DAG with $n$ nodes labeled $b$.", "Since $G$ has only one accepting run, we have $w(G) = f(n)$. Let $L_n$ be the set of all DAGs in $L_s(A)$ with $n$ nodes labeled $b$.", "Then $L_s(A) = \bigcup_{n=0}^{\infty} L_n$ and: $\sum_{G \in L_s(A)} w(G) = \sum_{n=0}^{\infty} \sum_{G \in L_n} w(G) = \sum_{n=0}^{\infty} c(n) f(n) = \sum_{n=0}^{\infty} (n!) 
(BC^n)$. Hence for $(A, w)$ to be probabilistic with full support, R1' and R2 require us to choose $B$ and $C$ so that, respectively, $BC^n \in (0, 1]$ for all $n$ and", "$\sum_{n=0}^{\infty} n! \, BC^n = 1$.", "Note that this does not constrain the component weights of $B$ or $C$ to be in $(0, 1]$; they can be any real numbers.", "But since R1' requires $BC^n$ to be positive for all $n$, both $B$ and $C$ must also be positive.", "If either were 0, then $BC^n$ would be 0 for $n > 0$; if either were negative, then $BC^n$ would be negative for some or all values of $n$.", "Now we show that any choice of positive $C$ causes $\sum_{G \in L_s(A)} w(G)$ to diverge.", "Given an infinite series of the form $\sum_{n=0}^{\infty} a_n$, the ratio test (D'Alembert, 1768) considers the ratio between adjacent terms in the limit, $\lim_{n \to \infty} |a_{n+1}| / |a_n|$.", "If this ratio is greater than 1, the series diverges; if less than 1, the series converges; if exactly 1, the test is inconclusive.", "In our case: $\lim_{n \to \infty} |(n+1)! \, BC^{n+1}| / |n! \, BC^n| = \lim_{n \to \infty} (n+1) C = \infty$. 
(2018).", "We do not know whether this could be made efficient.", "4 AMR annotations are single-rooted, but they achieve this by duplicating edges: every edge type, like ARG 0, has an inverse type, like ARG 0-O F .", "The number cited here assumes edges of the second type are converted to the first type by reversing their direction.", "with on the left-hand side) to a single use at the start of a derivation.", "To generate DAGs with multiple roots, we simply allow start transitions to be applied at any time.", "We still require the resulting DAGs to be connected.", "For an automaton A , we define its multi-rooted language L m ( A ) as { G G | A recognises G } .", "Although one automaton can define both single-and multi-rooted DAG languages, these languages are incomparable.", "Drewes (2017) uses a construction very similar to the one in Theorem 1 to show that single-rooted languages have very expressive path languages, which he argues are too expressive for modeling semantics.", "5 Since the constructions are so similar, it natural to wonder if the problem that single-rooted automata have with probabilities is related to their problem with expressivity, and whether it likewise disappears when we allow multiple roots.", "We now show that multi-rooted languages have the same problem with probability, because any multi-rooted language contains the single-rooted language as a sublanguage.", "Corollary 1. Let A be the automaton defined in Example 1. There is no w that makes ( A , w ) probabilistic with full support over L m ( A ) .", "Proof.", "By their definitions, L s ( A ) L m ( A ) , so: (cid:88) G L m ( A ) w ( G ) = (cid:88) G L s ( A ) w ( G ) + (cid:88) G L m ( A ) \\ L s ( A ) w ( G ) The first term is by Theorem 1 and the second is positive by R1', so the sum diverges.", "Hence there is no w for which ( A , w ) is probabilistic with full support over L m ( A ) .", "The maximum degree of any node in any DAG recognised by a DAG automaton is bounded by the maximum number of states in any transition, because any transition generates a node with | | incoming edges and | | outgoing edges.", "So, the families of DAG languages we have considered all have bounded degree.", "5 The path language of a DAG is the set of strings that label a path from a root to a leaf, and the path language of a DAG language is the set of all such strings over all DAGs.", "For example, the path language of the DAG in Figure", "2(v) is { abde, abbdde, abbcdde } .", "Berglund et al. (2017) show that path languages of multi-rooted DAG automata are regular, while those of single-rooted DAG automata characterised by a partially blind multi-counter automaton.", "DAG languages with unbounded degree could be useful to model phenomena like coreference in meaning representations, and they have been studied by Quernheim and Knight (2012) and Chiang et al. 
(2018).", "These families generalise and strictly contain the family of bounded-degree DAG languages, so they too, include DAG automata that cannot be made probabilistic.", "We introduced DAG automata as a tool for modeling the meaning of natural language, but the DAG automaton in Theorem 1 is very artificial, so it's natural to ask whether it has any real relevance to natural language.", "We will argue informally that this example illustrates a pervasive problem with DAG automataspecifically, we conjecture that the factorial growth we observe in Theorem 1 arises under very mild conditions that arise naturally in models of AMR.", "Consider object control in a sentence like I help Ruby help you and its AMR in Figure 5. help I help Ruby you ARG 1 ARG 0 ARG 2 ARG 0 ARG 2 Figure 5: The AMR for I help Ruby help you.", "We can extend the control structure unboundedly with additional helpers, as in I help Briony help Kim-Joy help Ruby help you, and this leads to unboundedly long repetitive graphs like the one in Figure 6. These graphs can be cut to separate the sequence of help predicates from their arguments, as illustrated by the dashed blue line.", "I help Briony help Kim-Joy help Ruby help you (cid:7) ARG 1 ARG 0 ARG 2 ARG 1 ARG 0 ARG 2 ARG 1 ARG 0 ARG 2 ARG 0 ARG 2 Figure 6: The AMR for I help Briony help Kim-Joy help Ruby help you shown with a cut.", "Let a cut be a set of edges such that removing them splits the graph into two connected subgraphs: one containing the root, and the other containing all the leaves.", "Any cut in a complete graph could have been the frontier of a partially-derived graph.", "What if the number of edges in a cutor cut-width can be unbounded, as in the language of AMR graphs that model object control?", "Since a DAG automaton can have only a finite number of states, there is some state that can occur unboundedly many times in a graph cut.", "All edges in a cut with this state can be rewired by permuting their target nodes, and the resulting graph will still be recognised by the automaton, since the rewiring would not change the multiset of states into or out of any node.", "If each possible rewiring results in a unique graph then the number of recognised graphs will be factorial in the number of source nodes for these edges, and the argument of Theorem 1 can be generalised to show that no weighting of any DAG automaton over the graph language makes it probabilistic with full support.", "For example, in the graph above, all possible rewirings of the ARG 2 edges result in a unique graph.", "6 Although edge labels are not states, their translation into node labels implies that they can only be associated to a finite number of transitions, hence to a finite number of states in any corresponding DAG automaton.", "A full investigation of conditions under which Theorem 1 generalises is beyond the scope of this paper.", "Conjecture 1. 
Under mild conditions, if language L ( A ) of a DAG automaton A has unbounded cut-width, there is no w that makes ( A, w ) probabilistic with full support.", "The fundamental problem with trying to assign probabilities to non-planar DAG automata is the factorial growth in the number of DAGs with respect to the number of nodes.", "Does this problem occur in planar DAG automata?", "Planar DAG automata are similar to the DAG automata of Section 2 but with an important difference: they transition between ordered sequences of states rather than unordered multisets of states.", "We write these sequences in parentheses, and their order matters: ( p, q ) differs from ( q, p ) .", "We write (cid:15) for the empty sequence.", "When a planar DAG automaton generates DAGs, it keeps a strict order over the set of frontier states at all times.", "A transition whose left-hand side is ( p, q ) can only be applied to adjacent states p and q in the frontier, with 6 This is also a problem linguistically, since many of the rewired graphs no longer model object control.", "p preceding q .", "The matched states are replaced in the frontier by the sequence of states in the transi-tion's right-hand side, maintaining order.", "In the non-planar case, n applications of t 2 can generate n ! unique DAGs, but n applications of the corresponding transition t (cid:48)(cid:48) 2 in this automaton can only generate one DAG.", "To see this, consider the partially derived DAG on the left of Figure 7, with its frontier drawn in order from left to right.", "The p (cid:48) state can only combine with the q state immediately to its right, and since dead-ends are not allowed, the only possible choice is to apply t (cid:48)(cid:48) 4 twice followed by t (cid:48)(cid:48) 5 , so the DAG on the right is the only possible completion of the derivation.", "This automaton is probabilistic when w ( t (cid:48)(cid:48) 1 ) = w ( t (cid:48)(cid:48) 2 ) = 1 / 2 , w ( t (cid:48)(cid:48) 3 ) = w ( t (cid:48)(cid:48) 4 ) = w ( t (cid:48)(cid:48) 5 ) = 1 , and indeed the argument in Theorem 1 does not apply to planar automata since the number of applicable transitions is linear in the size of the frontier.", "But planar DAG automata have other problems that make them unsuitable for modeling AMR.", "The first problem is that there are natural language constructions that naturally produce nonplanar DAGs in AMR.", "For example, consider the sentence Four contestants mixed, baked and ate cake.", "Its AMR, shown in Figure 8, is not planar because it has a K 3 , 3 minor, and it is easy to see from this example that any coordination of three predicates sharing two arguments produces this structure.", "In the first release of AMR, 117 out of 12844 DAGs are non-planar.", "The second problem is that planar DAG au-and bake mix eat contestant 4 cake OP 1 OP 2 OP 3 ARG 0 ARG 1 ARG 0 ARG 1 ARG 0 ARG 1 QUANTITY Figure 8: An AMR for the sentence Four contestants mixed, baked and ate a cake.", "tomata model Type-0 string derivations by design (Kamimura and Slutzki, 1981).", "This seems more expressive than needed to model natural language and means that many important decision problems are undecidablefor example, emptiness, which is decidable in polynomial time for non-planar DAG automata (Chiang et al., 2018).", "Table 2 summarises the properties of several different variants of DAG automata.", "It has been argued that all of these properties are desirable for probabilistic models of meaning representations (Drewes, 2017).", "Since none of the variants supports 
Since none of the variants supports all properties, this suggests that no variant of the DAG automaton is a good candidate for modeling meaning representations. We believe other formalisms may be more suitable, including several subfamilies of hyperedge replacement grammars (Drewes et al., 1997) that have recently been proposed (Björklund et al., 2016; Matheja et al., 2015; Gilroy et al., 2017).

This work was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1) and the University of Edinburgh. We thank Esma Balkir, Nikolay Bogoychev, Shay Cohen, Marco Damonte, Federico Fancellu, Joana Ribeiro, Nathan Schneider, Milos Stanojevic, Ida Szubert, Clara Vania, and the anonymous reviewers for helpful discussion of this work and comments on previous drafts of the paper.
[ "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Meta-learning promises few-shot learners that quickly adapt to new distributions by repurposing knowledge acquired from previous training.", "However, we believe meta-learning has not yet succeeded in NLP due to the lack of a well-defined task distribution, leading to attempts that treat datasets as tasks.", "Such an ad hoc task distribution causes problems of quantity and quality.", "Since there's only a handful of datasets for any NLP problem, meta-learners tend to overfit their adaptation mechanism and, since NLP datasets are highly heterogeneous, many learning episodes have poor transfer between their support and query sets, which discourages the meta-learner from adapting.", "To alleviate these issues, we propose DRECA ( D ecomposing datasets into Re asoning Ca tegories), a simple method for discovering and using latent reasoning categories in a dataset, to form additional high quality tasks.", "DRECA works by splitting examples into label groups, embedding them with a finetuned BERT model and then clustering each group into reasoning categories.", "Across four few-shot NLI problems, we demonstrate that using DRECA improves the accuracy of meta-learners by 1.54%.", "A key desideratum for human-like understanding is few-shot adaptation.", "Adaptation is central to many NLP applications since new concepts and words appear often, leading to distribution shifts.", "People can effortlessly deal with these distribution shifts by learning these new concepts quickly and we would like our models to have similar capabilities.", "While finetuning large pre-trained transformers is one way to facilitate this adaptation, this procedure requires thousands of samples where humans might require only a few.", "Can these pre-trained transformers be made to achieve few-shot adaptation?", "One promising direction is meta-learning (Schmidhuber, 1987; Ben-Figure 1: Overview of our approach.", "We embed all examples with BERT, and then cluster within each label group separately (red and green correspond to entailment and not_entailment respectively).", "Then, we group clusters from distinct label groups to form tasks.", "gio et al., 1997).", "Meta-learning promises few-shot classifiers that can adapt to new tasks by repurposing skills acquired from training tasks.", "An important prerequisite for successful application of meta-learning is a task-distribution from which a large number of tasks can be sampled to train the meta-learner.", "While meta-learning is very appealing, applications in NLP have thus far proven challenging due to the absence of a well-defined set of tasks that correspond to re-usable skills.", "This has led to less effective ad hoc alternatives, like treating entire datasets as tasks.", "Treating entire datasets as tasks has two major issues.", "The first issue is learner overfitting (Rajen-dran et al., 2020), where a meta-learner overfits its adaptation mechanism to the small number of training tasks, since there's only a small number of supervised datasets available for any NLP problem.", "Second, the heterogeneity of NLP datasets can lead to learning episodes that encourage memorization overfitting (Yin et al., 2020; Rajendran et al., 2020), a phenomenon where a meta-learner ignores the support set, and doesn't learn to adapt.", "datasets into Re asoning Ca tegories or DRECA", "DRECA is a meta data augmentation strategy that takes as input a set of tasks (entire datasets), and then decomposes them to approximately recover some of the latent reasoning categories underlying these 
These reasoning categories are then used to construct additional few-shot classification tasks, augmenting the original task distribution. We illustrate these steps in Fig. 1. DRECA first embeds the examples using a BERT model finetuned over all the datasets. We then run k-means clustering over these representations to produce a refinement of the original tasks.

Experiments demonstrate the effectiveness of our simple approach. As a proof of concept, we adapt the classic sine-wave regression problem from Finn et al. (2017) to mimic the challenges of the NLP setting, and observe that standard meta-learning procedures fail to adapt. However, a model that meta-learns over the underlying reasoning types shows a substantial improvement. Then, we consider the problem of natural language inference (NLI). We show that meta-learners augmented with DRECA improve over baselines by 1.54 accuracy points across four separate NLI few-shot problems, without requiring domain-specific engineering or additional unlabeled data.

Few-shot learning in NLP. The goal of learning from few examples has been studied for various NLP applications. Common settings include few-shot adaptation to new relations (Han et al., 2018), words (Holla et al., 2020), domains (Bao et al., 2020; Yu et al., 2018; Geng et al., 2019), and language pairs (Gu et al., 2018). Since these applications come with well-defined task distributions, they do not have the same overfitting challenges. On the other hand, many works deal with few-shot adaptation in settings with no clear task distribution (Dou et al., 2019; Bansal et al., 2020a) but do not address meta-overfitting, and are thus complementary to our work.

Overfitting and Task Augmentation. The memorization problem in meta-learning is studied by Yin et al. (2020), who propose a meta-regularizer to mitigate memorization overfitting but do not study learner overfitting. Task augmentation for mitigating overfitting in meta-learners is first studied in Rajendran et al. (2020) in the context of few-shot label adaptation. Hsu et al. (2019) propose CACTUs, a clustering-based approach for unsupervised meta-learning in the context of few-shot label adaptation for images. While also based on clustering, CACTUs creates meta-learning tasks where the goal is to predict the cluster membership of images, whereas our work is focused on using clusters to subdivide pre-existing tasks for mitigating meta-overfitting in NLP. Most closely related to our work is the SMLMT method from Bansal et al. (2020b).
(2020b).", "SMLMT creates new self-supervised tasks that improve meta-overfitting but this does not directly address the dataset-as-tasks problem we identify.", "In contrast, we focus on using clustering as a way to subdivide and fix tasks that already exist.", "This approach allows us to mitigate meta-overfitting without additional unlabeled data.", "In Section 6, we compare our model against SMLMT, and demonstrate comparable or better performance.", "We consider the problem of Natural Language Inference or NLI (MacCartney and Manning, 2008; Bowman et al., 2015), also known as Recognising Textual Entailment (RTE) (Dagan et al., 2005).", "Given a sentence pair x = ( p, h ) where p is referred to as the premise sentence, and h is the hypothesis sentence, the goal is to output a binary label 1 y { 0 , 1 } indicating whether the hypothesis h is entailed by the premise p or not.", "For instance, the sentence pair (The dog barked, The animal barked) is classified as entailed, whereas the sentence pair (The dog barked, The labrador barked) would be classified as not entailed.", "As shown in Table 1, NLI datasets typically encompass a broad range of linguistic phenomena.", "Apart from the reasoning types shown in Table 1, examples may also vary in terms of their genre, syntax, annotator writing style etc. leading to extensive linguistic variability.", "Taken together, these factors of variation make NLI datasets highly heterogeneous.", "The goal of meta-learning is to output a meta-learner f : ( S i , x iq ) (cid:55) y that takes as input a support set S i of labeled examples and a query point", "x iq and returns a prediction y .", "In the usual meta-learning setting, these support and query sets are defined as samples from a task T i , which is a collection of labeled examples { ( x i , y i ) } .", "In N -way k -shot adaptation, each T i is an N -way classification problem, and f is given k examples per label to adapt.", "A simple baseline for meta-learning is to train a supervised model on labeled data from training tasks, and then finetune it at test time on the support set.", "This can be powerful, but is ineffective for very small support sets.", "A better alternative is episodic meta-learning, which explicitly trains models to adapt using training tasks Episodic Training.", "In the standard setup for training episodic meta-learners, we are given a collection of training tasks.", "We assume that both train and test tasks are i.i.d. draws from a task distribution p ( T ) .", "For each training task T tr i p ( T ) , we create learning episodes which are used to train the meta-learner.", "Each learning episode consists of a support set S i = { ( x is , y is ) } and a query set Q i = { ( x iq , y iq ) } .", "The goal of episodic meta-learning is to ensure that the meta-learning loss L ( f ( S i , x iq ) , y iq ) is small on training tasks T tr i .", "Since train tasks are i.i.d. 
Several algorithms have been proposed for meta-learning that follow this general setup, such as Matching Networks (Vinyals et al., 2016), MANN (Santoro et al., 2016), Prototypical Networks (Snell et al., 2017) and MAML (Finn et al., 2017). In this work, we use MAML as our meta-learner. The goal of MAML is to produce an initialization θ such that, after performing gradient descent on h_θ using S_i, the updated model h_{θ'_i} can make accurate predictions on Q_i. MAML consists of an inner loop and an outer loop. In the inner loop, the support set S_i is used to update the model parameters θ, to obtain task-specific parameters θ'_i:

$\theta'_i = \theta - \alpha \nabla_\theta \sum_{(x^i_s, y^i_s) \in S_i} L(h_\theta(x^i_s), y^i_s).$

These task-specific parameters are then used to make predictions on Q_i. The outer loop takes gradient steps over θ such that the task-specific parameters θ'_i perform well on Q_i. Since θ'_i is itself a differentiable function of θ, we can perform this outer optimization using gradient descent:

$\theta \leftarrow \mathrm{Opt}\Big(\theta,\ \nabla_\theta \sum_{(x^i_q, y^i_q) \in Q_i} L(h_{\theta'_i}(x^i_q), y^i_q)\Big),$

where Opt is an optimization algorithm, typically chosen to be Adam. The outer loop gradient is typically computed in a mini-batch fashion by sampling a batch of episodes from distinct training tasks. The gradient $\nabla_\theta L(h_{\theta'_i}(x^i_q), y^i_q)$ involves back-propagation through the adaptation step, which requires computing higher-order gradients. This can be computationally expensive, so a first-order approximation (FoMAML),

$\nabla_\theta L(h_{\theta'_i}(x^i_q), y^i_q) \approx \nabla_{\theta'_i} L(h_{\theta'_i}(x^i_q), y^i_q),$ (4)

is often used instead (Finn et al., 2017).

As mentioned earlier, training tasks in NLP are often entire datasets, leading to a small number of heterogeneous training tasks. Thus, to train a meta-learner for NLI, our training tasks T^tr_i are NLI datasets. At test time, we are given new datasets that we must adapt to, given a support set of randomly drawn examples from each dataset.

Meta Overfitting. Consider learning episodes sampled from an NLI dataset (Table 2). NLI datasets cover a wide range of linguistic phenomena, and so we expect an episode to comprise a diverse set of reasoning categories. Such heterogeneous episodes can lead to scenarios where the support and query sets do not have any overlap in reasoning skills, causing the model to ignore the support set. This is known as memorization overfitting. Moreover, since we have a limited number of datasets, the meta-learner is exposed to a very small number of tasks at meta-training time, causing it to generalize poorly to test tasks. This is known as learner overfitting (Rajendran et al., 2020).
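The first-order update in Equation 4 can be sketched for a toy linear model under squared loss (our illustration only; the paper parameterizes h with BERT and uses Adam as Opt, whereas this sketch uses plain gradient steps):

```python
import numpy as np

def fomaml_step(theta, episodes, inner_lr=0.01, outer_lr=0.001, inner_steps=10):
    """One FoMAML update for a linear model h(x) = x @ theta with squared loss."""
    outer_grad = np.zeros_like(theta)
    for (xs, ys), (xq, yq) in episodes:
        # Inner loop: adapt theta on the support set S_i.
        theta_i = theta.copy()
        for _ in range(inner_steps):
            grad = 2 * xs.T @ (xs @ theta_i - ys) / len(ys)
            theta_i -= inner_lr * grad
        # First-order approximation: the query-set gradient taken at theta_i
        # stands in for the full second-order gradient with respect to theta.
        outer_grad += 2 * xq.T @ (xq @ theta_i - yq) / len(yq)
    return theta - outer_lr * outer_grad / len(episodes)

# Toy usage with two identical (support, query) episodes.
rng = np.random.default_rng(0)
theta = rng.normal(size=3)
episode = ((rng.normal(size=(4, 3)), rng.normal(size=4)),
           (rng.normal(size=(4, 3)), rng.normal(size=4)))
theta = fomaml_step(theta, [episode, episode])
```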
Dataset. Consider the sine-wave regression problem from Finn et al. (2017), where each task corresponds to learning a sine wave mapping with a fixed amplitude and phase offset. As shown in Fig. 2(a), each support and query set consists of points drawn from the same sine wave mapping. The key observation here is that, since support and query examples are drawn from the same mapping, we might expect a meta-learner to use the support set for adaptation. In the NLP case, since tasks are heterogeneous, support and query examples may belong to different reasoning categories. We instantiate this by letting support and query points come from different sine waves (Fig. 2(b)).

[Figure 2(b): Three datasets from our 2D sine wave regression. Each dataset is a unit square with multiple reasoning categories; a reasoning category is a distinct sinusoid along a ray that maps x = (x_1, x_2) to the value of the sine wave y at that point.]

Each dataset is a unit square sampled from a 10 × 10 grid over x_1 ∈ [−5, 5] and x_2 ∈ [−5, 5]. Within each dataset, we construct multiple reasoning categories by defining each reasoning category to be a sine wave with a distinct phase offset. This is illustrated in Fig. 2(b), where each unit square represents a dataset, and sine waves along distinct rays correspond to reasoning categories. The target label y for the regression task is defined for each category by a randomly sampled phase φ ∈ [0.1, 2π] and y = sin(φ + ‖x − ⌊x⌋‖₂). At meta-training time, we sample a subset of these 100 squares as our training datasets, and then evaluate few-shot adaptation to reasoning categories from held-out datasets at meta-test time.

We start by considering MAML-BASE, a meta-learner that is trained directly over a dataset-based task distribution. Concretely, we define each training task as a dataset and randomly sample episodes to train the meta-learner. Note that, since episodes are drawn uniformly at random from an entire dataset, we expect support and query sets to often contain points from disjoint reasoning categories (Fig. 2(b)), making adaptation infeasible. Thus, we expect pre- and post-adaptation losses to be similar, which is indeed reflected in the learning curves in Fig. 3(a). We observe that the orange and blue lines, corresponding to pre- and post-adaptation losses respectively, almost overlap. In other words, the meta-learner ignores the support set entirely. This is what we mean by memorization overfitting.

Next we consider MAML-ORACLE, a meta-learner that is trained on tasks based on the underlying reasoning categories, i.e., distinct sine waves. Consequently, support and query sets are both drawn from the same sine wave, similar to Finn et al. (2017), making adaptation feasible. From Fig. 3(b), we observe large gaps between pre- and post-adaptation losses, which indicates that memorization overfitting has been mitigated. These experiments confirm our hypothesis about the challenges of meta-learning with heterogeneous task distributions. Since NLI datasets require a wide range of skills, we might expect similar challenges on few-shot NLI as well.
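A minimal sketch of this toy data generation follows (our reconstruction; the exact target formula is garbled in our source, so the sine target below, with the phase added to the distance from the square's corner, is an assumption based on the description):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n_categories=5, n_points=50):
    """One toy dataset: a unit square from the 10 x 10 grid over [-5, 5]^2,
    holding several reasoning categories (sine waves with distinct phases)."""
    corner = rng.integers(-5, 5, size=2).astype(float)  # integer lower-left corner
    categories = []
    for _ in range(n_categories):
        phase = rng.uniform(0.1, 2 * np.pi)
        x = corner + rng.uniform(0.0, 1.0, size=(n_points, 2))
        # For integer corners, floor(x) recovers the corner, so the target
        # depends on the in-square position, offset by the category's phase.
        y = np.sin(phase + np.linalg.norm(x - np.floor(x), axis=1))
        categories.append((x, y))
    return categories
```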
In this section, we introduce our approach for extracting reasoning categories for NLI. The key observation here is that high quality sentence pair representations, such as those obtained from a finetuned BERT model, can bring out the microstructure of NLI datasets. Indeed, the fact that pretrained transformers can be used to create meaningful clusters has been shown in other recent works (cf. Aharoni and Goldberg (2020); Joshi et al. (2020)).

At a high level, the goal of DRECA is to take a heterogeneous task (such as a dataset) and produce a decomposed set of tasks. In doing so, we hope to obtain a large number of relatively homogeneous tasks that can prevent meta-overfitting. Given a training task T^tr_i, we first group examples by their labels, and then embed the examples within each group with an embedding function EMBED(·). Concretely, for each N-way classification task T^tr_i we form groups g_il = {(EMBED(x^p_i), y^p_i) | y^p_i = l}. Then, we proceed to refine each label group into K clusters via k-means clustering, breaking T^tr_i down into groups {C_j(g_il)} for j = 1, ..., K and l = 1, 2, ..., N. These cluster groups can be used to produce K^N potential DRECA tasks.² Each task is obtained by choosing one of the K clusters for each of the N label groups and taking their union. At meta-training time, learning episodes are sampled uniformly at random from DRECA tasks with probability α and from one of the original tasks with probability 1 − α.

² Note that we do not instantiate the K^N tasks. Instead, we simply sample an episode from randomly chosen clusters from each label group.

Since our clustering procedure is based on finetuned BERT vectors, we expect the resulting clusters to roughly correspond to distinct reasoning categories. Indeed, when the true reasoning categories are known, we show in Section 7.2 that DRECA yields clusters that recover these reasoning categories almost exactly.
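The following sketch (our illustration with hypothetical names; `embed` stands in for the finetuned BERT encoder) shows the two DRECA steps, per-label k-means clustering followed by task construction by picking one cluster per label group:

```python
from collections import defaultdict
import numpy as np
from sklearn.cluster import KMeans

def dreca_clusters(examples, embed, k=3, seed=0):
    """Group examples by label, then k-means-cluster each label group
    over its embeddings, returning per-label lists of clusters."""
    groups = defaultdict(list)
    for x, y in examples:
        groups[y].append(x)
    clusters = {}
    for label, xs in groups.items():
        emb = np.stack([embed(x) for x in xs])
        ids = KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(emb)
        clusters[label] = [[(x, label) for x, c in zip(xs, ids) if c == j]
                           for j in range(k)]
    return clusters

def sample_dreca_task(clusters, rng):
    """Sample one of the K^N refined tasks: one cluster per label group."""
    return [ex for label_clusters in clusters.values()
            for ex in label_clusters[rng.integers(len(label_clusters))]]
```

As in the paper, the K^N tasks are never materialized; each training episode simply draws one cluster per label group on the fly.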
We evaluate DRECA on 4 NLI few-shot learning problems, which we describe below (more details in Appendix A.2.1). The first problem is based on synthetic data, while the other 3 problems are on real datasets and are hence a good demonstration of the utility of our proposal.

HANS-FEWSHOT is a few-shot classification problem over HANS (McCoy et al., 2019), a synthetic diagnostic dataset for NLI. Each example in HANS comes from a hand-designed syntactic template which is associated with a fixed label (entailment or not_entailment). The entire dataset consists of 30 such templates, which we use to define 15 reasoning categories. We then hold out 5 of these for evaluation, and train on the remaining 10. While this is a simple setting, it allows us to compare DRECA against an "oracle" with access to the underlying reasoning categories.

COMBINEDNLI consists of a combination of 3 NLI datasets for training: MultiNLI (Williams et al., 2018), the Diverse Natural Language Inference Collection (DNC; Poliak et al. (2018)), and Semantic Fragments (Richardson et al., 2020). These training datasets cover a broad range of NLI phenomena. MultiNLI consists of crowdsourced examples, DNC consists of various semantic annotations from NLP datasets re-cast into NLI, and Semantic Fragments is a synthetic NLI dataset covering logical and monotonicity reasoning. Our objective is to train a single meta-learner that can then be used to make predictions on diverse NLP problems recast as NLI. To this end, we evaluate models trained on COMBINEDNLI on 2 datasets. In COMBINEDNLI-RTE, we evaluate on the RTE datasets (Dagan et al., 2005; Bar-Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009) as provided in GLUE (Wang et al., 2019). The RTE datasets consist of various IE and QA datasets recast as NLI. Second, we consider the QANLI dataset (Demszky et al., 2018), which recasts question answering into NLI. In particular, we consider RACE (Lai et al., 2017) and use the gold annotations provided in Demszky et al. (2018) to transform it into an NLI dataset.

The final setting is GLUE-SciTail, where we train on all NLI datasets from GLUE (Wang et al., 2019) and evaluate on SciTail (Khot et al., 2018). This setting is comparable to Bansal et al. (2020b), with the difference that we only meta-train on the NLI subset of GLUE, whereas they meta-train on all GLUE tasks. We follow the same evaluation protocol as Bansal et al. (2020b) and report 2-way 4-shot accuracy.

Non-Episodic Baselines. All non-episodic baselines train h on the union of all examples from each T^tr_i. In MULTITASK (FINETUNE), we additionally finetune the trained model on the support set of each test task. In MULTITASK (K-NN), each query example in the test task is labeled according to the nearest neighbor of the example in the support set. Finally, in MULTITASK (FINETUNE + K-NN), we first finetune the trained model on the support set and then label each query example based on its nearest neighbor in the support set.

Episodic Meta-learners. MAML-BASE is a MAML model where every task corresponds to a dataset. In the HANS-FEWSHOT setting, where the underlying reasoning categories are known, we also compare with an oracle model, MAML-ORACLE, which is trained over a mixture of dataset-based tasks as well as oracle reasoning categories. Finally, MAML-DRECA is our model, which trains MAML over a mixture of the original dataset-based tasks as well as the augmented tasks from DRECA.

Evaluation. To control for variations across different support sets, we sample 510 random support sets for each test task. We finetune each of our models on these support sets and report means and 95% confidence intervals, assuming the accuracies follow a Gaussian.

Training Details. We use first-order MAML (FoMAML) for computational efficiency. We use BERT-base as provided in the transformers library (Wolf et al., 2019) as the parameterization for h and EMBED(·). The meta-training inner loop optimization involves 10 gradient steps with Adam, with a support set of 2 examples (2-way 1-shot) for all settings except GLUE-SciTail, where the support set size is 8 (2-way 4-shot). We experiment with 4-shot adaptation on GLUE-SciTail to match the evaluation setup from Bansal et al. (2020b).
(2020b).", "The mixing weight is set to 0.5 for all our experiments.", "More details can be found in Appendix A.2.2.", "Results.", "We report results on the synthetic HANS-FEWSHOT setting in Table 4, where we find that DRECA improves over all baselines.", "In particular, we observe an improvement of +6.94 over MULTITASK (FINETUNE + K-NN) and +4.3 over MAML-BASE .", "Moreover, we observe that MAML-DRECA obtains a comparable accuracy as MAML-ORACLE .", "Next, we report results on our 3 real NLI settings in Table 3.", "Again, we find that DRECA improves model performance across all 3 settings: MAML-DRECA improves over MAML-BASE by +2.5 points on COMBINEDNLI-QANLI, +2.7 points on COMBINEDNLI-RTE and +1.6 points on GLUE-SciTail.", "On GLUE-SciTail, we compare against SMLMT (Bansal et al., 2020b) and find that MAML-DRECA improves over it by 1.5 accuracy points.", "However, we note that the confidence intervals of these approaches overlap, and also that (Bansal et al., 2020a) consider the entire GLUE data to train the meta-learner whereas we only consider NLI datasets within GLUE.", "We start by visualizing finetuned BERT embed-dings used by DRECA for HANS-FEWSHOT .", "As mentioned earlier, HANS consists of 30 manually defined syntactic templates which can be grouped into 15 reasoning categories.", "Following the procedure for EMBED () (details in Appendix A.2.2), we finetune BERT (Devlin et al., 2019) for 5000 randomly chosen examples from HANS.", "To obtain a vector representation for each example x = ( p, h ) , we concatenate the vector at the [CLS] token, along with a mean pooled representation of the premise and hypothesis.", "We then use t-SNE (Maaten and Hinton, 2008) to project these representations onto 2 dimensions.", "Each point in Fig. 4 is colored with its corresponding reasoning category, and we can observe a clear clustering of examples according Figure 4: t-SNE plot of BERT vectors after finetuning on HANS.", "To understand if reasoning categories can be accurately recovered with our approach, we measure the purity of DRECA clusters for HANS-FEWSHOT where true reasoning categories are known.", "This is evaluated by computing the number of examples belonging to the majority reasoning type for each cluster and then dividing by the total number of examples.", "From Table 5, we observe high cluster purity which provides evidence that DRECA is able to recover true reasoning categories.", "We seek to understand how different linguistic phenomena present in the overall population are distributed among various clusters.", "To perform this analysis, we focus on MultiNLI annotation tags from Williams et al. (2018).", "A subset of examples in MultiNLI are assigned tags based on the presence of certain keywords, e.g., time words like days of the week; quantifiers like every, each, some; negation words like no, not, never.", "Additionally, certain tags are assigned based on the PTB (Marcus et al., 1993) parses of examples, e.g., presence or absence of adjectives/adverbs etc.", "For each annotation tag, we compute the fraction of examples labeled with that tag in each cluster.", "We visualize this for 10 annotation tags and indicate statistically significant deviations from the averages in Fig. 5. 
Statistical significance is measured with binomial testing with a Bonferroni correction to account for multiple testing. For every annotation tag, we shade all clusters that contain a statistically significant deviation from the mean. For instance, there is a positive cluster with a 2.5-fold enrichment in Negation tags compared to the average, and a negative cluster that contains over 4 times the population average of Negation (Hyp only) tags. Similarly, among Conditionals, we have positive clusters that contain 1.4 times the population average and a negative cluster containing half the population average. Interestingly, we find most positive clusters to be significantly impoverished in Adverb (Hyp only) tags, while most negative clusters are enriched in these tags. This analysis presents evidence that the clusters used by DRECA localize linguistic phenomena to a small number of clusters.

Comparing with CACTUs. Our work is most similar to CACTUs from Hsu et al. (2019). Apart from differences in the modality considered (text vs. images), we differ in the following ways. Conceptually, Hsu et al. (2019) consider a fully unsupervised meta-learning setting where no labels are provided and use cluster IDs to induce labels, while our goal is to produce additional tasks in a supervised meta-learning setting. Second, CACTUs tasks are constructed by directly applying k-means to the entire training dataset, while we apply k-means separately to each label group and construct tasks by choosing a cluster from each label group, leading to tasks with a uniform label distribution. Finally, while CACTUs uses the constructed tasks directly, our work uses them to augment the original task distribution.

Number of examples in the support set. All evaluation in this work considers small support sets, where the number of examples per label ranges from 1 to 4. This setting is somewhat restrictive since, in practice, one might be able to get a few hundred examples for the target domain. These moderately sized support sets could themselves be heterogeneous, in which case adapting a single learner might be hard. In such cases, we can use a similar clustering approach to separate the support set into homogeneous tasks and adapt a separate learner for each task. These learners could then be plugged into a mixture of experts framework for making predictions.

Using k-means to produce task refinements. While we are able to get sufficiently homogeneous clusters with k-means, we note one shortcoming of this approach. Any input has multiple attributes / factors of variation, and it may be possible to create a clustering for each factor. The current k-means-based approach does not model this, since we only produce a single clustering of the data. For instance, x_1 = (The man was walking in the park ⊨ The man is not at home) and x_2 = (He went with his friends to the mall ⊨ He is not at work) can belong to the same cluster if the underlying metric is based on reasoning types. At the same time, x_1 could also be clustered with x_3 = (The man was walking in the park ⊭ The woman is in the park) if the distance metric is based on lexical similarity. A promising direction for future work is to explore multi-clusterings based on the various factors of variation present in the training data.

Non-meta-learning based few-shot adaptation. In this work, we use tools from meta-learning to directly optimize for few-shot behavior. While not directly comparable to us, there have been many recent approaches to few-shot adaptation for NLP that do not use meta-learning.
Brown et al. (2020) show impressive few-shot adaptation in large language models through "in-context learning", which is presumably acquired only through the language modeling objective. Schick and Schütze (2020) train multiple models on lexical variations of a small support set and use these to label additional unlabeled examples from the target domain. These self-labeled examples are used to train a second model, which can then make predictions on query examples. Finally, Gao et al. (2020) explore in-context learning of smaller language models for few-shot adaptation. In particular, they introduce a pipeline to identify useful prompts for the target domain, along with informative labeled examples to prepend as context for the LM.

Many papers point out fundamental challenges in creating systems that achieve human-like understanding of tasks like NLI. Here, we studied conditions under which systems can learn from extremely few samples. We believe that such systems would complement and enhance further study into more sophisticated challenges such as model extrapolation.

One of the main ingredients for the successful application of meta-learning is a large number of high quality training tasks from which to sample learning episodes for the meta-learner. We observe that such a task distribution is usually not available for important NLP problems, leading to less desirable ad hoc attempts that treat entire datasets as tasks. In response, we propose DRECA as a simple and general purpose task augmentation strategy. Our approach creates a refinement of the original set of tasks (entire datasets) that roughly corresponds to the linguistic phenomena present in the data. We show that training on a task distribution augmented with DRECA leads to consistent improvements on 4 NLI few-shot classification problems, matching other approaches that require additional unlabeled data as well as oracles that have access to the true task distribution.

We are grateful to Eric Mitchell, Robin Jia, Alex Tamkin, John Hewitt, Pratyusha Sharma and the anonymous reviewers for helpful comments. The authors would also like to thank other members of the Stanford NLP group for feedback on an early draft of the paper. This work has been partially supported by JD.com American Technologies Corporation (JD) under the SAIL-JD AI Research Initiative and partially by the Toyota Research Institute (TRI). This article solely reflects the opinions and conclusions of its authors and not JD, any entity associated with JD.com, TRI, or any other Toyota entity. Christopher Manning is a CIFAR Fellow. Code and model checkpoints will be available at https://github.com/MurtyShikhar/DReCA.
[ "abstain", "method", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "abstain", "method", "objective", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "objective", "abstain", "objective", "objective", "method", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "result", "result", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "objective", "method", "objective", "other", "other", "other", "other", "other", "other" ]
[ "Recently, the sequence-to-sequence models have made remarkable progress on the task of keyphrase generation (KG) by concatenating multiple keyphrases in a predefined order as a target sequence during training.", "However, the keyphrases are inherently an unordered set rather than an ordered sequence.", "Imposing a predefined order will introduce wrong bias during training, which can highly penalize shifts in the order between keyphrases.", "In this work, we propose a new training paradigm ONE 2S ET without predefining an order to concatenate the keyphrases.", "To fit this paradigm, we propose a novel model that utilizes a fixed set of learned control codes as conditions to generate a set of keyphrases in parallel.", "To solve the problem that there is no correspondence between each prediction and target during training, we propose a K -step target assignment mechanism via bipartite matching, which greatly increases the diversity and reduces the duplication ratio of generated keyphrases.", "The experimental results on multiple benchmarks demonstrate that our approach significantly outperforms the state-of-the-art methods.", "Keyphrase generation (KG) aims to generate of a set of keyphrases that expresses the high-level semantic meaning of a document.", "These keyphrases can be further categorized into present keyphrases that appear in the document and absent keyphrases that do not.", "Meng et al. (2017) proposed a sequence-to-sequence (Seq2Seq) model with a copy mechanism (Gu et al., 2016) to predict both present and absent keyphrases.", "However, the model needs beam search during inference to overgenerate multiple keyphrases, which cannot determine the dynamic number of keyphrases.", "To Corresponding authors.", "address this, Yuan et al. (2020) proposed the ONE 2S EQ training paradigm where each source text corresponds to a sequence of keyphrases that are concatenated with a delimiter (cid:104) sep (cid:105) and a terminator (cid:104) eos (cid:105) .", "As keyphrases must be ordered before being concatenated, Yuan et al. (2020) sorted the present keyphrases by their order of the first occurrence in the source text and appended the absent keyphrases to the end.", "During inference, the decoding process terminates when generating (cid:104) eos (cid:105) , and the final keyphrase predictions are obtained after splitting the sequence by (cid:104) sep (cid:105) .", "Thus, a model trained with ONE 2S EQ paradigm can generate a sequence of multiple keyphrases with dynamic numbers as well as considering the dependency between keyphrases.", "However, as the keyphrases are inherently an unordered set rather than an ordered sequence, imposing a predefined order usually leads to the following intractable problems.", "First, the predefined order will give wrong bias during training, which can highly penalize shifts in the order between keyphrases.", "As shown in Figure 1", "(a), the model makes correct predictions in each keyphrase but can still receive a large loss during training.", "Second, this increases the difficulty of model training.", "For example, the absent keyphrases are appended to the end in an author-defined order in Yuan et al. (2020), however, different authors can have various sorting bases, which makes it difficult for the model to learn a unified pattern.", "Third, the model is highly sensitive to the predefined order, as shown in Meng et al. 
Lately, Chan et al. (2019) proposed a reinforcement learning-based fine-tuning method, which fine-tunes pre-trained models with metric-based rewards (i.e., recall and F1) to generate more sufficient and accurate keyphrases. This method can alleviate the impact of the order problems during fine-tuning, but it needs to be pre-trained under the ONE2SEQ paradigm to initialize the model, which can still introduce wrong biases.

To address this problem, we propose a new training paradigm, ONE2SET, where the ground-truth target is a set rather than a keyphrase-concatenated sequence. However, the vanilla Seq2Seq model can generate a sequence but not a set. Hence, we introduce a set prediction model that adopts Transformer (Vaswani et al., 2017) as the main architecture, together with a fixed set of learned control codes as additional decoder inputs to perform controllable generation. For each code, the model generates either a corresponding keyphrase for the source document or a special token ∅ that represents the meaning of no corresponding keyphrase. During training, the cross-entropy loss cannot be directly used, since we do not know the correspondence between each prediction and target. Hence, we introduce a K-step target assignment mechanism, where we first auto-regressively generate K words for each code and then assign targets via bipartite matching based on the predicted words. After that, we can train each code using teacher forcing as before. Compared with previous models, the proposed method has the following advantages: (a) there is no need to predefine an order to concatenate the keyphrases, so the model is not affected by wrong biases at any point during training; and (b) the bipartite matching forces unique predictions for each code, which greatly reduces the duplication ratio and increases the diversity of predictions.

We summarize our main contributions as follows: (1) we propose a new training paradigm ONE2SET without predefining an order to concatenate the keyphrases; (2) we propose a novel set prediction model that can generate a set of diverse keyphrases in parallel, together with a dynamic target assignment mechanism to solve the intractable training problem under the ONE2SET paradigm; (3) our method consistently outperforms all the state-of-the-art methods and greatly reduces the duplication ratio. Our code is publicly available on GitHub.

Existing approaches for keyphrase prediction can be broadly divided into extraction and generation methods. Early work mostly focuses on the keyphrase extraction task, for which a two-step strategy is typically designed (Hulth, 2003; Mihalcea and Tarau, 2004; Nguyen and Kan, 2007; Wan and Xiao, 2008). First, a large set of candidate phrases is extracted with hand-crafted rules (Mihalcea and Tarau, 2004; Medelyan et al., 2009; Liu et al., 2011). Then, these candidates are scored and reranked using unsupervised methods (Mihalcea and Tarau, 2004; Wan and Xiao, 2008) or supervised methods (Hulth, 2003; Nguyen and Kan, 2007). Other extractive approaches utilize neural sequence labeling methods (Zhang et al., 2016; Gollapalli et al., 2017).

Compared to extractive approaches, generative ones have the ability to consider absent keyphrase prediction. Meng et al. (2017) proposed a generative model, CopyRNN, which employs an encoder-decoder framework (Sutskever et al., 2014) with attention (Bahdanau et al., 2015) and copy mechanisms (Gu et al., 2016).
Many works build on the CopyRNN architecture (Chen et al., 2018; Zhao and Zhang, 2019; Chen et al., 2019b,a). In previous CopyRNN-based works, each source text corresponds to a single target keyphrase. Thus, the model needs beam search during inference to over-generate multiple keyphrases, and it can neither determine the dynamic number of keyphrases nor consider the inter-relations among keyphrases. To this end, Yuan et al. (2020) proposed the ONE2SEQ training paradigm, where each source text corresponds to a sequence of concatenated keyphrases. Thus, the model can capture the contextual information between the keyphrases as well as determine the dynamic number of keyphrases for different source texts. Recent works (Chan et al., 2019; Chen et al., 2020; Swaminathan et al., 2020) mostly follow the ONE2SEQ training paradigm. Chan et al. (2019) proposed an RL-based fine-tuning method using F1 and recall metrics as rewards. Swaminathan et al. (2020) proposed an RL-based fine-tuning method using a discriminator to produce rewards. All the above models need to be trained or pre-trained under the ONE2SEQ paradigm. As keyphrases must be ordered before concatenation while being inherently an unordered set, the model can be trained with a wrong signal. Our ONE2SET training paradigm aims to solve this problem.

This paper proposes a new training paradigm, ONE2SET, for keyphrase generation. A set prediction model based on Transformer (SETTRANS) is proposed to fit this paradigm, as shown in Figure 2. Given a fixed set of learned control codes as input conditions, the model generates a keyphrase or a special token ∅ for each code in parallel. During training, a K-step target assignment mechanism is proposed to dynamically determine the target corresponding to each code. The main idea is that the model first freely predicts K steps without any supervision, to see what keyphrase each code can roughly generate, and then uses bipartite matching to find the optimal allocation between the model's conjectures and the targets. Given the correspondence between each code and target, a separate set loss is then used to correct the model's conjecture, where half of the codes are trained to predict the present keyphrase set and the other half are trained to predict the absent keyphrase set.

We first formally describe the keyphrase generation task. Given a document x, the aim is to predict a set of keyphrases Y = {y_i}, i = 1, ..., |Y|, where |Y| is the number of keyphrases. To solve the KG task, previous works typically adopted the ONE2ONE training paradigm (Meng et al., 2017) or the ONE2SEQ training paradigm (Yuan et al., 2020). The difference between the two training paradigms lies in the form of the training samples. Specifically, in the ONE2ONE training paradigm, each original sample pair (x, Y) is divided into multiple pairs {(x, y_i)}, i = 1, ..., |Y|, which are trained on independently. In the ONE2SEQ training paradigm, each original sample pair is processed as (x, f(Y)), where f(Y) is a sequence of keyphrases after reordering and concatenation. To solve the wrong bias problem caused by the ONE2SEQ training paradigm, we propose the ONE2SET training paradigm, where each original sample pair is kept as (x, Y).
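The three paradigms differ only in how training targets are formed from a document x and its gold keyphrase set Y; the following small sketch contrasts them (our illustration; the sorted() call merely stands in for the actual present-keyphrases-first ordering used under ONE2SEQ):

```python
# Hypothetical toy sample.
x = "a document about keyphrase generation ..."  # title + abstract
Y = {"keyphrase generation", "set prediction", "transformer"}

one2one = [(x, kp) for kp in Y]                      # one pair per keyphrase
one2seq = (x, " <sep> ".join(sorted(Y)) + " <eos>")  # one ordered sequence
one2set = (x, Y)                                     # the unordered set itself
```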
Hence, the sample used in training is consistent with the original sample, which avoids the intractable problems introduced by the additional processing (i.e., dividing or concatenating). However, the vanilla Transformer can only generate a sequence, not a set. To predict a set of keyphrases, we propose the SETTRANS model, which utilizes a set of learned control codes as additional decoder inputs. By performing generation conditioned on each control code, we can generate a set of keyphrases in parallel. To decide suitable numbers of keyphrases for different given documents, we fix the total number of control codes to a sufficient number N, and introduce a special token ∅ that represents the meaning of no corresponding keyphrase. Hence, we can determine the appropriate number of keyphrases for an input document after removing all the ∅ tokens from the N predictions. The decoder input at time step t for the n-th control code is the sum of three embeddings:

$e^n_t = e^w_{y^n_{t-1}} + e^p_t + c_n,$ (1)

where e^w_{y^n_{t-1}} is the embedding of word y^n_{t-1}, e^p_t is the t-th sinusoid positional embedding as in Vaswani et al. (2017), and c_n is the n-th learned control code embedding. The decoder outputs the predictive distribution p^n_t, which is used to obtain the next word y^n_t. As some keyphrases contain words that do not exist in the predefined vocabulary but appear in the input document, we also employ a copy mechanism (See et al., 2017), which is generally adopted in many previous KG works (Meng et al., 2017; Chan et al., 2019; Chen et al., 2020; Yuan et al., 2020).

The main difficulty of training under the ONE2SET paradigm is that the correspondence between each prediction and ground-truth keyphrase is unknown, so the cross-entropy loss cannot be used directly. Hence, we introduce a K-step target assignment mechanism to assign a ground-truth keyphrase to each prediction, and a separate set loss to train the model in an end-to-end way. We first generate K words for each control code and collect the corresponding predictive probability distributions at each step. Formally, we denote P = {P^n}, n = 1, ..., N, where P^n = {p^n_t}, t = 1, ..., K, and p^n_t is the predictive distribution at time step t for control code n. Then, we find a bipartite matching between the ground-truth keyphrases and the predictions. Assuming the predefined number of control codes N is larger than the number of ground-truth keyphrases, we treat the ground-truth keyphrases as a set of size N as well, padded with ∅. Note that the bipartite matching enforces permutation invariance and guarantees that each target element has a unique match; thus, it reduces the duplication ratio of predictions. Specifically, as shown in Figure 2, both the fifth and the eighth control code predict the same keyphrase neural model, but one of them is assigned ∅. The eighth code can perceive that this keyphrase has been generated by another code. Hence, the control codes can learn their mutual dependencies during training and avoid generating duplicated keyphrases.

Formally, to find a bipartite matching between the sets of ground-truth keyphrases and predictions, we search for the permutation π with the lowest cost:

$\pi^* = \arg\min_{\pi \in \Pi(N)} \sum_{n=1}^{N} C_{match}(y^n, P^{\pi(n)}),$ (2)

where Π(N) is the space of all N-length permutations and C_match(y^n, P^{π(n)}) is a pair-wise matching cost between the ground truth y^n and the distributions of the prediction sequence with index π(n). This optimal assignment can be computed efficiently with the Hungarian algorithm (Kuhn, 1955).
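The optimal assignment in Equation 2 can be computed with an off-the-shelf Hungarian solver; a minimal sketch (our illustration, assuming the pair-wise match scores have already been accumulated from the K predicted steps):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_targets(match_scores):
    """match_scores[n, m]: total probability mass that control code n assigns
    to the first K tokens of (padded) target m. Maximizing the total score is
    equivalent to minimizing its negation, i.e., the cost in Equation 3."""
    _rows, cols = linear_sum_assignment(-match_scores)  # Hungarian algorithm
    return cols  # cols[n] is the target index assigned to control code n

# Toy usage: 3 control codes and 3 targets (the last being the empty target).
scores = np.array([[0.9, 0.1, 0.0],
                   [0.2, 0.8, 0.0],
                   [0.3, 0.3, 0.0]])
print(assign_targets(scores))  # [0 1 2]
```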
The matching cost takes into account the class predictions, and can be defined as follows:

$C_{match}(y^n, P^{\pi(n)}) = -\sum_{t=1}^{s} \mathbb{1}\{y^n_t \neq \varnothing\}\, p^{\pi(n)}_t(y^n_t),$ (3)

where s = min(|y^n|, K) is the minimum shared length between the target and the predicted sequence, and p^{π(n)}_t(y^n_t) denotes the probability of word y^n_t under p^{π(n)}_t. We ignore the scores from matching predictions with ∅, which ensures that valid (i.e., non-∅) targets are allocated to predictions with as high a predictive probability as possible. Given the optimal permutation π*, the model is trained to maximize the likelihood of the assigned targets:

$L(\theta) = -\sum_{n=1}^{N} \sum_{t=1}^{|y^n|} \log p^{\pi^*(n)}_t(y^n_t),$ (4)

where p^{π*(n)}_t is the predictive probability distribution using teacher forcing. However, predicting present and absent keyphrases requires the model to have different capabilities, so we propose a separate set loss to flexibly take this bias into account in a unified model. Specifically, we first separate the control codes into two fixed sets of equal size N/2, denoted C_1 and C_2, and the target keyphrase set Y into a present target keyphrase set Y^pre and an absent target keyphrase set Y^abs. The bipartite matching is then performed on the two sets separately; namely, we find a permutation π*_pre using Y^pre and the predictions from C_1, and a permutation π*_abs using Y^abs and the predictions from C_2. Thus, we can modify the final loss in Equation 4 as follows:

$L(\theta) = -\Big( \sum_{n=1}^{N/2} \sum_{t=1}^{|y^n|} \log p^{\pi^*_{pre}(n)}_t(y^n_t) + \sum_{n=N/2+1}^{N} \sum_{t=1}^{|y^n|} \log p^{\pi^*_{abs}(n)}_t(y^n_t) \Big).$ (5)

In practice, we down-weight the log-probability term when y^n_t = ∅ by scale factors λ_pre and λ_abs for the present and absent keyphrase sets, respectively, to account for the class imbalance.

We conduct our experiments on five scientific article datasets: Inspec (Hulth, 2003), NUS (Nguyen and Kan, 2007), Krapivin (Krapivin et al., 2009), SemEval (Kim et al., 2010) and KP20k (Meng et al., 2017). Each sample from these datasets consists of a title, an abstract, and some keyphrases. Following previous works (Meng et al., 2017; Chen et al., 2019b,a; Yuan et al., 2020), we concatenate the title and abstract as the source document. We use the largest dataset (i.e., KP20k) to train all the models. After preprocessing (i.e., lowercasing, replacing all digits with the symbol ⟨digit⟩, and removing duplicated data), the final KP20k dataset contains 509,818 samples for training, 20,000 for validation, and 20,000 for testing. The dataset statistics are shown in Table 1.

We focus on comparisons with the following state-of-the-art methods as our baselines:

catSeq (Yuan et al., 2020). An RNN-based Seq2Seq model with copy mechanism trained under the ONE2SEQ paradigm.

catSeqTG (Chen et al., 2019b). An extension of catSeq with additional title encoding and cross-attention.

catSeqTG-2RF1 (Chan et al., 2019). An extension of catSeqTG with RL-based fine-tuning using F1 and recall metrics as rewards.

GANMR (Swaminathan et al., 2020). An extension of catSeq with RL-based fine-tuning using a discriminator to produce rewards.

ExHiRD-h (Chen et al., 2020). An extension of catSeq with a hierarchical decoding method and an exclusion mechanism to avoid generating duplicated keyphrases.

In this paper, we propose two Transformer-based models, denoted as follows:

Transformer. A Transformer-based model with copy mechanism trained under the ONE2SEQ paradigm.

SETTRANS. An extension of Transformer with additional control codes, trained under the ONE2SET paradigm.
paradigm.", "Following previous works (Chan et al., 2019; Chen et al., 2020; Yuan et al., 2020), when training under the ONE 2S EQ paradigm, the target keyphrase sequence is the concatenation of present and absent keyphrases, with the present keyphrases are sorted according to the orders of their first occurrences in the document and the absent keyphrase kept in their original order.", "We use a Transformer structure similar to Vaswani et al. (2017), with six layers and eight self-attention heads, 2048 dimensions for hidden states.", "In the training stage, we choose the top 50,002 frequent words to form the predefined vocabulary and set the embedding dimension to 512.", "We use the Adam optimization algorithm (Kingma and Ba, 2015) with a learning rate of 0.0001, and a batch size of 12.", "During testing, we use greedy search as the decoding algorithm.", "We set the number of control codes to 20 as we find it covers 99.5% of the samples in the validation set.", "We use a number of two for target assignment steps K based on the average keyphrase length on the validation set, a factor of 0.2 and 0.1 for pre Model Inspec NUS Krapivin SemEval KP20k F 1 @5 F 1 @ M F 1 @5 F 1 @ M F 1 @5 F 1 @ M F 1 @5 F 1 @ M F 1 @5 F 1 @ M catSeq (Yuan et al., 2020) 0.225 0.262 0.323 0.397 0.269 0.354 0.242 0.283 0.291 0.367 catSeqTG (Chen et al., 2019b) 0.229 0.270 0.325 0.393 0.282 0.366 0.246 0.290 0.292 0.366 catSeqTG2 RF 1 (Chan et al., 2019) 0.253 0.301 0.375 0.433 0.300 0.369 0.287 0.329 0.321 0.386 GANMR (Swaminathan et al., 2020) 0.258 0.299 0.348 0.417 0.288 0.369 -0.303 0.378 ExHiRD-h (Chen et al., 2020) 0.253 0.291 -0.286 0.347 0.284 0.335 0.311 0.374 Transformer (ONE 2S EQ ) 0.281 5 0.325 6 0.370 7 0.419 10 0.315 8 0.365 5 0.287 14 0.325 15 0.332 1 0.377 1 SETTRANS (ONE 2S ET ) 0.285 3 0.324 3 0.406 12 0.450 7 0.326 12 0.364 12 0.331 20 0.357 13 0.358 5 0.392 4 Table 2: Present keyphrases prediction results of all models.", "and abs respectively based on the validation set.", "We conduct the experiments on a GeForce RTX 2080Ti GPU, repeat three times using different random seeds, and report the averaged results.", "We follow previous works (Chan et al., 2019; Chen et al., 2020) and use macro-averaged F 1 @5 and F 1 @ M for both present and absent keyphrase predictions.", "F 1 @ M compares all the keyphrases predicted by the model with the ground-truth keyphrases, which means it considers the number of predictions.", "For F 1 @5 , when the prediction number is less than five, we randomly append incorrect keyphrases until it obtains five predictions.", "If we do not adopt such an appending operation, F 1 @5 will become the same with F 1 @ M when the prediction number is less than five as shown in Chan et al. 
(2019).", "We apply the Porter Stemmer before determining whether two keyphrases are identical and remove all the duplicated keyphrases after stemming.", "Table 2 and Table 3 show the performance evaluations of the present and absent keyphrase, respectively.", "We observe that the proposed SETTRANS model consistently outperforms almost all the previous state-of-the-art models on both F 1 @5 and F 1 @ M metrics by a large margin, which demonstrates the effectiveness of our methods.", "As noted by previous works (Chan et al., 2019; Yuan et al., 2020) that predicting absent keyphrases for a document is an extremely challenging task, thus the performance is much lower than that of present keyphrase prediction.", "Regarding the comparison of our Transformer model trained under ONE 2S EQ paradigm and SETTRANS model trained under ONE 2S ET paradigm, we find SETTRANS model consistently improves both keyphrase extractive and generative ability by a large margin on almost all the datasets, and maintains the performance of present keyphrase prediction on the Inspec and Krapivin datasets, which demonstrates the advantages of ONE 2S ET training paradigm.", "To investigate the model's ability to generate diverse keyphrases, we measure the average numbers of unique present and absent keyphrases, and the average duplication ratio of all the predicted keyphrases.", "The results are reported in Table", "4. Based on the results, we observe that our SETTRANS model generates more unique keyphrases than other baselines by a large margin, as well as achieves a significantly lower duplication ratio.", "Note that ExHiRD-h specifically designed a deduplication mechanism to remove duplication in the inference stage.", "In contrast, our model achieves Model Krapivin SemEval KP20k #PK #AK Dup #PK #AK Dup #PK #AK Dup Oracle 3.24 2.59 -6.12 8.31 -3.31 1.95 catSeq 3.50 0.67 0.46 3.48 0.77 0.53 3.71 0.55 0.39 catSeqTG 3.82 0.83 0.41 3.82 1.09 0.63 3.77 0.67 0.36 catSeqTG2 RF 1 3.28 1.56 0.29 3.57 1.50 0.25 3.55 1.44 0.28 ExHiRD-h 4.41 1.02 0.14 3.65 0.99 0.09 3.97 0.81 0.11 Transformer 4.44 1.39 0.29 4.30 1.52 0.27 4.64 1.16 0.26 SETTRANS 4.83 2.20 0.08 4.62 2.18 0.08 5.10 2.01 0.08 Table 4: Number and duplication ratio of predicted keyphrases on three datasets.", "a lower duplication ratio without any deduplication mechanism, which proves its effectiveness.", "However, we also observe that our model tends to overgenerate more present keyphrases than the ground-truth on the Krapivin and KP20k datasets.", "We analyze that different datasets have different preferences for the number of keyphrases, which we leave as our future work.", "To understand the effects of each component of the SETTRANS model, we conduct an ablation study on it and report the results on the KP20k dataset in Table", "5. 
Effects of Model Architecture. To verify the effectiveness of the model architecture of SETTRANS, we remove the control codes and find that the model breaks down completely.", "The duplication ratio increases to 0.95, which means that all 20 control codes predict the same keyphrase.", "This occurs because, when the control codes are removed, all the predictions depend on the same condition (i.e., the source document) without any distinction.", "This demonstrates that the control codes play an extremely important role in the SETTRANS model.", "Effects of Target Assignment. The major difficulty in successfully training under the ONE2SET paradigm is the target assignment between predictions and targets.", "We first attempt to remove the K-step target assignment mechanism, which means that we employ a fixed sequential matching strategy as in the ONE2SEQ paradigm.", "From the results, we observe that both present and absent keyphrase performance degrades, the number of predicted keyphrases drops dramatically, and the duplication ratio increases greatly, by 18 points (from 0.08 to 0.26).", "Table 5: Ablation study of SETTRANS on the KP20k dataset:
Model                 | Present: F1@5  F1@M  #PK | Absent: F1@5  F1@M  #AK | Dup
Oracle                |   -      -     3.31     |   -      -     1.95     |  -
SETTRANS              | 0.358  0.392   5.10     | 0.036  0.058   2.01     | 0.08
Model Architecture
 - control codes      | 0.001  0.002   0.01     | 0.000  0.000   0.00     | 0.95
Target Assignment
 - K-step assign      | 0.265  0.381   2.64     | 0.020  0.045   0.81     | 0.26
 + random assign      | 0.005  0.010   1.05     | 0.001  0.002   0.04     | 0.95
Set Loss
 - teacher forcing    | 0.001  0.002   0.01     | 0.000  0.000   0.00     | 0.89
 - separate set loss  | 0.355  0.383   5.31     | 0.016  0.031   0.55     | 0.05", "We analyze the reasons as follows: (1) The dynamic characteristics of the K-step target assignment remove unnecessary position constraints during training, which encourages the model to generate more keyphrases.", "Specifically, the model can generate a keyphrase at any position rather than only at a given position.", "Thus, the model does not need to consider position constraints during generation, and all the control codes are encouraged to predict keyphrases rather than only the first few codes, which will be verified in Section 5.6.", "(2) The bipartite characteristics of the K-step target assignment force the model to predict unique keyphrases, which reduces the duplication ratio of the predictions.", "When the predictions from two codes are similar, only one code may be assigned a target keyphrase, while the other is assigned the ∅ token.", "Thus, the model has to be careful about each prediction to prevent duplication.", "We further experiment with replacing the K-step target assignment with a random assignment, and find that the results are similar to those obtained when removing the control codes.", "This is because the random assignment misleads the learning of the control codes and causes them to become invalid.", "Effects of Set Loss. As discussed in Section 3.3.2, teacher forcing and a separate set loss are used to train the model after assigning a target to each prediction.", "We investigate their effects in detail.", "The results show the following.", "(1) Teacher forcing can alleviate the cold-start problem.", "After removing teacher forcing, the model faces a cold-start problem; in other words, the lack of supervision information leads to poor predictions, so the target assignment is not ideal, which causes the model to fail at the early stage of training.", "(2) A separate set loss helps in both present and absent keyphrase predictions,
but also increases the duplication ratio slightly compared with a single set loss.", "(Figure 3: Performance and number of predictions for present and absent keyphrases under different loss scale factors λ for the ∅ token on the KP20k dataset.)", "As producing correct present keyphrases is the easier task, the model tends to generate only present keyphrases when using a single set loss.", "Our separate set loss can infuse different inductive biases into the two sets of control codes, which makes each set more focused on generating one type of keyphrase (i.e., present or absent).", "Thus, it increases the accuracy of the predictions and encourages more absent keyphrase predictions.", "However, because the bipartite matching is performed separately, the constraint of unique prediction does not hold between the two sets, which leads to a slight increase in the duplication ratio.", "In this section, we conduct experiments on the KP20k dataset to evaluate performance under different loss scale factors λ for the ∅ token.", "The results are shown in Figure 3.", "The left part of the figure shows that when λ = 0.2, the performance on both present and absent keyphrases is consistently better than when λ = 0.1.", "However, a scale factor larger than 0.1 improves the present keyphrase performance while harming the absent keyphrase performance.", "As we can see from the right part of the figure, the number of predictions decreases consistently for both present and absent keyphrases as the scale factor becomes larger.", "This is because a larger scale factor causes the model to predict more ∅ tokens to reduce the loss penalty during training.", "Moreover, we also find that the precision metric P@M increases as the number of predictions decreases.", "However, the drop in the recall metric R@M is even greater when the number of predictions is too small, which leads to a degradation of the overall metric F1@M.", "In this section, we study the influence of the number of target assignment steps K on prediction performance", "and efficiency, compared with Transformer.", "As shown in the left part of Figure 4, we note that when K equals 1, the improvement of SETTRANS over Transformer is relatively smaller than when K equals 2 (i.e., the average keyphrase length).", "This is mainly because some keyphrases that share the same first word cannot be distinguished during training, which could interfere with the learning of the control codes.", "The right part of Figure 4 shows the training and inference speedup for various K, compared with Transformer.", "We note that SETTRANS can be slower than Transformer at the training stage, and a smaller K can alleviate this problem.", "For performance and efficiency considerations, we consider 2 to be an appropriate value for K.", "Moreover, as K is only used in the training stage, SETTRANS is invariably 6.44 times faster than Transformer at the inference stage.", "This is because, with different control codes as input conditions, all the keyphrases can be generated in parallel on the GPU.", "Hence, in addition to better performance than Transformer, SETTRANS also has great advantages in inference efficiency.
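To illustrate why this inference step parallelizes so well, here is a toy sketch, entirely our own simplification: the `decoder` interface and the way the control-code embedding is injected are assumptions, but the batching idea is the point.

```python
# Toy sketch: all N control-code slots decode greedily in parallel on one device.
import torch

@torch.no_grad()
def parallel_greedy_decode(decoder, memory, code_emb, bos_id, max_steps):
    """decoder:  hypothetical seq2seq decoder (tokens, memory, codes) -> logits
    memory:   (S, d) encoded source document
    code_emb: (N, d) one learned embedding per control code"""
    N, d = code_emb.shape
    mem = memory.unsqueeze(0).expand(N, -1, -1)        # share the document across slots
    tokens = torch.full((N, 1), bos_id, dtype=torch.long)
    for _ in range(max_steps):                         # only a few steps per keyphrase
        logits = decoder(tokens, mem, code_emb)        # (N, t, V), one batched call
        next_tok = logits[:, -1].argmax(-1, keepdim=True)
        tokens = torch.cat([tokens, next_tok], dim=1)  # one GPU step advances all slots
    return tokens  # N candidate keyphrases decoded simultaneously
```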
"keyphrases rather than only the first few codes?", "(2) Whether the separate set loss makes the control codes more focused on generating one type of keyphrase (i.e., present or absent) compared to the single set loss?", "To investigate these two questions, we measure the ratio of present and absent keyphrase predic-Pre.KP Ratio Abs.KP Ratio 0 50 100 0 50 100 0 50 100 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 Figure 5: Ratio of present and absent keyphrase predictions for all the control codes on KP20k dataset.", "tions for all the control codes on the KP20k dataset, which is shown in Figure", "5. As shown in the top and middle subfigures, we observe that without the target assignment mechanism, many control codes are invalid (i.e., only predicting ), and only the first small part performs valid predictions.", "Moreover, when there are already very few valid predictions, the model still has a duplication ratio of up to 26%, as shown in Table 5, resulting in an even smaller number of final predictions.", "After the introduction of the target assignment mechanism, most of the codes can generate valid keyphrases, which increases the number of predictions.", "However, as shown in the middle subfigure, most of the control code tends to generate more present keyphrases than absent keyphrases when using a single set loss.", "When using a separate set loss in the bottom subfigure, the two parts are more inclined to predict only present and absent keyphrases respectively, which also increases the number of absent keyphrase predictions.", "In this paper, we propose a new training paradigm ONE 2S ET without predefining an order to concatenate the keyphrases, and a novel model SETTRANS that predicts a set of keyphrases in parallel.", "To successfully train under ONE 2S ET paradigm, we propose a K -step target assignment mechanism and a separate set loss, which greatly increases the number and diversity of the generated keyphrases.", "Experiments show that our method gains significantly huge performance improvements against existing state-of-the-art models.", "We also show that SETTRANS has great advantages in the inference efficiency compared with the Transformer under ONE 2S EQ paradigm.", "The authors wish to thank the anonymous reviewers for their helpful comments.", "This work was partially funded by China National Key R&D Program (No. 2017YFB1002104), National Natural Science Foundation of China (No. 62076069, 61976056), Shanghai Municipal Science and Technology Major Project (No.2021SHZDZX0103)." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "method", "objective", "method", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "result", "other", "other" ]
[ "We tackle the task of building supervised event trigger identification models which can generalize better across domains.", "Our work leverages the adversarial domain adaptation (ADA) framework to introduce domain-invariance.", "ADA uses adversarial training to construct representations that are predictive for trigger identification, but not predictive of the example's domain.", "It requires no labeled data from the target domain, making it completely unsupervised.", "Experiments with two domains (English literature and news) show that ADA leads to an average F1 score improvement of 3.9 on out-of-domain data.", "Our best performing model (BERT-A) reaches 44-49 F1 across both domains, using no labeled target data.", "Preliminary experiments reveal that finetuning on 1% labeled data, followed by self-training leads to substantial improvement, reaching 51.5 and 67.2 F1 on literature and news respectively.", "1 1 Introduction Events are a key semantic phenomenon in natural language understanding.", "They embody a basic function of language: the ability to report happenings.", "Events are a basic building block for narratives across multiple domains such as news articles, stories and scientific abstracts, and are important for many downstream tasks such as question answering (Saur et al., 2005) and summarization (Daniel et al., 2003).", "Despite their utility, event extraction remains an onerous task.", "A major reason for this is that the notion of what counts as an event depends heavily on the domain and task at hand.", "For example, should a system which extracts events from doctor notes only focus on medical events (eg: symptoms, treatments), or also annotate lifestyle events (eg: dietary changes, ex-1 Our system is available at https://github.com/ aakanksha19/ODETTE ercise habits) which may have bearing on the pa-tient's illness?", "To circumvent this, prior work has mainly focused on annotating specific categories of events (Grishman and Sundheim, 1996; Dodding-ton et al., 2004; Kim et al., 2008) or narratives from specific domains (Pustejovsky et al., 2003; Sims et al., 2019).", "This has an important implication for supervised event extractors: they do not generalize to data from a different domain or containing different event types (Keith et al., 2017).", "Conversely, event extractors that incorporate syntactic rule-based modules (Saur et al., 2005; Chambers et al., 2014) tend to overgenerate, labeling most verbs and nouns as events.", "Achieving a balance between these extremes will help in building generalizable event extractors, a crucial problem since annotated training data may be expensive to obtain for every new domain.", "Prior work has explored unsupervised (Huang et al., 2016; Yuan et al., 2018), distantly supervised (Keith et al., 2017; Chen et al., 2017; Araki and Mitamura, 2018; Zeng et al., 2018) and semi-supervised approaches (Liao and Grishman, 2010; Huang and Riloff, 2012; Ferguson et al., 2018), which largely focus on automatically generating in-domain training data.", "In our work, we try to leverage annotated training data from other domains.", "Motivated by the hypothesis that events, despite being domain/ task-specific, often occur in similar contextual patterns, we try to inject lexical domain-invariance into supervised models, improving generalization, while not overpredicting events.", "Concretely, we focus on event trigger identification, which aims to identify triggers (words) that instantiate an event.", "For example, in John was born in Sussex, born is 
a trigger, invoking a BIRTH event.", "To introduce domain-invariance, we adopt the adversarial domain adaptation (ADA) framework (Ganin and Lempitsky, 2015), which constructs representations that are predictive for trigger identification, but not predictive of the example's domain, using adversarial training.", "This framework requires no labeled target domain data, making it completely unsupervised.", "Our experiments with two domains (English literature and news) show that ADA makes supervised models more robust on out-of-domain data, with an average F1 score improvement of 3.9, at no loss of in-domain performance.", "Our best performing model (BERT-A) reaches 44-49 F1 across both domains using no labeled data from the target domain.", "Further, preliminary experiments demonstrate that finetuning on 1% labeled data, followed by self-training, leads to substantial improvement, reaching 51.5 and 67.2 F1 on literature and news respectively.", "Throughout this work, we treat the task of event trigger identification as a token-level classification task.", "For each token in a sequence, we predict whether it is an event trigger.", "To ensure that our trigger identification model can transfer across domains, we leverage the adversarial domain adaptation (ADA) framework (Ganin and Lempitsky, 2015), which has been used in several NLP tasks (Ganin et al., 2016; Li et al., 2017; Liu et al., 2017; Chen et al., 2018; Shah et al., 2018; Yu et al., 2018).", "Figure 1 gives an overview of the ADA framework for event trigger identification.", "It consists of three components:", "i) a representation learner (R),", "ii) an event classifier (E), and", "iii) a domain predictor (D).", "The representation learner generates token-level representations, while the event classifier and domain predictor use these representations to identify event triggers and predict the domain to which the sequence belongs.", "The key idea is to train the representation learner to generate representations which are predictive for trigger identification but not predictive for domain prediction, making them more domain-invariant.", "A notable benefit here is that the only data we need from the target domain is unlabeled data.", "To ensure domain-invariant representation learning, ADA uses adversarial training.", "Assume that we have a labeled source domain dataset $D_s$ with examples $\{(x^s_1, e^s_1), ..., (x^s_n, e^s_n)\}$, where $x^s_i$ is the token sequence and $e^s_i$ is the sequence of event tags.", "We construct an auxiliary dataset $D_a$ with examples $\{(x^a_1, d^a_1), ..., (x^a_n, d^a_n)\}$, where $x^a_i$ is the token sequence and $d^a_i$ is the domain label, using token sequences from $D_s$ and unlabeled target domain sentences.", "(Figure 1: Adversarial Domain Adaptation framework for event trigger identification: the representation learner feeds source-domain representations to the event classifier and source+target representations to the domain predictor.)", "The representation learner $R$ maps a token sequence $x_i = (x_{i1}, ..., x_{ik})$ into token representations $h_i = (h_{i1}, ..., h_{ik})$.", "The event classifier $E$ maps representations $h_i = (h_{i1}, ..., h_{ik})$ to event tags $e_i = (e_{i1}, ..., e_{ik})$.", "The domain predictor $D$ creates a pooled representation $p_i = \mathrm{Pool}(h_{i1}, ..., h_{ik})$ and maps it to the domain label $d^a_i$.", "Given this setup, we apply an alternating optimization procedure.", "In the first step, we train the domain predictor using $D_a$ to optimize the following loss: $\operatorname{argmin}_D \, \mathcal{L}(D(h^a_i), d^a_i)$.
", "In the second step, we train the representation learner and event classifier using $D_s$ to optimize the following loss: $\operatorname{argmin}_{R,E} \big[ \sum_k \mathcal{L}(E(h^s_{ik}), e^s_{ik}) - \lambda\, \mathcal{L}(D(h^s_i), d^s_i) \big]$.", "$\mathcal{L}$ refers to the cross-entropy loss and $\lambda$ is a hyperparameter.", "In practice, the optimization in the above equation is performed using a gradient reversal layer (GRL) (Ganin and Lempitsky, 2015).", "A GRL works as follows: during the forward pass it acts as the identity, but during the backward pass it scales the gradients flowing through it by $-\lambda$.", "We apply a GRL $g$ before mapping the pooled representation to a domain label using $D$.", "This changes the optimization to: $\operatorname{argmin}_{R,E} \big[ \mathcal{L}(D(g(p^s_i)), d^s_i) + \sum_k \mathcal{L}(E(h^s_{ik}), e^s_{ik}) \big]$.", "In our setup, the event classifier and domain predictor are MLP classifiers.", "For the representation learner, we experiment with several architectures.", "We experiment with the following models (complete implementation details are in the appendix): LSTM: a unidirectional LSTM over tokens represented using word embeddings.", "Table 1: Dataset statistics:
Statistic      | LitBank  | TimeBank
#Docs          | 100      | 183
#Tokens        | 210,532  | 80,281
#Events        | 7,849    | 8,103
Event Density  | 3.73%    | 10.10%", "Table 2: Model performance on domain transfer experiments from LitBank to TimeBank:
Model     | In-Domain: P    R    F1  | Out-of-Domain: P    R    F1
LSTM      | 61.9  61.5  61.7         | 86.1  17.1  28.5
LSTM-A    | 61.1  61.6  61.3         | 89.0  18.9  31.2
BiLSTM    | 64.5  61.7  63.1         | 91.8  14.4  24.9
BiLSTM-A  | 66.1  62.8  64.4         | 92.9  18.5  30.9
POS       | 74.1  51.9  61.1         | 93.5   9.6  17.4
POS-A     | 69.6  57.7  63.1         | 92.5  15.2  26.1
BERT      | 73.5  72.7  73.1         | 88.1  28.2  42.7
BERT-A    | 71.9  71.3  71.6         | 85.0  35.0  49.6", "BiLSTM: a bidirectional LSTM over word embeddings to incorporate both left and right context.", "POS: a BiLSTM over token representations constructed by concatenating word embeddings with embeddings corresponding to part-of-speech tags.", "This model explicitly introduces syntax.", "BERT: a BiLSTM over contextual token representations extracted using BERT (Devlin et al., 2019), similar to the best-performing model on LitBank reported by Sims et al. (2019).
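The gradient reversal layer described above is compact to implement. A minimal PyTorch sketch follows; the autograd mechanics are standard, while the trainer wiring in the trailing comments is our assumption:

```python
# Sketch of a gradient reversal layer (GRL): identity forward, -lambda-scaled backward.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)                          # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None          # reversed, scaled gradients

def grl(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage inside the adversarial step (hypothetical names): the domain loss trains D
# normally, while the reversed gradients push R toward domain-invariant features.
#   pooled = reps.mean(dim=1)                        # Pool(h_i1, ..., h_ik)
#   domain_logits = domain_predictor(grl(pooled, lam))
#   loss = event_loss + cross_entropy(domain_logits, domain_labels)
```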
(2019).", "In our experiments, we use the following datasets: 3 LitBank (Sims et al., 2019): 100 English literary texts with entity and event annotations.", "TimeBank (Pustejovsky et al., 2003): 183 English news articles containing annotations for events and temporal relations between them.", "Both datasets follow similar guidelines for event annotation, with an important distinction: LitBank does not annotate events which have not occurred (eg: future, hypothetical or negated events).", "To overcome this gap, we remove all such events from TimeBank using available metadata about event modality and tense.", "Table 1 provides a brief overview of statistics for both datasets.", "3 Unlike prior work, we cannot use the ACE-2005 dataset since it tags specific categories of events, whereas we focus on tagging all possible events.", "Tables 2 and 3 present the results of our experiments.", "Table 2 shows the results when transferring from LitBank to TimeBank while Table 3 presents transfer results in the other direction.", "From Table 2 (transfer from LitBank to TimeBank), we see that ADA improves out-of-domain performance for all models, by 6.08 F1 on average.", "BERT-A performs best, reaching an F1 score of 49.6, using no labeled news data.", "Transfer experiments from TimeBank to LitBank (Table 3) showcase similar trends, with only BiLSTM not showing improvement with ADA.", "For other models, ADA results in an average out-of-domain F1 score improvement of 1.77.", "BERT-A performs best, reaching an F1 score of 44.1.", "We also note that models transferred from LitBank to TimeBank have high precision, while models transferred in the other direction have high recall.", "We believe this difference stems from the disparity in event density across corpora (Table 1).", "Since event density in LitBank is much lower, models transferred from LitBank tend to be slightly conservative (high precision), while models transferred from TimeBank are less so (high recall).", "When transferring from LitBank to TimeBank, LSTM generalizes better than BiLSTM, which may be because BiLSTM has twice as many parameters making it more prone to overfitting.", "ADA gives a higher F1 boost with BiLSTM, indicating that it may be acting as a regularizer.", "Another interesting result is the poor performance of POS when transferring from LitBank to TimeBank.", "This might stem from the Stanford CoreNLP tagger (trained on news data) producing inaccurate tags for LitBank.", "Hence using automatically generated POS tags while training on LitBank does not produce Category % Example TimeBank Improvements Finance 54 the accord was unanimously approved Political 12 the ukrainian parliament has already ratified it Reporting 10 from member station kqed , auncil martinez reports Law 10 mr. 
", "On average, ADA makes supervised models more robust on out-of-domain data, with an average F1 score improvement of 3.9, at no loss of in-domain performance.", "What cases does ADA improve on?", "To gain more insight into the improvements observed when using ADA, we perform a manual analysis of out-of-domain examples that BERT labels incorrectly but BERT-A gets right.", "We carry out this analysis on 50 examples each from TimeBank and LitBank.", "We observe that an overwhelming number of cases from TimeBank use vocabulary in contexts unique to news (43/50, or 86%).", "This includes examples of financial events, political events and reporting events that are rarer in literature, indicating that ADA manages to reduce event extraction models' reliance on lexical features.", "We make similar observations for LitBank, though the proportion of improvement cases with literature-specific vocabulary is more modest (22/50, or 44%).", "These cases include examples with archaic vocabulary, words that have a different meaning in literary contexts, and human/animal actions, which are not common in news.", "Table 4 presents a detailed breakdown of these cases, along with examples.", "(Figure 2: Improvement in model performance (F1 on the TimeBank test set) when finetuning BERT-FEDA, BERT-A and BERT-NoDA on 1-5% of labeled TimeBank training data. Figure 3: The corresponding improvement when finetuning on labeled LitBank training data.)", "Finetuning on labeled data: We run finetuning experiments to study the improvement in model performance when incorporating small amounts of labeled target domain data.", "For both domains, we finetune BERT-A, slowly increasing the percentage of labeled data used from 1% to 5%.", "We compare BERT-A with two other models.", "The first model is naive BERT with no domain adaptation (BERT-NoDA).", "The second model is a BERT model trained via supervised domain adaptation (BERT-FEDA), which we use as an indicator of ceiling performance.", "The supervised domain adaptation method we use is the neural modification of frustratingly easy domain adaptation developed in Kim et al. (2016).", "Frustratingly easy domain adaptation (Daume III, 2007) uses a feature augmentation strategy to improve performance when annotated data from both the source and target domains is available.", "This algorithm simply duplicates the input features 3 times,
creating a source-specific, target-specific and general version of each feature.", "(Note: Table 4 does not include generic improvement cases, i.e., those with no domain-specific vocabulary used, which formed 14% and 56% of the improvement cases in TimeBank and LitBank respectively.)", "For source data, only the source-specific and general features are active, while for target data only the target-specific and general features are active.", "The neural modification works by duplicating the feature extractor module, which is the BiLSTM in our case.", "Figures 2 and 3 present the results of these experiments.", "The performance of all models steadily improves with more data, but BERT-A starts with a much higher F1 score than BERT-NoDA, demonstrating that ADA boosts performance when little annotated training data is available.", "The performance increase of BERT-NoDA is surprisingly rapid, especially on LitBank.", "However, it is worth noting that 5% of the LitBank training set is 10,000 tokens, which is a substantial amount to annotate.", "Therefore, BERT-A beats BERT-NoDA on sample efficiency.", "We can also see that BERT-A does not do much worse than BERT-FEDA, which performs supervised adaptation.", "Using BERT-A to provide weak supervision: We run further experiments to determine whether a finetuned BERT-A can be leveraged for self-training (Yarowsky, 1995; Riloff and Wiebe, 2003).", "Self-training creates a teacher model from labeled data, which is then used to label a large amount of unlabeled data.", "Both the labeled and unlabeled datasets are then jointly used to train a student model.", "Algorithm 1 gives a quick overview of our self-training procedure:
Algorithm 1: Self-training
Input: Labeled data D_l = {(x_1^l, e_1^l), ..., (x_m^l, e_m^l)}; unlabeled data D_u = {x_1^u, ..., x_n^u}
Output: Trained student model S
1: Finetune the teacher model T by minimizing the cross-entropy loss on the labeled data: (1/m) Σ_{i=1}^m L(T(x_i^l), e_i^l)
2: Generate labels {e_1^u, ..., e_n^u} for the unlabeled data D_u using T
3: Train a student model S by minimizing the cross-entropy loss on both datasets D_l and D_u: (1/m) Σ_{i=1}^m L(S(x_i^l), e_i^l) + (1/n) Σ_{i=1}^n L(S(x_i^u), e_i^u)
4: Iterative training: repeat from step 2 using the updated student model S", "We use 1% of the training data as D_l, with the remaining 99% used as D_u.", "BERT-A acts as T, while S is a vanilla BERT model.", "Table 5 shows the results of self-training on both domains.", "Self-training improves model performance by nearly 7 F1 points on average.", "The increase on TimeBank is much higher, which may be due to the high-precision, low-recall tendency of the teacher model.", "In this work, we tackled the task of building generalizable supervised event trigger identification models using adversarial domain adaptation (ADA) to introduce domain-invariance.", "Our experiments with two domains (English literature and news) showed that ADA made supervised models more robust on out-of-domain data, with an average F1 score improvement of 3.9.", "Our best performing model (BERT-A) was able to reach 44-49 F1 across both domains using no labeled target domain data.", "Preliminary experiments showed that finetuning BERT-A on 1% labeled data, followed by self-training, led to substantial improvement, reaching 51.5 and 67.2 F1 on literature and news respectively.", "While these results are encouraging, we are yet to match supervised in-domain model performance.", "Future directions to explore include incorporating noise-robust training procedures (Goldberger and Ben-Reuven, 2017) and example weighting (Dehghani et al., 2018) during self-training, and exploring lexical alignment methods from the literature on learning cross-lingual embeddings.
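As a concrete companion to Algorithm 1 above, here is a compact sketch of the self-training loop; the function names and the callable trainer interfaces are our assumptions, not the authors' code:

```python
# Sketch of Algorithm 1: teacher-student self-training for trigger identification.
def self_train(teacher, student, labeled, unlabeled, finetune, train, rounds=1):
    """labeled:   list of (tokens, tags) pairs (D_l, e.g. 1% of the training data)
    unlabeled: list of token sequences (D_u)
    finetune/train: callables minimizing cross-entropy on their datasets."""
    finetune(teacher, labeled)                                 # step 1: finetune T on D_l
    for _ in range(rounds):
        pseudo = [(x, teacher.predict(x)) for x in unlabeled]  # step 2: label D_u with T
        train(student, labeled + pseudo)                       # step 3: train S on D_l + D_u
        teacher = student                                      # step 4: iterate with updated S
    return student
```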
", "This work was supported by the University of Pittsburgh Medical Center (UPMC) and Abridge AI Inc through the Center for Machine Learning and Health at Carnegie Mellon University.", "The authors would like to thank the anonymous reviewers for their helpful feedback on this work." ]
[ "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "objective", "abstain", "method", "abstain", "result", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "result", "method", "abstain", "result", "abstain", "other", "other" ]
[ "The Emotion Cause Extraction (ECE) task aims to identify clauses which contain emotion-evoking information for a particular emotion expressed in text.", "We observe that a widely-used ECE dataset exhibits a bias that the majority of annotated cause clauses are either directly before their associated emotion clauses or are the emotion clauses themselves.", "Existing models for ECE tend to explore such relative position information and suffer from the dataset bias.", "To investigate the degree of reliance of existing ECE models on clause relative positions, we propose a novel strategy to generate adversarial examples in which the relative position information is no longer the indicative feature of cause clauses.", "We test the performance of existing models on such adversarial examples and observe a significant performance drop.", "To address the dataset bias, we propose a novel graph-based method to explicitly model the emotion triggering paths by leveraging the commonsense knowledge to enhance the semantic dependencies between a candidate clause and an emotion clause.", "Experimental results show that our proposed approach performs on par with the existing state-of-the-art methods on the original ECE dataset, and is more robust against adversarial attacks compared to existing models.", "1 1 Introduction Instead of detecting sentiment polarity from text, recent years have seen a surge of research activities that identify the cause of emotions expressed in text (Gui et al., 2017; Cheng et al., 2017a; Rashkin et al., 2018; Xia and Ding, 2019; Kim and Klinger, 2018; Oberlander and Klinger, 2020).", "In a typical dataset for Emotion Cause Extract (ECE) (Gui 1 Our code can be accessed at https://github.com /hanqi-qi/Position-Bias-Mitigation-in-Emotion-Cause-Analysis et al., 2017), a document consists of multiple clauses, one of which is the emotion clause annotated with a pre-defined emotion class label.", "In addition, one or more clauses are annotated as the cause clause(s) which expresses triggering factors leading to the emotion expressed in the emotion clause.", "An emotion extraction model trained on the dataset is expected to classify a given clause as a cause clause or not, given the emotion clause.", "However, due to the difficulty in data collection, the ECE datasets were typically constructed by using emotion words as queries to retrieve relevant contexts as candidates for emotion cause annotation, which might lead to a strong positional bias (Ding and Kejriwal, 2020).", "Figure 1 depicts the distribution of positions of cause clauses relative to the emotion clause in the ECE dataset (Gui et al., 2016).", "Most cause clauses are either immediately preceding their corresponding emotion clauses or are the emotion clauses themselves.", "Existing ECE models tend to exploit such relative position information and have achieved good results on emotion cause detection.", "For example, The Relative Position Augmented with Dynamic Global Labels (PAE-DGL) (Ding et al., 2019), RNN-Transformer Hierarchical Network (RTHN) (Xia et al., 2019) and Multi-Attention-based Neural Network (MANN) (Li et al., 2019) all concatenate the relative position embeddings with clause semantic embeddings as the clause representations.", "We argue that models utilising clause relative positions would inherently suffer from the dataset bias, and therefore may not generalise well to unseen data when the cause clause is not in proximity to the emotion clause.", "For example, in a recently released emotion cause 
dataset, only 25-27% of cause clauses are located immediately before the emotion clause (Poria et al., 2020).", "To investigate the degree of reliance of existing ECE models on clause relative positions, we propose a novel strategy to generate adversarial examples in which the relative position information is no longer the indicative feature of cause clauses.", "We test the performance of existing models on such adversarial examples and observe a significant performance drop.", "To alleviate the position bias problem, we propose to leverage commonsense knowledge to enhance the semantic dependencies between a candidate clause and the emotion clause.", "More concretely, we build a clause graph whose node features are initialised with the clause representations, and which has two types of edges, i.e., Sequence-Edges (S-Edges) and Knowledge-Edges (K-Edges).", "An S-Edge links two consecutive clauses to capture the clause neighbourhood information, while a K-Edge links a candidate clause with the emotion clause if there exists a knowledge path extracted from ConceptNet (Speer et al., 2017) between them.", "We extend Relation-GCNs (Schlichtkrull et al., 2018) to update the graph nodes by gathering information encoded in the two types of edges.", "Finally, the cause clause is detected by performing node (i.e., clause) classification on the clause graph.", "In summary, our contributions are three-fold: We investigate the bias in the Emotion Cause Extraction (ECE) dataset and propose a novel strategy to generate adversarial examples in which the position of a candidate clause relative to the emotion clause is no longer the indicative feature for cause extraction.", "We develop a new emotion cause extraction approach built on clause graphs, in which nodes are clauses and the edges linking two nodes capture the neighbourhood information as well as the implicit reasoning paths, extracted from a commonsense knowledge base, between clauses.", "Node representations are updated using the extended Relation-GCN.", "Experimental results show that our proposed approach performs on par with the existing state-of-the-art methods on the original ECE dataset, and is more robust when evaluated on the adversarial examples.", "Position-insensitive Models.", "A more traditional line of research exploited structural representations of textual units, relying on rule-based systems (Lee et al., 2010) or incorporating commonsense knowledge bases (Gao et al., 2015) for emotion cause extraction.", "Machine learning methods leveraged text features (Gui et al., 2017) and combined them with multi-kernel Support Vector Machines (SVMs) (Xu et al., 2017).", "More recent works developed neural architectures to generate effective semantic features.", "Cheng et al. (2017b) employed LSTM models, Gui et al. (2017) made use of memory networks, while Li et al.
(2018) devised a Convolutional Neural Network (CNN) with a co-attention mechanism.", "Chen et al. (2018) used the emotion classification task to enhance cause extraction results.", "Position-aware Models.", "More recent methodologies have started to explicitly leverage the positions of cause clauses with respect to the emotion clause.", "A common strategy is to concatenate the clause relative position embedding with the candidate clause representation (Ding et al., 2019; Xia et al., 2019; Li et al., 2019).", "The Relative Position Augmented with Dynamic Global Labels model (PAE-DGL) (Ding et al., 2019) reordered clauses based on their distances from the target emotion clause, and propagated the information of surrounding clauses to the others.", "Xu et al. (2019) used emotion-dependent and emotion-independent features to rank clauses and identify the cause.", "The RNN-Transformer Hierarchical Network (RTHN) (Xia et al., 2019) argued that there exist relations between the clauses in a document and proposed to classify multiple clauses simultaneously.", "Li et al. (2019) proposed a Multi-Attention-based Neural Network (MANN) to model the interactions between a candidate clause and the emotion clause.", "The generated representations are fed into a CNN layer for emotion cause extraction.", "The Hierarchical Neural Network (Fan et al., 2019) aimed at narrowing the gap between the prediction distribution p and the true distribution of the cause clauses' relative positions.", "We first define the Emotion Cause Extraction (ECE) task.", "A document D contains N clauses, D = {C_i}_{i=1}^N, one of which is annotated as the emotion clause C_E with a pre-defined emotion class label E_w.", "The ECE task is to identify one or more cause clauses C_t, 1 ≤ t ≤ N, that trigger the emotion expressed in C_E.", "Note that the emotion clause itself can be a cause clause.", "We propose a Knowledge-Aware Graph (KAG) model, as shown in Figure 2, which incorporates knowledge paths extracted from ConceptNet for emotion cause extraction.", "More concretely, for each document, a graph is first constructed by representing each clause in the document as a node.", "An edge linking two nodes captures the sequential relation between neighbouring clauses (called a Sequence Edge, or S-Edge).", "In addition, to better capture the semantic relation between a candidate clause and the emotion clause, we identify keywords in the candidate clause which can reach the annotated emotion class label by following knowledge paths in ConceptNet.", "The extracted knowledge paths from ConceptNet are used to enrich the relationship between the candidate clause and the emotion clause, and are inserted into the clause graph as Knowledge Edges, or K-Edges.", "We argue that by adding the K-Edges, we can better model the semantic relations between a candidate clause and the emotion clause, regardless of their relative positional distance.", "In what follows, we will first describe how to extract knowledge paths from ConceptNet, then present the incorporation of the knowledge paths into context modelling, and finally discuss the use of a Graph Convolutional Network (GCN) for learning node (or clause) representations and the prediction of the cause clause based on the learned node representations.", "ConceptNet is a commonsense knowledge graph which represents entities as nodes and the relationships between them as edges.", "(Figure 3: A document consisting of 8 clauses in the ECE dataset, with knowledge paths extracted from ConceptNet linking clause keywords to the emotion label 'happiness'.)", "To explore the causal
relation between a candidate clause and the emotion clause, we propose to extract cause-related paths linking a word in the candidate clause with the annotated emotion word or the emotion class label, E_w, in the emotion clause.", "More concretely, for a candidate clause, we first perform word segmentation using the Chinese segmentation tool Jieba (https://github.com/fxsjy/jieba), and then extract the top three keywords ranked by TextRank.", "(We have also experimented with other keyword extraction strategies, such as extracting words with higher TF-IDF values or keeping all words after removing stop words, but did not observe improved emotion cause detection results.)", "Based on the findings in Fan et al. (2019) that sentiment descriptions can be relevant to the emotion cause, we also include adjectives in the keyword set.", "We regard each keyword in a candidate clause as a head entity, e_h, and the emotion word or the emotion class label in the emotion clause as the tail entity, e_t.", "Similar to Lin et al. (2019), we apply networkx (http://networkx.github.io/) to perform a depth-first search on ConceptNet to identify the paths which start from e_h and end at e_t, and we only keep paths containing no more than two intermediate entities.", "This is because shorter paths are more likely to offer reliable reasoning evidence (Xiong et al., 2017).", "Since not all relations in ConceptNet are related to or indicative of causal relations, we further remove paths which contain any of these four relations: 'antonym', 'distinct from', 'not desires', and 'not capable of'.", "Finally, we order the paths by length in ascending order and choose the top K paths as the result for each candidate-emotion clause pair (we set K to 15, which is the median of the number of paths between all the candidate-emotion clause pairs in our dataset).
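A simplified sketch of this path extraction step follows. The graph object and its `relation` edge attribute are assumptions about how a ConceptNet subgraph might be loaded; the filtering and top-K selection mirror the procedure described above:

```python
# Sketch: extract short cause-related paths from a ConceptNet subgraph with networkx.
import networkx as nx

BLOCKED = {"antonym", "distinct_from", "not_desires", "not_capable_of"}

def extract_paths(graph, head, tail, k=15, max_intermediate=2):
    """graph: nx.DiGraph whose edges carry a 'relation' attribute (assumed schema)."""
    paths = []
    # cutoff counts edges: <= max_intermediate intermediate nodes means
    # <= max_intermediate + 1 hops from head to tail.
    for path in nx.all_simple_paths(graph, head, tail, cutoff=max_intermediate + 1):
        relations = [graph[u][v]["relation"] for u, v in zip(path, path[1:])]
        if any(r in BLOCKED for r in relations):   # drop clearly non-causal relations
            continue
        paths.append(path)
    paths.sort(key=len)                            # shorter paths are more reliable
    return paths[:k]                               # keep the top-K shortest paths
```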
"An example is shown in Figure 3.", "The 5th clause is annotated as the emotion clause, and the emotion class label is 'happiness'.", "For the keyword 'adopted' in the first clause, we show two example paths extracted from ConceptNet, each of which links the word 'adopted' with 'happiness'.", "One such path is: adopted --(related to)--> acceptance --(has subevent)--> make better world --(causes)--> happiness.", "As shown in Figure 2, there are four components in our model: a document encoding module, a context-aware path representation learning module, a GCN-based graph representation updating module, and finally a softmax layer for cause clause classification.", "Initial Clause/Document Representation Learning. For each clause C_i, we derive its representation C_i by using a Bi-LSTM operating over its constituent word vectors, where each word vector w_i ∈ R^d is obtained via an embedding layer.", "To capture the sequential relationship (S-Edges) between neighbouring clauses in a document, we feed the clause sequence into a transformer architecture.", "Similar to the original transformer, which incorporates position embeddings into the word embeddings, we utilise clause position information to enrich the clause representations.", "Here, the position embedding o_i of each clause is concatenated with its representation C_i generated by the Bi-LSTM.", "We consider different ways of encoding position embeddings, using either relative or absolute clause positions, and explore their differences in the experiments section.", "In addition, we will also show results without using position embeddings at all.", "Since the aim of our task is to identify the cause clause given an emotion clause, we capture the dependencies between each candidate clause and the emotion clause.", "Therefore, in the document context modelling, we take the emotion clause representation C_E, generated in a similar way to C_i, as the query vector, and the candidate clause representation C_i as both the key and value vectors, in order to derive the document representation D ∈ R^d.", "Context-Aware Path Representation. In Section 3.1, we have chosen a maximum of K paths {p_t}_{t=1}^K linking each candidate clause C_i with the emotion clause.", "However, not every path correlates equally with the document context.", "Taking the document shown in Figure 3 as an example, the purple knowledge path is more closely related to the document context than the green path.", "As such, we should assign a higher weight to the purple path than to the green one.", "We propose to use the document-level representation D obtained above as the query vector, and a knowledge path as both the key and value vectors, in order to calculate the similarity between the knowledge path and the document context.", "For each pair of a candidate clause C_i and the emotion clause, we then aggregate the K knowledge paths to derive the context-aware path representation s_i ∈ R^d as follows: $s_i = \sum_{t=1}^{K} \alpha_t p_t$, with $\alpha_t = \frac{\exp(D^\top p_t)}{\sum_{j=1}^{K} \exp(D^\top p_j)}$, (2) where D is the document representation and p_t is the path representation obtained from a Bi-LSTM over the path expressed as an entity-relation word sequence.
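A small PyTorch sketch of the aggregation in Eq. (2); the tensor names are ours:

```python
# Sketch of the context-aware path aggregation in Eq. (2).
import torch

def aggregate_paths(doc_rep, path_reps):
    """doc_rep:   (d,)   document representation D
    path_reps: (K, d) Bi-LSTM representations of the K knowledge paths
    returns:   (d,)   context-aware path representation s_i"""
    scores = path_reps @ doc_rep           # (K,) dot-product similarity D^T p_t
    alpha = torch.softmax(scores, dim=0)   # normalise over the K paths
    return alpha @ path_reps               # weighted sum: s_i = sum_t alpha_t * p_t
```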
"Update of Clause Representations by GCN After constructing a clause graph such as the one shown in Figure", "2(c), we update the clause/node representations via S-Edges and K-Edges .", "Only clauses with valid knowledge paths to the emotion clause are connected with the emotion clause node.", "After initialising the node (or clause) in the clause graph with C i and the extracted knowledge path with s i , we update clause representation using an extended version of GCN, i.e. Relation-GCNs (aka. R-GCNs) (Schlichtkrull et al., 2018), which is designed for information aggregation over multiple different edges: h (cid:96) +1 i = ( (cid:88) r R Ni (cid:88) j N i 1 c i,r W (cid:96)r h (cid:96)j + W (cid:96) 0 h (cid:96)i ) (3) where W (cid:96)r h (cid:96)j is the linear transformed information from the neighbouring node j with relation r at the (cid:96) -th layer, W (cid:96)r R d d is relation-specific, N i is the set of neighbouring nodes of the i -th node, RN j is the set of distinct edges linking the current node and its neighbouring nodes.", "When aggregating the neighbouring nodes information along the K-Edge , we leverage the path representation s i to measure the node importance.", "This idea is inspired by the translation-based models in graph embedding methods (Bordes et al., 2013).", "Here, if a clause pair contains a possible reasoning process described by the K-Edge , then h E h i + s i holds.", "Otherwise, h i + s i should be far away from the emotion clause representation h E .", "6 Therefore, we measure the importance of graph nodes according to the similarity between ( h i + s i ) and h E .", "Here, we use the scaled Dot-Attention to calculate the similarity e iE and obtain the updated node representation z i .", "where e E is { e iE } N 1 i =1 .", "d is the dimension of graph node representations, and N r k is a set of neighbours by the K-Edge .", "Then, we combine the information encoded in SEdge with z i as in Eq.", "3, and perform a non-linear transformation to update the graph node representation h (cid:96) +1 i : h (cid:96) +1 i = (cid:0) z (cid:96)i + (cid:88) j N rsi ( W j h j ) (cid:1) (5) where N r s i is a set of i -th neighbours connected by the S-Edges .", "Cause Clause Detection Finally, we concatenate the candidate clause node h i and the emotion node representation h e generated by the graph, and apply a softmax function to yield the predictive class distribution y i .", "We conduct a thorough experimental assessment of the proposed approach against several state-of-the-art models 7 .", "6 Here, we do not consider the cases when the candidate clause is the emotion clause (i.e., h i = h E ), as the similarity between h E + s i and h E will be much larger than the other pairs.", "7 Training and hyper-parameter details can be found in Appendix A. Methods P (%) R (%) F1 (%) W/O Pos RB 67.47 42.87 52.43 EMOCause 26.72 71.30 38.87 Ngrams+SVM 42.00 43.75 42.85 Multi-Kernel 65.88 69.27 67.52 CNN 62.15 59.44 60.76 CANN 77.21 68.91 72.66 Memnet 70.76 68.38 69.55 W. 
"We conduct a thorough experimental assessment of the proposed approach against several state-of-the-art models (training and hyper-parameter details can be found in Appendix A).", "Table 1: Results of different models on the ECE dataset:
Methods          | P (%)  | R (%)  | F1 (%)
W/O Pos
RB               | 67.47  | 42.87  | 52.43
EMOCause         | 26.72  | 71.30  | 38.87
Ngrams+SVM       | 42.00  | 43.75  | 42.85
Multi-Kernel     | 65.88  | 69.27  | 67.52
CNN              | 62.15  | 59.44  | 60.76
CANN             | 77.21  | 68.91  | 72.66
Memnet           | 70.76  | 68.38  | 69.55
W. Pos
HCS              | 73.88  | 71.54  | 72.69
MANN             | 78.43  | 75.87  | 77.06
LambdaMART       | 77.20  | 74.99  | 76.08
PAE-DGL          | 76.19  | 69.08  | 72.42
RTHN             | 76.97  | 76.62  | 76.77
Our KAG          | 79.12  | 75.81  | 77.43
 : w/o R-GCNs    | 73.68  | 72.76  | 73.14
 : w/o K-Edge    | 75.67  | 72.63  | 74.12
 : w/o S-Edge    | 76.34  | 75.46  | 75.88", "Dataset and Evaluation Metrics. The evaluation dataset (Gui et al., 2016) consists of 2,105 documents from SINA city news.", "As the dataset is not large, we perform 10-fold cross-validation and report results on three standard metrics, i.e., Precision (P), Recall (R) and F1, all evaluated at the clause level.", "Baselines. We compare our model with position-insensitive and position-aware baselines: RB (Lee et al., 2010) and EMOCause (Russo et al., 2011) are rule-based methods.", "Multi-Kernel (Gui et al., 2016) and Ngrams+SVM (Xu et al., 2017) train emotion cause classifiers with Support Vector Machines over different textual features.", "CNN (Kim, 2014) and CANN (Li et al., 2018) are vanilla or attention-enhanced convolutional approaches.", "Memnet (Gui et al., 2017) uses a deep memory network to reframe ECE as a question-answering task.", "Position-aware models use relative position embeddings to enhance the semantic features.", "HCS (Yu et al., 2019) uses separate hierarchical and attention modules to obtain contextual information.", "Besides that, PAE-DGL (Ding et al., 2019) and RTHN (Xia et al., 2019) use a similar Global Prediction Embedding (GPE) to adjust the clauses' first-round predictions.", "MANN (Li et al., 2019) performs multi-head attention in a CNN to jointly encode the emotion and candidate clauses.", "LambdaMART (Xu et al., 2019) uses relative position, word-embedding similarity and topic similarity as emotion-related features to extract causes.", "Table 1 shows the cause clause classification results on the ECE dataset.", "The two rule-based methods perform poorly, possibly due to their pre-defined rules.", "Multi-Kernel performs better than the vanilla SVM, being able to leverage more contextual information.", "Across the other three groups, the precision scores are higher than the recall scores, probably due to the unbalanced numbers of cause clauses (18.36%) and non-cause clauses (81.64%), which lead the models to predict a clause as non-cause more often.", "Models in the position-aware group perform better than those in the other groups, indicating the importance of position information.", "Our proposed model outperforms all the other models except RTHN, compared to which its recall score is slightly lower.", "We have also performed ablation studies by removing either the K-Edges or the S-Edges, or both of them (w/o R-GCNs).", "The results show that removing the R-GCNs leads to a drop of nearly 4.3% in F1.", "Also, both the K-Edges and S-Edges contribute to emotion cause extraction.", "As the contextual modelling has already considered position information, the removal of the S-Edges leads to a smaller drop compared to the removal of the K-Edges.", "In order to examine the impact of using clause position information in different models, we replace the relative position information of the candidate clause with absolute positions.", "In the extreme case, we remove the position information from the models.", "The results are shown in Figure 4.
Baselines
We compare our model with position-insensitive and position-aware baselines. RB (Lee et al., 2010) and EMOCause (Russo et al., 2011) are rule-based methods. Multi-Kernel (Gui et al., 2016) and Ngrams+SVM (Xu et al., 2017) train emotion cause classifiers with Support Vector Machines over different textual features. CNN (Kim, 2014) and CANN (Li et al., 2018) are a vanilla and an attention-enhanced convolutional approach, respectively. Memnet (Gui et al., 2017) uses a deep memory network to re-frame ECE as a question-answering task. Position-aware models use relative position embeddings to enhance the semantic features. HCS (Yu et al., 2019) uses separate hierarchical and attention modules to obtain contextual information. PAE-DGL (Ding et al., 2019) and RTHN (Xia et al., 2019) both use a similar Global Prediction Embedding (GPE) to revise the clauses' first-round predictions. MANN (Li et al., 2019) performs multi-head attention in a CNN to jointly encode the emotion and candidate clauses. LambdaMART (Xu et al., 2019) uses the relative position, word-embedding similarity and topic similarity as emotion-related features for cause extraction.

Table 1 shows the cause clause classification results on the ECE dataset. The two rule-based methods perform poorly, possibly owing to the limited coverage of their pre-defined rules. Multi-Kernel performs better than the vanilla SVM, as it can leverage more contextual information. Across the other three groups, the precision scores are higher than the recall scores, probably because of the imbalance between cause clauses (18.36%) and non-cause clauses (81.64%), which leads the models to predict a clause as non-cause more often. Models in the position-aware group perform better than those in the other groups, indicating the importance of position information. Our proposed model outperforms all the other models, with the sole exception that its recall score is slightly lower than that of RTHN.

We have also performed ablation studies by removing either the K-Edge or the S-Edge, or both (w/o R-GCNs). The results show that removing the R-GCNs leads to a drop of nearly 4.3% in F1, and that both the K-Edge and the S-Edge contribute to emotion cause extraction. As the contextual modelling already accounts for position information, removing the S-Edge leads to a smaller drop than removing the K-Edge.

To examine the impact of clause position information on the different models, we replace the relative position of the candidate clause with its absolute position; in the extreme case, we remove the position information altogether. The results are shown in Figure 4. The best results are achieved with relative positions for all models, and replacing relative positions with either absolute positions or no positions at all results in a significant performance drop. In particular, MANN and PAE-DGL suffer drops of 50-54% in F1. The degradation is less severe for RTHN, partly due to its use of the Transformer architecture for context modelling; nevertheless, its F1 score still decreases by 20-35%. Our proposed model is the least sensitive to the positions of candidate clauses. Its robust performance is partly attributable to (1) hierarchical contextual modelling via the Transformer structure, and (2) the K-Edge, which helps explore causal links via commonsense knowledge regardless of a clause's position.

Figure 4: Emotion cause extraction (F1, %) when using relative, absolute or no clause positional information.
                 MANN    PAE-DGL   RTHN    OURS
  relative       65.49   72.42     76.77   77.08
  absolute       15.31   18.39     56.94   69.43
  no position    15.09   17.90     41.45   68.29

In recent years, there has been growing interest in understanding the vulnerabilities of NLP systems (Goodfellow et al., 2015; Ebrahimi et al., 2017; Wallace et al., 2019; Jin et al., 2020). Adversarial examples probe regions where a model performs poorly, which can help in understanding and improving it. Our purpose here is to evaluate whether KAG is as vulnerable as existing ECE models when the cause clauses are not in proximity to the emotion clause. We therefore propose a principled way to generate adversarial samples such that the relative position is no longer an indicative feature for the ECE task.

Generation of Adversarial Examples
We generate adversarial examples that trick ECE models by swapping two clauses $C_{r_1}$ and $C_{r_2}$, where $r_1$ denotes the position of the most likely cause clause and $r_2$ denotes the position of the least likely cause clause. We identify $r_1$ by locating the most likely cause clause based on its relative position with respect to the emotion clause in a document. As illustrated in Figure 1, over half of the cause clauses in the dataset are immediately before the emotion clause. We assume that the position of a cause clause can be modelled by a Gaussian distribution and estimate the mean and variance directly from the data, which gives $\{\mu, \sigma^2\} = \{-1, 0.5445\}$. The position index $r_1$ can then be sampled from this Gaussian; as the sampled value is continuous, we round it to the nearest integer:

$r_1 \leftarrow \lfloor g \rceil, \qquad g \sim \mathrm{Gaussian}(\mu, \sigma^2)$

To locate the least likely cause clause, we choose the value of $r_2$ according to the attention score between a candidate clause and the emotion clause. Our intuition is that if the emotion clause attends to a candidate clause with a low score, that clause is unlikely to be the cause clause. We use an existing emotion cause extraction model to generate contextual representations, and use dot-attention (Luong et al., 2015) to measure the similarity between each candidate clause and the emotion clause. We then select the index that yields the lowest attention score and assign it to $r_2$:

$r_2 = \arg\min_i \{\alpha_i\}_{i=1}^{N}, \qquad \alpha_i = \mathrm{Dot\text{-}Att}(C_i, C_E)$

where $C_i$ is the representation of the $i$-th candidate clause, $C_E$ is the representation of the emotion clause, and $N$ denotes the total number of clauses in a document.
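A compact sketch of this two-step generation procedure, assuming NumPy and clause representations taken from whichever trained ECE encoder plays the discriminator role (the names `make_adversarial` and `clause_reprs` are ours):

```python
import numpy as np

MU, SIGMA2 = -1.0, 0.5445  # relative-position statistics estimated from the data

def make_adversarial(clauses, clause_reprs, emotion_idx, rng=None):
    """Swap the most likely cause clause (r1) with the least likely one (r2).

    clauses:      list of N clause strings
    clause_reprs: (N, d) array of clause representations from a trained encoder
    emotion_idx:  index of the emotion clause
    """
    rng = rng or np.random.default_rng()
    n, d = clause_reprs.shape

    # r1: sample a relative position g ~ Gaussian(mu, sigma^2), round it to
    # the nearest integer, and convert it to an absolute clause index.
    g = rng.normal(MU, np.sqrt(SIGMA2))
    r1 = int(np.clip(emotion_idx + round(g), 0, n - 1))

    # r2: the clause with the lowest dot-attention score against the emotion
    # clause is treated as the least likely cause clause.
    scores = clause_reprs @ clause_reprs[emotion_idx] / np.sqrt(d)
    scores[emotion_idx] = np.inf  # never move the emotion clause itself
    r2 = int(np.argmin(scores))

    adversarial = list(clauses)
    adversarial[r1], adversarial[r2] = adversarial[r2], adversarial[r1]
    return adversarial
```

Swapping $C_{r_1}$ and $C_{r_2}$ preserves the document content while destroying the positional cue, which is exactly what makes relative position uninformative for the attacked models.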
Here, we use existing ECE models as different discriminators to generate different adversarial samples; a desirable adversarial sample will fool the discriminator into predicting the inverse label. We use a leave-one-model-out protocol to evaluate the performance of ECE models: one model is used as the discriminator for generating adversarial samples, which are subsequently used to evaluate the performance of the other models.

Results
The results are shown in Table 2. The attacked ECE models are trained only on the original dataset; the generated adversarial examples are used as the test set only. We observe a significant performance drop of 23-32% for the existing ECE models, some of which even perform worse than the earlier rule-based methods, showing their sensitivity to the positional bias in the dataset. We also observe performance degradation for our proposed KAG, but its drop is less significant than that of the other models. These results verify the effectiveness of capturing the semantic dependencies between a candidate clause and the emotion clause via contextual and commonsense knowledge encoding.

To understand how KAG aggregates information along different paths, we randomly choose two examples and visualise the attention distributions (Eq. 4) over the graph nodes (i.e., clauses) in Figure 5; more cases can be found in the Appendix. These attention weights reflect the 'distance' between a candidate clause and the emotion clause during the reasoning process. The cause clauses are underlined, keywords are in bold, and $C_i$ in brackets indicates the clause position relative to the emotion clause (denoted as $C_0$).

Ex.1 The crime that ten people were killed shocked the whole country ($C_{-4}$). This was due to personal grievances ($C_{-3}$). Qiu had arguments with the management staff ($C_{-2}$), and thought the Taoist temple host had molested his wife ($C_{-1}$). He became angry ($C_0$), and killed the host and destroyed the temple ($C_{+1}$).

In Ex.1, the emotion word is 'angry', and the knowledge paths identified by our model from ConceptNet are arguments → fight → angry for clause $C_{-2}$ and molest → irritate → exasperate → angry for clause $C_{-1}$. As shown in Figure 5, our model assigns the same attention weight to the clauses $C_{-2}$, $C_{-1}$ and the emotion clause, which shows that both paths are weighted equally. Owing to the K-Edge attention weights, our model correctly identifies both $C_{-2}$ and $C_{-1}$ as cause clauses.

Ex.2 The LongBao Primary School is located between the two villages ($C_{-2}$). Some unemployed people always cut through the school to take a shortcut ($C_{-1}$). Liu Yurong worried that it would affect the children's study ($C_0$). When he did not have teaching duties ($C_{+1}$), he stood guard outside the school gate ($C_{+2}$).

In Ex.2, the clause $C_{-1}$, for which a knowledge path linking 'unemployed' to 'worried' was extracted, has been assigned the largest attention weight, as shown in Figure 5. Note that this path is spurious: in ConceptNet the emotion 'worried' is triggered by 'unemployment', while in the original text 'worried' is caused by the event 'unemployed people cut through the school'. This shows that simply searching commonsense knowledge bases for paths between keywords or entities may extract spurious knowledge. We leave the extraction of event-driven commonsense knowledge as future work.
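To illustrate how such keyword-level paths are found, and why they can be spurious, here is a sketch of a breadth-first search over the public ConceptNet REST API (api.conceptnet.io). The helper names are ours, and the search deliberately ignores relation types and edge weights, so it is far cruder than the paper's actual path extraction:

```python
import requests
from collections import deque

API = "http://api.conceptnet.io"

def neighbours(concept, limit=50):
    """Return English concepts directly linked to `concept` in ConceptNet."""
    data = requests.get(f"{API}/c/en/{concept}", params={"limit": limit}).json()
    linked = set()
    for edge in data.get("edges", []):
        for node in (edge["start"], edge["end"]):
            if node.get("language") == "en":
                linked.add(node["label"].lower().replace(" ", "_"))
    linked.discard(concept)
    return linked

def find_path(src, dst, max_nodes=4):
    """Breadth-first search for a short keyword-to-keyword path."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path              # e.g. ['arguments', 'fight', 'angry']
        if len(path) >= max_nodes:   # bound the path length
            continue
        for nxt in neighbours(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Because the search connects bare keywords, it will happily return a path from 'unemployed' to 'worried' even when, as in Ex.2, the worry is caused by an event rather than by unemployment itself, which is the failure mode noted above.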
In this paper, we examine the positional bias in the annotated ECE dataset and investigate the degree to which existing ECE models rely on clause position information. We design a novel approach for generating adversarial samples, and we propose a graph-based model that enhances the semantic dependencies between a candidate clause and a given emotion clause by extracting relevant knowledge paths from ConceptNet. The experimental results show that our proposed method achieves performance comparable to the state-of-the-art methods while being more robust against adversarial attacks. Our current model extracts knowledge paths linking two keywords identified in two separate clauses; in future work, we will explore how to incorporate event-level commonsense knowledge to further improve emotion cause extraction.

This work was funded by the EPSRC (grant nos. EP/T017112/1 and EP/V048597/1). HY receives a PhD scholarship funded jointly by the University of Warwick and the Chinese Scholarship Council. YH is supported by a Turing AI Fellowship funded by UK Research and Innovation (grant no. EP/V020579/1). We thank Yizhen Jia and Daoye Zhu for their valuable work on an earlier code framework for this paper, and we thank the anonymous reviewers for their valuable comments.
[ "abstain", "result", "abstain", "objective", "result", "objective", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "method", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "objective", "other", "other", "method", "other", "method", "objective", "other", "other", "objective", "objective", "abstain", "abstain", "abstain", "other", "method", "abstain", "other", "abstain", "other", "other", "objective", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "method", "result", "other", "other", "other", "other", "other" ]