ACL2020 Information Extraction Related Paper List

Original article was published on Deep Learning on Medium

ACL 2020 has released its accepted paper list. I went through it and picked out the papers related to the following NLP tasks: Named Entity Recognition, Relation Extraction, Event Detection, Coreference Resolution, and Knowledge Graphs. I hope this saves you some time. If a paper has no description, that is because no PDF is available yet.

Named Entity Recognition

An obvious trend of NER in ACL2020 is that more works focus on nested entities.

Connecting Embeddings for Knowledge Graph Entity Typing Yu Zhao, Anxiang Zhang, Ruobing Xie, Kang Liu and Xiaojie Wang

Bipartite Flat-Graph Network for Nested Named Entity Recognition Ying Luo and Hai Zhao

  • In this paper, we propose a novel bipartite flat-graph network (BiFlaG) for nested named entity recognition (NER), which contains two subgraph modules: a flat NER module for outermost entities and a graph module for all the entities located in inner layers.

Code and Named Entity Recognition in StackOverflow Jeniya Tabassum, Mounica Maddela, Wei Xu and Alan Ritter

  • In this paper, we introduce a new named entity recognition (NER) corpus for the computer programming domain, consisting of 15,372 sentences annotated with 20 fine-grained entity types. We also present the SoftNER model that combines contextual information with domain specific knowledge using an attention network.

Improving Multimodal Named Entity Recognition via Entity Span Detection with Unified Multimodal Transformer Jianfei Yu, Jing Jiang, Li Yang and Rui Xia

Multi-Domain Named Entity Recognition with Genre-Aware and Agnostic Inference Jing Wang, Mayank Kulkarni and Daniel Preotiuc-Pietro

  • Domain transfer of NER models with data from multiple genres has not been widely studied. To this end, we conduct NER experiments in three predictive setups on data from: a) multiple domains; b) multiple domains where the genre label is unknown at inference time; c) domains not encountered in training. We introduce a new architecture tailored to this task by using shared and private domain parameters and multi-task learning.

Named Entity Recognition without Labelled Data: A Weak Supervision Approach Pierre Lison, Jeremy Barnes, Aliaksandr Hubin and Samia Touileb

  • This paper presents a simple but powerful approach to learn NER models in the absence of labelled data through weak supervision. The approach relies on a broad spectrum of labelling functions to automatically annotate texts from the target domain. These annotations are then merged together using a hidden Markov model which captures the varying accuracies and confusions of the labelling functions.
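The paper merges labelling-function outputs with a hidden Markov model that estimates each function's accuracy. As a much-simplified sketch of the aggregation idea (all names and the toy label sequences are illustrative; a majority vote stands in for the HMM):

```python
from collections import Counter

def aggregate(token_labels):
    """Merge per-token predictions from several labelling functions.

    token_labels: one label sequence per labelling function, e.g.
    heuristics, gazetteers, or out-of-domain NER models. The paper
    fits an HMM that weights functions by estimated accuracy; here
    we take a simple majority vote over non-"O" labels instead.
    """
    merged = []
    for votes in zip(*token_labels):
        if any(v != "O" for v in votes):
            label, _ = Counter(v for v in votes if v != "O").most_common(1)[0]
        else:
            label = "O"
        merged.append(label)
    return merged

lf_outputs = [
    ["B-PER", "O", "O", "B-ORG"],   # gazetteer-based function
    ["B-PER", "O", "O", "O"],       # heuristic function
    ["O",     "O", "O", "B-ORG"],   # out-of-domain model
]
print(aggregate(lf_outputs))  # ['B-PER', 'O', 'O', 'B-ORG']
```

The aggregated sequence then serves as (noisy) training data for a standard NER model.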

Pyramid: A Layered Model for Nested Named Entity Recognition Jue Wang, Lidan Shou, Ke Chen and Gang Chen

Sources of Transfer in Multilingual Named Entity Recognition David Mueller, Nicholas Andrews and Mark Dredze

Temporally-Informed Analysis of Named Entity Recognition Shruti Rijhwani and Daniel Preotiuc-Pietro

Improving Low-Resource Named Entity Recognition using Joint Sentence and Token Labeling Canasai Kruengkrai, Thien Hai Nguyen, Sharifah Mahani Aljunied and Lidong Bing

Instance-Based Learning of Span Representations: A Case Study through Named Entity Recognition Hiroki Ouchi, Jun Suzuki, Sosuke Kobayashi, Sho Yokoi, Tatsuki Kuribayashi, Ryuto Konno and Kentaro Inui

  • Interpretable rationales for model predictions play a critical role in practical applications. In this study, we develop models possessing interpretable inference process for structured prediction. Specifically, we present a method of instance-based learning that learns similarities between spans.

Named Entity Recognition as Dependency Parsing Juntao Yu, Bernd Bohnet and Massimo Poesio

  • NER research is often focused on flat entities only (flat NER), ignoring the fact that entity references can be nested, as in [Bank of [China]] (Finkel and Manning, 2009). In this paper, we use ideas from graph-based dependency parsing to provide our model a global view on the input via a biaffine model (Dozat and Manning, 2017). The biaffine model scores pairs of start and end tokens in a sentence which we use to explore all spans, so that the model is able to predict named entities accurately.
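As a rough illustration of the biaffine span-scoring idea (not the authors' code; the toy shapes and random features are assumptions), every (start, end) token pair receives a score per entity class, which naturally covers nested spans:

```python
import numpy as np

rng = np.random.default_rng(0)

seq_len, hidden, n_classes = 5, 8, 3             # toy sizes
H_start = rng.normal(size=(seq_len, hidden))     # start-token representations
H_end = rng.normal(size=(seq_len, hidden))       # end-token representations
U = rng.normal(size=(n_classes, hidden, hidden)) # biaffine tensor

# scores[c, i, j]: how strongly tokens i..j look like an entity of class c
scores = np.einsum('ih,chk,jk->cij', H_start, U, H_end)

# only spans with start <= end are valid, so mask the lower triangle
mask = np.triu(np.ones((seq_len, seq_len), dtype=bool))
best = np.unravel_index(
    np.where(mask[None], scores, -np.inf).argmax(), scores.shape)
print(best)  # (class, start, end) of the highest-scoring valid span
```

Because every span is scored independently, an inner span like [China] and an outer span like [Bank of China] can both be predicted.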

Soft Gazetteers for Low-Resource Named Entity Recognition Shruti Rijhwani, Shuyan Zhou, Graham Neubig and Jaime Carbonell

  • Although modern neural network models do not require hand-crafted features such as entity gazetteers for strong performance, recent work has demonstrated their utility for named entity recognition on English data. However, designing such features for low-resource languages is challenging, because exhaustive entity gazetteers do not exist in these languages. To address this problem, we propose a method of “soft gazetteers” that incorporates ubiquitously available information from English knowledge bases, such as Wikipedia, into neural named entity recognition models through cross-lingual entity linking.

TriggerNER: Learning with Entity Triggers as Explanations for Named Entity Recognition Bill Yuchen Lin, Dong-Ho Lee, Ming Shen, Ryan Moreno, Xiao Huang, Prashant Shiralkar and Xiang Ren

  • Training neural models for named entity recognition (NER) in a new domain often requires additional human annotations (e.g., tens of thousands of labeled instances) that are usually expensive and time-consuming to collect. Thus, a crucial research question is how to obtain supervision in a cost-effective way. In this paper, we introduce “entity triggers,” an effective proxy of human explanations for facilitating label-efficient learning of NER models. An entity trigger is defined as a group of words in a sentence that helps to explain why humans would recognize an entity in the sentence.

Relation Extraction

A Novel Cascade Binary Tagging Framework for Relational Triple Extraction Zhepei Wei, Jianlin Su, Yue Wang, Yuan Tian and Yi Chang

  • Extracting relational triples from unstructured text is crucial for large-scale knowledge graph construction. However, few existing works excel in solving the overlapping triple problem where multiple relational triples in the same sentence share the same entities. In this work, we introduce a fresh perspective to revisit the relational triple extraction task and propose a novel cascade binary tagging framework (CasRel) derived from a principled problem formulation.
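A schematic of the cascade (toy probabilities and hypothetical names, not the released CasRel code): a subject tagger first marks subject spans with binary start/end heads, then, conditioned on each subject, relation-specific object taggers mark objects, so one subject can participate in several overlapping triples:

```python
import numpy as np

tokens = ["Jackie", "Chan", "was", "born", "in", "Hong", "Kong"]
relations = ["Birthplace", "Nationality"]

# Stage 1: binary start/end probabilities for subject spans (toy values).
subj_start = np.array([0.9, 0.1, 0.0, 0.0, 0.0, 0.1, 0.0])
subj_end   = np.array([0.1, 0.9, 0.0, 0.0, 0.0, 0.0, 0.1])

def decode_spans(start_p, end_p, thresh=0.5):
    starts = np.where(start_p > thresh)[0]
    ends = np.where(end_p > thresh)[0]
    # pair each start with the nearest end at or after it
    return [(s, ends[ends >= s][0]) for s in starts if (ends >= s).any()]

subjects = decode_spans(subj_start, subj_end)

# Stage 2: per-relation object taggers, conditioned on the subject (toy values).
obj_start = {"Birthplace": np.array([0, 0, 0, 0, 0, 0.9, 0.0]),
             "Nationality": np.zeros(7)}
obj_end   = {"Birthplace": np.array([0, 0, 0, 0, 0, 0.0, 0.9]),
             "Nationality": np.zeros(7)}

triples = []
for s, e in subjects:
    subj = " ".join(tokens[s:e + 1])
    for rel in relations:
        for os_, oe in decode_spans(obj_start[rel], obj_end[rel]):
            triples.append((subj, rel, " ".join(tokens[os_:oe + 1])))
print(triples)  # [('Jackie Chan', 'Birthplace', 'Hong Kong')]
```

Because objects are tagged per relation rather than classified once per entity pair, overlapping triples sharing the same subject fall out of the decoding for free.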

Dialogue-Based Relation Extraction Dian Yu, Kai Sun, Claire Cardie and Dong Yu

  • We present the first human-annotated dialogue-based relation extraction (RE) dataset DialogRE, aiming to support the prediction of relation(s) between two arguments that appear in a dialogue. We further offer DialogRE as a platform for studying cross-sentence RE as most facts span multiple sentences.

Exploiting the Syntax-Model Consistency for Neural Relation Extraction Amir Pouran Ben Veyseh, Franck Dernoncourt, Dejing Dou and Thien Huu Nguyen

  • We propose a novel deep learning model for RE that uses the dependency trees to extract the syntax-based importance scores for the words, serving as a tree representation to introduce syntactic information into the models with greater generalization.

In Layman’s Terms: Semi-Open Relation Extraction from Scientific Texts Ruben Kruiper, Julian Vincent, Jessica Chen-Burger, Marc Desmulliez and Ioannis Konstas

  • In this work we combine the output of both types of systems to achieve Semi-Open Relation Extraction, a new task that we explore in the Biology domain.

Learning Interpretable Relationships between Entities, Relations and Concepts via Bayesian Structure Learning on Open Domain Facts Jingyuan Zhang, Mingming Sun, Yue Feng and Ping Li

Probing Linguistic Features of Sentence-Level Representations in Relation Extraction Christoph Alt, Aleksandra Gabryszak and Leonhard Hennig

  • We introduce 14 probing tasks targeting linguistic properties relevant to RE, and we use them to study representations learned by more than 40 different combinations of encoder architectures and linguistic features trained on two datasets, TACRED and SemEval 2010 Task 8.

Rationalizing Medical Relation Prediction from Corpus-level Statistics Zhen Wang, Jennifer Lee, Simon Lin and Huan Sun

  • Aiming to shed some light on how to rationalize medical relation prediction, we present a new interpretable framework inspired by existing theories on how human memory works, e.g., theories of recall and recognition.

Reasoning with Latent Structure Refinement for Document-Level Relation Extraction Guoshun Nan, Zhijiang Guo, Ivan Sekulic and Wei Lu

  • Existing approaches construct static document-level graphs based on syntactic trees, co-references or heuristics from the unstructured text to model the dependencies. Unlike previous methods that may not be able to capture rich non-local interactions for inference, we propose a novel model that empowers the relational reasoning across sentences by automatically inducing the latent document-level graph.

Relabel the Noise: Joint Extraction of Entities and Relations via Cooperative Multiagents Daoyuan Chen, Yaliang Li, Kai Lei and Ying Shen

  • We propose a joint extraction approach to address this problem by re-labeling noisy instances with a group of cooperative multiagents.

TACRED Revisited: A Thorough Evaluation of the TACRED Relation Extraction Task Christoph Alt, Aleksandra Gabryszak and Leonhard Hennig

  • Have we reached a performance ceiling, or is there still room for improvement? And how do crowd annotations, the dataset, and models contribute to this error rate? To answer these questions, we first validate the most challenging 5K examples in the development and test sets using trained annotators. We find that label errors account for 8% absolute F1 test error, and that more than 50% of the examples need to be relabeled. On the relabeled test set the average F1 score of a large baseline model set improves from 62.1 to 70.1.

Towards Understanding Gender Bias in Relation Extraction Andrew Gaut, Tony Sun, Shirlyn Tang, Yuxin Huang, Jing Qian, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang and William Yang Wang

  • We create WikiGenderBias, a distantly supervised dataset with a human annotated test set. WikiGenderBias has sentences specifically curated to analyze gender bias in relation extraction systems. We use WikiGenderBias to evaluate systems for bias and find that NRE systems exhibit gender biased predictions and lay groundwork for future evaluation of bias in NRE.

ZeroShotCeres: Zero-Shot Relation Extraction from Semi-Structured Webpages Colin Lockard, Prashant Shiralkar, Xin Luna Dong and Hannaneh Hajishirzi

  • In this work, we propose a solution for “zero-shot” open-domain relation extraction from webpages with a previously unseen template, including from websites with little overlap with existing sources of knowledge for distant supervision and websites in entirely new subject verticals.

Relation Extraction with Explanation Hamed Shahbazi, Xiaoli Fern, Reza Ghaeini and Prasad Tadepalli

Revisiting Unsupervised Relation Extraction Thy Thy Tran, Phong Le and Sophia Ananiadou

  • Unsupervised relation extraction (URE) extracts relations between named entities from raw text without manually-labelled data and existing knowledge bases (KBs). URE methods can be categorised into generative and discriminative approaches, which rely either on hand-crafted features or surface form. However, we demonstrate that by using only named entities to induce relation types, we can outperform existing methods on two popular datasets.

Event Detection

  • Improving Event Detection via Open-domain Trigger Knowledge Meihan Tong, Bin Xu, Shuai Wang, Yixin Cao, Lei Hou, Juanzi Li and Jun Xie
  • Cross-media Structured Common Space for Multimedia Event Extraction Manling Li, Alireza Zareian, Qi Zeng, Spencer Whitehead, Di Lu, Heng Ji and Shih-Fu Chang
  • Discourse as a Function of Event: Profiling Discourse Structure in News Articles around the Main Event Prafulla Kumar Choubey, Aaron Lee, Ruihong Huang and Lu Wang
  • Document-Level Event Role Filler Extraction using Multi-Granularity Contextualized Encoding Xinya Du and Claire Cardie
  • A Two-Step Approach for Implicit Event Argument Detection Zhisong Zhang, Xiang Kong, Zhengzhong Liu, Xuezhe Ma and Eduard Hovy
  • Towards Open Domain Event Trigger Identification using Adversarial Domain Adaptation Aakanksha Naik and Carolyn Rose

Coreference Resolution

CorefQA: Coreference Resolution as Query-based Span Prediction Wei Wu, Fei Wang, Arianna Yuan, Fei Wu and Jiwei Li

  • A new approach for the coreference resolution task. It formulates the problem as a span prediction task, like in machine reading comprehension (MRC): A query is generated for each candidate mention using its surrounding context, and a span prediction module is employed to extract the text spans of the coreferences within the document using the generated query.
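A rough sketch of the query-construction step (the marker format and function name are assumptions; the span-prediction model itself is omitted):

```python
def build_coref_query(tokens, mention_start, mention_end):
    """Wrap a candidate mention in markers so its surrounding context
    becomes an MRC-style query, as in query-based coreference.
    The <mention>...</mention> marker format is illustrative."""
    marked = (tokens[:mention_start]
              + ["<mention>"]
              + tokens[mention_start:mention_end + 1]
              + ["</mention>"]
              + tokens[mention_end + 1:])
    return " ".join(marked)

tokens = "Alice said she would come".split()
query = build_coref_query(tokens, 0, 0)
print(query)  # <mention> Alice </mention> said she would come
# A span-prediction (MRC) module would then read the full document with
# this query and extract coreferent spans of the marked mention, e.g. "she".
```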

Toward Gender-Inclusive Coreference Resolution Yang Trista Cao and Hal Daumé III

  • We inspect many existing datasets for trans-exclusionary biases, and develop two new datasets for interrogating bias in crowd annotations and in existing coreference resolution systems.

Active Learning for Coreference Resolution using Discrete Annotation Belinda Z. Li, Gabriel Stanovsky and Luke Zettlemoyer

  • We improve upon pairwise annotation for active learning in coreference resolution by asking annotators to identify mention antecedents if a presented mention pair is deemed not coreferent.

Knowledge Graph Related

1. Utilizing KGs to enhance other NLP tasks

Breaking Through the 80% Glass Ceiling: Raising the State of the Art in Word Sense Disambiguation by Incorporating Knowledge Graph Information Michele Bevilacqua and Roberto Navigli

  • Neural architectures are the current state of the art in Word Sense Disambiguation (WSD). However, they make limited use of the vast amount of relational information encoded in Lexical Knowledge Bases (LKB). We present Enhanced WSD Integrating Synset Embeddings and Relations (EWISER), a neural supervised architecture that is able to tap into this wealth of knowledge by embedding information from the LKB graph within the neural architecture, and to exploit pretrained synset embeddings, enabling the network to predict synsets that are not in the training set.

Grounded Conversation Generation as Guided Traverses in Commonsense Knowledge Graphs Houyu Zhang, Zhenghao Liu, Chenyan Xiong and Zhiyuan Liu

  • This paper presents a new conversation generation model, ConceptFlow, which leverages commonsense knowledge graphs to explicitly model conversation flows

KdConv: A Chinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation Hao Zhou, Chujie Zheng, Kaili Huang, Minlie Huang and Xiaoyan Zhu

  • In this paper, we propose a Chinese multi-domain knowledge-driven conversation dataset, KdConv, which grounds the topics in multi-turn conversations to knowledge graphs.

KinGDOM: Knowledge-Guided DOMain adaptation for sentiment analysis Deepanway Ghosal, Devamanyu Hazarika, Abhinaba Roy, Navonil Majumder, Rada Mihalcea and Soujanya Poria

  • In this paper, we take a novel perspective on this task by exploring the role of external commonsense knowledge. We introduce a new framework, KinGDOM, which utilizes the ConceptNet knowledge graph to enrich the semantics of a document by providing both domain-specific and domain-general background concepts.

Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward Luyang Huang, Lingfei Wu and Lu Wang

  • We propose the use of dual encoders, a sequential document encoder and a graph-structured encoder, to maintain the global context and local characteristics of entities, complementing each other.

2. Knowledge Graph Completion

A Re-evaluation of Knowledge Graph Completion Methods Zhiqing Sun, Shikhar Vashishth, Soumya Sanyal, Partha Talukdar and Yiming Yang

  • We notice that several recent papers report very high performance, largely outperforming previous state-of-the-art methods. In this paper, we find that this can be attributed to the inappropriate evaluation protocol they used, and we propose a simple evaluation protocol to address this problem.
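The protocol issue concerns ties: when many candidate triples receive the same score, always ranking the correct one first inflates the metrics. A small sketch of the difference under toy scores (protocol names follow common usage, not necessarily the paper's exact terminology):

```python
import numpy as np

scores = np.array([0.7, 0.9, 0.9, 0.9, 0.3])  # candidate scores
correct = 2                                    # index of the correct triple

# "Top" protocol: the correct triple is placed first within its tie
# group, which is overly optimistic for degenerate models.
rank_top = 1 + np.sum(scores > scores[correct])

# "Random" protocol (the fix): in expectation, the correct triple lands
# in the middle of its tie group.
ties = np.sum(scores == scores[correct]) - 1
rank_random = 1 + np.sum(scores > scores[correct]) + ties / 2.0

print(rank_top, rank_random)  # 1 2.0
```

A model that assigns the same score to everything looks perfect under the first protocol and near-random under the second, which is why the choice matters for reported metrics like MRR and Hits@k.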

Orthogonal Relation Transforms with Graph Context Modeling for Knowledge Graph Embedding Yun Tang, Jing Huang, Guangtao Wang, Xiaodong He and Bowen Zhou

  • Translational distance-based knowledge graph embedding has shown progressive improvements on the link prediction task, from TransE to the latest state-of-the-art RotatE. However, N-1, 1-N and N-N predictions still remain challenging. In this work, we propose a novel translational distance-based approach for knowledge graph link prediction.
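For context, the translational-distance family mentioned above scores a triple (h, r, t) by how well the relation maps the head to the tail. A minimal sketch of the TransE and RotatE scoring functions on hand-picked toy embeddings (real models learn these from the training graph):

```python
import numpy as np

# TransE: plausible triples satisfy h + r ≈ t (lower distance is better).
h = np.array([1.0, 0.0])   # head entity
r = np.array([0.0, 1.0])   # relation as a translation vector
t = np.array([1.0, 1.0])   # tail entity
transe_dist = np.linalg.norm(h + r - t)

# RotatE: relations are element-wise rotations in complex space,
# t ≈ h ∘ exp(i·θ), which lets one relation and its inverse share angles.
h_c = np.array([1 + 0j, 1 + 0j])
theta = np.array([np.pi / 2, 0.0])
t_c = np.array([0 + 1j, 1 + 0j])
rotate_dist = np.linalg.norm(h_c * np.exp(1j * theta) - t_c)

print(transe_dist, rotate_dist)  # both ≈ 0 for these consistent toy triples
```

Link prediction then ranks candidate tails (or heads) by this distance; the N-1, 1-N and N-N cases are hard precisely because a single translation or rotation must map one head to many tails or vice versa.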

Check out my other posts on Medium, or browse them in a better TOC list view!
GitHub: https://github.com/BrambleXu
LinkedIn: www.linkedin.com/in/xu-liang
Blog: https://bramblexu.org