Most Read

  • Survey
    WU Youzheng, LI Haoran, YAO Ting, HE Xiaodong
    Journal of Chinese Information Processing. 2022, 36(5): 1-20.
    Over the past decade, there has been a steady momentum of innovation and breakthroughs that convincingly push the limits of modeling a single modality, e.g., vision, speech and language. Going beyond this progress in single modalities, the rise of multimodal social networks, short video applications, video conferencing, live video streaming and digital humans strongly demands the development of multimodal intelligence and offers a fertile ground for multimodal analysis. This paper reviews recent multimodal applications that have attracted intensive attention in the field of natural language processing, and summarizes the mainstream multimodal fusion approaches from the perspectives of single-modal representation, multimodal fusion stage, fusion network, fusion of unaligned modalities, and fusion of missing modalities. In addition, this paper elaborates on the latest progress in vision-language pre-training.
  • Survey
    CUI Lei, XU Yiheng, LYU Tengchao, WEI Furu
    Journal of Chinese Information Processing. 2022, 36(6): 1-19.
    Document AI, or Document Intelligence, is a relatively new research topic that refers to techniques for automatically reading, understanding and analyzing business documents. It is an important interdisciplinary study involving natural language processing and computer vision. In recent years, the popularity of deep learning technology has greatly advanced the development of Document AI tasks such as document layout analysis, document information extraction, document visual question answering, and document image classification. This paper briefly introduces the early-stage heuristic rule-based document analysis and statistical machine learning based algorithms, as well as the deep learning-based approaches, especially the pre-training approaches. Finally, we also look into the future directions of Document AI.
  • Survey
    CEN Keting, SHEN Huawei, CAO Qi, CHENG Xueqi
    Journal of Chinese Information Processing. 2023, 37(5): 1-21.
    As a self-supervised deep learning paradigm, contrastive learning has achieved remarkable results in computer vision and natural language processing. Inspired by the success of contrastive learning in these fields, researchers have tried to extend it to graph data and promoted the development of graph contrastive learning. To provide a comprehensive overview of graph contrastive learning, this paper summarizes recent works under a unified framework to highlight the development trends. It also catalogues the popular datasets and evaluation metrics for graph contrastive learning, and concludes with the possible future direction of the field.
  • Survey
    DU Xiaohu, WU Hongming, YI Zibo, LI Shasha, MA Jun, YU Jie
    Journal of Chinese Information Processing. 2021, 35(8): 1-15.
    Adversarial attack and defense has become a popular research issue in recent years. Attackers use small modifications to generate adversarial examples that cause prediction errors in deep neural networks. The generated adversarial examples can reveal the vulnerability of a neural network, which can then be repaired to improve the security and robustness of the model. This paper gives a detailed and comprehensive introduction to the current mainstream adversarial text attack and defense methods, as well as the datasets and target neural networks of the mainstream attacks. We also compare the differences between attack methods. Finally, the challenges of adversarial text examples and the prospects for future research are summarized.
  • Survey
    ZHANG Rujia, DAI Lu, WANG Bang, GUO Peng
    Journal of Chinese Information Processing. 2022, 36(6): 20-35.
    Chinese named entity recognition (CNER) is one of the basic tasks of natural language processing applications such as question answering systems, machine translation and information extraction. Although traditional CNER systems have achieved satisfactory experimental results with the help of manually designed domain-specific features and grammatical rules, they still suffer from weak generalization ability, poor robustness and difficult maintenance. In recent years, deep learning techniques have been adopted to deal with the above shortcomings by automatically extracting text features in an end-to-end learning manner. This article surveys the recent advances of deep learning-based CNER. It first introduces the concepts, difficulties and applications of CNER, together with the common datasets and evaluation metrics. Recent neural network models for the CNER task are then grouped according to their network architectures, and representative models in each group are detailed. Finally, future research directions are discussed.
  • Survey
    DENG Yiyi, WU Changxing, WEI Yongfeng, WAN Zhongbao, HUANG Zhaohua
    Journal of Chinese Information Processing. 2021, 35(9): 30-45.
    Named entity recognition (NER), as one of the basic tasks in natural language processing, aims to identify the required entities and their types in unstructured text. In recent years, various named entity recognition methods based on deep learning have achieved much better performance than that of traditional methods based on manual features. This paper summarizes recent named entity recognition methods from the following three aspects: 1) A general framework is introduced, which consists of an input layer, an encoding layer and a decoding layer. 2) After analyzing the characteristics of Chinese named entity recognition, this paper introduces Chinese NER models which incorporate both character-level and word-level information. 3) The methods for low-resource named entity recognition are described, including cross-lingual transfer methods, cross-domain transfer methods, cross-task transfer methods, and methods incorporating automatically labeled data. Finally, the conclusions and possible research directions are given.
  • Information Extraction and Text Mining
    WANG Bingqian, SU Shaoxun, LIANG Tianxin
    Journal of Chinese Information Processing. 2021, 35(7): 81-88.
    Event extraction (EE) refers to the technology of extracting events from natural language texts and identifying event types and event elements. This paper proposes an end-to-end multi-label pointer network for event extraction, in which the event detection task is integrated into the event element recognition task so that event elements and event types are extracted at the same time. This method avoids the error cascading and task separation of traditional pipeline methods, and alleviates the role overlapping and element overlapping problems in event extraction. The proposed method achieves an 85.9% F1 score on the test set of the 2020 Language and Intelligence Challenge Event Extraction task.
  • Information Extraction and Text Mining
    WU Hao, PAN Shanliang
    Journal of Chinese Information Processing. 2022, 36(1): 92-103.
    Current detection methods for illegal comments mainly rely on sensitive-word screening and cannot effectively identify malicious comments that contain no vulgar language. In this paper, a dataset of Chinese illegal comments is established by crawling and manual annotation. On the basis of BERT, an RCNN combined with an attention mechanism is used to further extract the contextual features of comments, and multi-task joint training is adopted to improve the classification accuracy and generalization ability of the model. The model is independent of any sensitive-word lexicon. Experimental results show that the proposed model understands semantic information better than traditional models, achieving a precision of 94.24%, which is 8.42% higher than the traditional TextRNN and 6.92% higher than TextRNN combined with an attention mechanism.
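To make the described pipeline concrete, here is a minimal sketch (not the authors' code) of a shared BERT encoder feeding a recurrent layer with attention pooling and two classification heads for multi-task joint training; the class name, dimensions and the auxiliary task head are illustrative assumptions.

```python
# Hedged sketch: shared BERT encoder -> BiLSTM -> attention pooling -> two task heads.
# Dimensions, head names and the joint loss weighting are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import BertModel

class MultiTaskCommentClassifier(nn.Module):
    def __init__(self, bert_name="bert-base-chinese", hidden=256,
                 n_illegal_labels=2, n_aux_labels=2):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.rnn = nn.LSTM(self.bert.config.hidden_size, hidden,
                           batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)            # attention score per token
        self.head_illegal = nn.Linear(2 * hidden, n_illegal_labels)
        self.head_aux = nn.Linear(2 * hidden, n_aux_labels)

    def forward(self, input_ids, attention_mask):
        ctx = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        h, _ = self.rnn(ctx)                             # contextual token features
        scores = self.attn(h).masked_fill(attention_mask.unsqueeze(-1) == 0, -1e9)
        weights = torch.softmax(scores, dim=1)           # attention over token positions
        pooled = (weights * h).sum(dim=1)                # weighted comment representation
        return self.head_illegal(pooled), self.head_aux(pooled)

# Joint training would sum the cross-entropy losses of both heads, e.g.
# loss = ce(logits_illegal, y_illegal) + ce(logits_aux, y_aux)
```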
  • Survey
    AN Zhenwei, LAI Yuxuan, FENG Yansong
    Journal of Chinese Information Processing. 2022, 36(8): 1-11.
    In recent years, legal artificial intelligence has attracted increasing attention for its efficiency and convenience. Among others, legal text is the most common manifestation of legal practice; thus, using natural language understanding methods to automatically process legal texts is an important direction for both academia and industry. In this paper, we provide a gentle survey that summarizes recent advances in natural language understanding for legal texts. We first introduce the popular task setups, including legal information extraction, legal case retrieval, legal question answering, legal text summarization, and legal judgement prediction. We further discuss the main challenges from three perspectives: understanding the differences in language between the legal domain and the open domain, understanding the rich argumentative texts in legal documents, and incorporating legal knowledge into existing natural language processing models.
  • Survey
    YUE Zengying, YE Xia, LIU Ruiheng
    Journal of Chinese Information Processing. 2021, 35(9): 15-29.
    Pre-training technology has stepped into the center stage of natural language processing, especially with the emergence of ELMo, GPT, BERT, XLNet, T5, and GPT-3 in the last two years. In this paper, we analyze and classify the existing pre-training technologies from four aspects: language model, feature extractor, contextual representation, and word representation. We discuss the main issues and development trends of pre-training technologies in current natural language processing.
  • Survey
    YANG Fan, RAO Yuan, DING Yi, HE Wangbo, DING Zifang
    Journal of Chinese Information Processing. 2021, 35(10): 1-20.
    Recently, artificial intelligence-based dialogue systems have been widely applied in human-computer interaction, intelligent assistants, smart customer service, Q&A consulting, and so on. This paper proposes a definition of the task-oriented dialogue system: one that satisfies the user's requirements and completes certain tasks with the fewest response turns in the dialogue between human and machine. Furthermore, three critical technical problems and challenges are summarized: the user's intent detection in complex contexts, the limited annotated data, and the personalized response under multi-modal situations. The research progress on these three challenges is discussed in the paper. Finally, we outline future research directions and the key issues for the next generation of task-oriented dialogue systems.
  • Survey
    SUN Yi, QIU Hangping, ZHENG Yu, ZHANG Chaoran, HAO Chao
    Journal of Chinese Information Processing. 2021, 35(7): 10-29.
    Introducing knowledge into data-driven artificial intelligence models is an important way to realize human-machine hybrid intelligence. The current pre-trained language models represented by BERT have achieved remarkable success in the field of natural language processing. However, pre-trained language models are trained on large-scale unstructured corpus data, and it is necessary to introduce external knowledge to alleviate their defects in determinacy and interpretability to some extent. In this paper, the characteristics and limitations of two kinds of pre-trained language models, pre-trained word embeddings and pre-trained context encoders, are analyzed. The related concepts of knowledge enhancement are explained. Four types of knowledge enhancement methods for pre-trained word embeddings are summarized and analyzed: retrofitting of pre-trained word embeddings, hierarchizing the encoding and decoding process, attention mechanism optimization, and the introduction of knowledge memory. The knowledge enhancement methods for pre-trained context encoders are described from two perspectives: 1) task-specific versus task-agnostic; 2) explicit knowledge versus implicit knowledge. Through this summary and analysis of knowledge enhancement methods for pre-trained language models, basic patterns and algorithms are provided for human-machine hybrid artificial intelligence.
  • Survey
    WU Yunfang, ZHANG Yangsen
    Journal of Chinese Information Processing. 2021, 35(7): 1-9.
    Question generation (QG) aims to automatically generate fluent and semantically related questions for a given text. QG can be applied to generate questions for reading comprehension tests in the education field, and to enhance question answering and dialogue systems. This paper presents a comprehensive survey of related research on QG. We first describe the significance of QG and its applications, especially in the education field. Then we outline the traditional rule-based methods for QG, and describe the neural network based models in detail from different views. We also introduce the evaluation metrics for generated questions. Finally, we discuss the limitations of previous studies and suggest future work.
  • Survey
    WANG Wanzhen, RAO Yuan, WU Lianwei, LI Xue
    Journal of Chinese Information Processing. 2021, 35(9): 1-14.
    Artificial intelligence has been increasingly emphasized in judicial practice in recent years. Based on the literature on intelligent models for assisting judicial cases, this paper identifies the following six challenges in legal judgement prediction: multi-feature crime prediction, multi-label crime prediction, multiple sub-task processing, the unbalanced data issue, the interpretability of decision prediction, and the adaptation of existing algorithms to different types of cases. Meanwhile, the paper provides theoretical discussion, technical analysis, technical challenges as well as trend analysis for these problems. The datasets used in this field and the corresponding evaluation metrics are also summarized.
  • Survey
    QIN Libo, LI Zhouyang, LOU Jieming, YU Qiying, CHE Wanxiang
    Journal of Chinese Information Processing. 2022, 36(1): 1-11,20.
    Natural language generation in a task-oriented dialogue system (ToDNLG) aims to generate natural language responses given the corresponding dialogue acts, which has attracted increasing research interest. With the development of deep neural networks and pre-trained language models, great success has been witnessed in ToDNLG research. We present a comprehensive survey of the field, including: (1) a systematic review of the development of NLG in the past decade, covering traditional methods and deep learning-based methods; (2) new frontiers in emerging areas of complex ToDNLG as well as the corresponding challenges; (3) rich open-source resources, including the related papers, baseline codes and leaderboards on a public website. We hope the survey can promote future research in ToDNLG.
  • Sentiment Analysis and Social Computing
    LU Hengyang, FAN Chenyou, WU Xiaojun
    Journal of Chinese Information Processing. 2022, 36(1): 135-144,172.
    The COVID-19 rumors published and spread on online social media have a serious impact on people's livelihood, the economy, and social stability. Most existing research on rumor detection assumes that the events to be modeled and predicted already have enough labeled data, which severely limits the detection of emergent events such as COVID-19, for which very few training instances are available. This article focuses on few-shot rumor detection, aiming to detect rumors of emergent events with only very few labeled instances. Taking the COVID-19 rumors on Sina Weibo as the target, we construct a Sina Weibo COVID-19 rumor dataset for few-shot rumor detection, and propose a deep neural network based few-shot rumor detection model with meta learning. In few-shot settings, the proposed model achieves significant improvements on the COVID-19 rumor dataset and the public PHEME dataset.
  • Sentiment Analysis and Social Computing
    ZHANG Yawei, WU Liangqing, WANG Jingjing, LI Shoushan
    Journal of Chinese Information Processing. 2022, 36(5): 145-152.
    Sentiment analysis is a popular research issue in the field of natural language processing, and multimodal sentiment analysis is the current challenge in this task. Existing studies fall short in capturing context information and in combining the information streams of different modalities. This paper proposes a novel multi-LSTMs fusion model network (MLFN), which performs deep fusion across the three modalities of text, voice and image via an internal feature extraction layer for single modalities and inter-modal fusion layers for dual-modal and tri-modal combinations. This hierarchical LSTM framework takes into account the features inside each modality while capturing the interactions between modalities. Experimental results show that the proposed method can better integrate multi-modal information, and significantly improves the accuracy of multi-modal emotion recognition.
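A simplified sketch of the hierarchical fusion idea follows, assuming per-modality LSTMs whose final states are fused by a further LSTM; the feature dimensions and the exact layering of the published MLFN are assumptions here.

```python
# Simplified sketch of hierarchical multimodal fusion with LSTMs.
# Feature sizes and the wiring of the fusion layer are assumptions, not the published MLFN.
import torch
import torch.nn as nn

class HierarchicalFusion(nn.Module):
    def __init__(self, d_text=300, d_audio=74, d_visual=35, hidden=128, n_classes=3):
        super().__init__()
        # Unimodal feature-extraction LSTMs, one per modality.
        self.enc = nn.ModuleDict({
            "text": nn.LSTM(d_text, hidden, batch_first=True),
            "audio": nn.LSTM(d_audio, hidden, batch_first=True),
            "visual": nn.LSTM(d_visual, hidden, batch_first=True),
        })
        # Fusion LSTM reads the three modality summaries as a short sequence.
        self.fusion = nn.LSTM(hidden, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, text, audio, visual):
        summaries = []
        for name, seq in (("text", text), ("audio", audio), ("visual", visual)):
            _, (h, _) = self.enc[name](seq)      # final hidden state summarizes the modality
            summaries.append(h[-1])
        stacked = torch.stack(summaries, dim=1)  # (batch, 3, hidden)
        _, (h, _) = self.fusion(stacked)         # inter-modal fusion over the summaries
        return self.classifier(h[-1])
```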
  • Survey
    Li Yunhan, Shi Yunmei, Li Ning, Tian Ying'ai
    Journal of Chinese Information Processing. 2022, 36(9): 1-18,27.
    Text correction, an important research field in natural language processing (NLP), is of great application value in fields such as news, publication, and text input. This paper provides a systematic overview of automatic error correction technology for Chinese texts. Errors in Chinese texts are divided into spelling errors, grammatical errors and semantic errors, and the error correction methods for these three types are reviewed. Moreover, datasets and evaluation methods for automatic error correction of Chinese texts are summarized. In the end, prospects for automatic error correction of Chinese texts are discussed.
  • Information Extraction and Text Mining
    WANG Zi, WANG Yulong, LIU Tongcun, LI Wei, LIAO Jianxin
    Journal of Chinese Information Processing. 2022, 36(3): 82-90.
    Quote attribution in novels aims at determining who says a given quote, which is important for assigning appropriate voices to quotes when producing vocal novels. In order to fully express the differences between quote types and the semantic features in the context, this paper proposes a Rule-BertAtten method for quote attribution in Chinese novels. Quotes are divided into four categories: quotes with an explicit speaker, quotes with a pronoun speaker whose gender matches a single candidate, quotes with a pronoun speaker whose gender matches multiple candidates, and quotes with an implicit speaker. According to these categories, a rule-based method and BERT word embedding methods with attention are applied respectively. Experimental results show that our method is more accurate than previous approaches.
  • Language Resources Construction
    WANG Chengwen, DONG Qingxiu, SUI Zhifang, ZHAN Weidong, CHANG Baobao, WANG Haitao
    Journal of Chinese Information Processing. 2023, 37(2): 26-40.
    Public NLP datasets form the bedrock of NLP evaluation tasks, and the quality of such datasets has a fundamental impact on the development of evaluation tasks and the application of evaluation metrics. In this paper, we analyze and summarize eight types of problems in publicly available mainstream natural language processing (NLP) datasets. Inspired by the quality assessment of testing in the education community, we propose a series of evaluation metrics and evaluation methods combining computational and operational approaches, with the aim of providing a reference for the construction, selection and utilization of natural language processing datasets.
  • Survey
    CHEN Xin, ZHOU Qiang
    Journal of Chinese Information Processing. 2021, 35(11): 1-12.
    As a branch of dialogue systems, open-domain dialogue has good application prospects. Different from task-oriented dialogue, it involves strong randomness and uncertainty. This paper reviews research on open-domain dialogue from the perspective of reply generation, focusing on the application and improvement of the sequence-to-sequence model in dialogue generation scenarios. The research exhibits a clear progression from single-turn to multi-turn dialogue, and we further reveal that, in multi-turn dialogue generation, the characteristics of sequence-to-sequence models do not exactly match the application scenarios. Finally, we explore possible improvements for multi-turn dialogue generation by introducing external knowledge, a rewriting mechanism, and an agent mechanism.
  • Information Extraction and Text Mining
    DING Zeyuan, YANG Zhihao, LUO Ling, WANG Lei, ZHANG Yin, LIN Hongfei, WANG Jian
    Journal of Chinese Information Processing. 2021, 35(5): 70-76.
    In the field of biomedical text mining, biomedical named entity recognition and relation extraction are of great significance. This paper builds a Chinese biomedical entity relation extraction system based on deep learning technology. Firstly, a Chinese biomedical entity relation corpus is constructed from publicly available English biomedical annotated corpora via translation and manual annotation. Then, ELMo (Embeddings from Language Models) trained on Chinese biomedical text is applied to a Bi-directional LSTM (BiLSTM) combined with conditional random fields (CRF) for Chinese entity recognition. Finally, the relations between entities are extracted using BiLSTM combined with an attention mechanism. The experimental results show that the system can accurately extract biomedical entities and inter-entity relations from Chinese text.
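For the entity recognition step, a minimal BiLSTM-CRF over pre-computed contextual embeddings (e.g., ELMo vectors) might look like the sketch below; it uses the pytorch-crf package as a stand-in CRF layer, and the embedding size and tag set are assumptions.

```python
# Minimal BiLSTM-CRF sketch over pre-computed contextual embeddings.
# The tag inventory and dimensions are illustrative assumptions.
import torch
import torch.nn as nn
from torchcrf import CRF

class BiLSTMCRFTagger(nn.Module):
    def __init__(self, emb_dim=1024, hidden=256, num_tags=7):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hidden, num_tags)   # per-token emission scores
        self.crf = CRF(num_tags, batch_first=True)    # transition scores + Viterbi decoding

    def loss(self, embeddings, tags, mask):
        h, _ = self.lstm(embeddings)
        return -self.crf(self.emit(h), tags, mask=mask, reduction="mean")

    def decode(self, embeddings, mask):
        h, _ = self.lstm(embeddings)
        return self.crf.decode(self.emit(h), mask=mask)  # best tag sequence per sentence
```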
  • Information Extraction and Text Mining
    LU Xiaolei, NI Bin
    Journal of Chinese Information Processing. 2021, 35(11): 70-79.
    An accurate automatic patent classifier is crucial to patent inventors and patent examiners, with potential applications in intellectual property protection, patent management, and patent information retrieval. This paper presents BERT-CNN, a hierarchical patent classifier based on a pre-trained language model, trained on the national patent application documents collected from the State Information Center, China. The experimental results show that the proposed method achieves 84.3% accuracy, much better than the two baseline methods, Convolutional Neural Networks and Recurrent Neural Networks. In addition, this article also discusses the differences between hierarchical and flat strategies in multi-layer text classification.
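As an illustration of the BERT+CNN combination (not the published BERT-CNN), a single-level classifier could be sketched as below; a hierarchical strategy would chain one such classifier per level of the patent taxonomy. The kernel sizes and label space are assumptions.

```python
# Hedged sketch of a BERT+CNN text classifier: BERT token states feed parallel
# 1-D convolutions of several widths, which are max-pooled and concatenated.
import torch
import torch.nn as nn
from transformers import BertModel

class BertCNNClassifier(nn.Module):
    def __init__(self, bert_name="bert-base-chinese", n_filters=128,
                 kernel_sizes=(2, 3, 4), n_classes=8):   # e.g., 8 top-level IPC sections
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        d = self.bert.config.hidden_size
        self.convs = nn.ModuleList([nn.Conv1d(d, n_filters, k) for k in kernel_sizes])
        self.out = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, input_ids, attention_mask):
        x = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        x = x.transpose(1, 2)                       # (batch, hidden, seq_len) for Conv1d
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.out(torch.cat(feats, dim=1))    # logits over patent classes
```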
  • Information Extraction and Text Mining
    LI Chunnan, WANG Lei, SUN Yuanyuan, LIN Hongfei
    Journal of Chinese Information Processing. 2021, 35(8): 73-81.
    Legal named entity recognition (LNER) is a fundamental task for the field of smart judiciary. This paper presents a new definition of LNER and a corpus of letters of proposal for prosecution named LegalCorpus. This paper proposes a novel BERT-based NER model for legal texts, named BERT-ON-LSTM-CRF (Bidirectional Encoder Representations from Transformers-Ordered Neuron-Long Short Term Memory Network-Conditional Random Field). The proposed model utilizes BERT to dynamically obtain semantic vectors according to the context of words. Then the ON-LSTM is adopted to extract text features by modeling the input sequence and its hierarchy. Finally, the text features are decoded by a CRF to obtain the optimal tag sequence. Experiments show that the proposed model achieves an F1 value of 86.09%, a 7.8% improvement over the best baseline, Lattice-LSTM.
  • Information Extraction and Text Mining
    ZHANG Shiqi, MA Jin, ZHOU Xiabing, JIA Hao, CHEN Wenliang, ZHANG Min
    Journal of Chinese Information Processing. 2022, 36(1): 56-64.
    Attribute extraction is a key step in constructing a knowledge graph. In this paper, the task of attribute extraction is converted into a sequence labeling problem. Due to the lack of labeled data for product attribute extraction, we use distant supervision to automatically label texts from multiple e-commerce sources. In order to accurately evaluate the performance of the system, we construct a manually annotated test set, and finally obtain a new multi-domain dataset for product attribute extraction. Based on the newly constructed dataset, we carry out in-domain and cross-domain attribute extraction experiments with a variety of pre-trained language models. The experimental results show that pre-trained language models can considerably improve extraction performance, with ELECTRA performing best in the in-domain experiments and BERT performing best in the cross-domain experiments. We also find that adding a small amount of target-domain annotated data can effectively improve the performance of cross-domain attribute extraction and enhance the domain adaptability of the model.
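The distant-supervision step can be illustrated with a toy labeling function that projects a dictionary of known attribute values onto product text as character-level BIO tags; the dictionary, attribute names and tag scheme below are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch of distant supervision for attribute extraction: spans that match a
# known attribute-value dictionary are tagged with BIO labels.
def distant_label(text, attr_values):
    """attr_values: dict mapping attribute name -> set of known value strings."""
    tags = ["O"] * len(text)
    for attr, values in attr_values.items():
        for value in sorted(values, key=len, reverse=True):  # prefer longer matches
            start = text.find(value)
            while start != -1:
                if all(t == "O" for t in tags[start:start + len(value)]):
                    tags[start] = f"B-{attr}"
                    for i in range(start + 1, start + len(value)):
                        tags[i] = f"I-{attr}"
                start = text.find(value, start + 1)
    return list(zip(text, tags))

# Example: labels "纯棉" as a material value and "修身" as a fit value in a product title.
print(distant_label("纯棉修身衬衫", {"材质": {"纯棉"}, "版型": {"修身"}}))
```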
  • Survey
    ZHANG Xuan, LI Baobin
    Journal of Chinese Information Processing. 2022, 36(12): 1-15.
    Social bots in microblog platforms significantly impact information dissemination and public opinion stances. This paper reviews recent research on social bot account detection in microblogs, especially Twitter and Weibo. The popular methods for data acquisition and feature extraction are reviewed. Various bot detection algorithms are summarized and evaluated, including approaches based on statistical methods, classical machine learning methods, and deep learning methods. Finally, suggestions for future research are given.
  • Sentiment Analysis and Social Computing
    CHENG Yan, SUN Huan, CHEN Haomai, LI Meng, CAI Yingying, CAI Zhuang
    Journal of Chinese Information Processing. 2021, 35(5): 118-129.
    Text sentiment analysis is an important branch of natural language processing. This paper proposes a text sentiment analysis capsule model that combines convolutional neural networks and bidirectional GRU networks. Firstly, multi-head attention is used to learn the dependencies between words and capture the emotional words in the text. Then, the convolutional neural network and bidirectional GRU network are used to extract emotional features of different granularities from the text. After feature fusion, global average pooling is used to obtain the instance feature representation of the text, and the attention mechanism is combined to generate feature vectors for each emotion category to construct emotion capsules. Finally, the emotion category of the text is determined by the capsule attributes. Tested on the MR, IMDB, SST-5 and Tan Songbo hotel review datasets, the proposed model achieves better classification performance than the baseline models.
  • Information Extraction and Text Mining
    YE Junyao, SU Jingyong, WANG Yaowei, XU Yong
    Journal of Chinese Information Processing. 2022, 36(12): 133-138,148.
    Spaced repetition is a common mnemonic method in language learning. In order to set proper review intervals for a desired memory effect, it is necessary to predict learners' long-term memory. This paper proposes a long-term memory prediction model for language learning based on LSTM. We extract statistical features and sequence features from the memory behavior history of learners, use an LSTM to model the memory behavior sequence, and apply a half-life regression model to predict the probability that foreign language learners recall words. Evaluated on 9 billion real memory behavior records, the sequence features prove more informative than the statistical features. Compared with state-of-the-art models, the error of the proposed LSTM-HLR model is reduced significantly, by 50%.
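The half-life regression component referenced above is commonly formulated as exponential decay of recall probability with a feature-predicted half-life; the toy sketch below uses illustrative feature names and weights, not the paper's learned model.

```python
# Sketch of the half-life regression idea used in spaced repetition: recall probability
# decays exponentially with the time since the last review, and the half-life is
# predicted from learner/word features.
def recall_probability(delta_days, half_life_days):
    """p = 2 ** (-delta / h): probability the learner still recalls the word."""
    return 2.0 ** (-delta_days / half_life_days)

def predicted_half_life(weights, features):
    """HLR-style half-life: h = 2 ** (w . x), with features such as past recall counts."""
    return 2.0 ** sum(weights[k] * v for k, v in features.items())

h = predicted_half_life({"bias": 1.0, "n_correct": 0.5, "n_wrong": -0.3},
                        {"bias": 1.0, "n_correct": 4, "n_wrong": 1})
print(recall_probability(delta_days=7, half_life_days=h))
```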
  • Survey
    DONG Qingxiu, SUI Zhifang, ZHAN Weidong, CHANG Baobao
    Journal of Chinese Information Processing. 2021, 35(6): 1-15.
    Evaluation in natural language processing drives and promotes research on models and methods. In recent years, new evaluation datasets and evaluation tasks have been continuously proposed. At the same time, a series of problems exposed by such evaluations seems to restrict the progress of natural language processing technology. Starting from the concept, composition, development and significance of natural language processing evaluation, this article classifies and summarizes the tasks and characteristics of mainstream natural language processing evaluations, and then reveals their problems and possible causes. In parallel to human language ability evaluation standards, this paper puts forward the concept of human-like machine language ability evaluation, and proposes a series of basic principles and implementation ideas for it from three aspects: reliability, difficulty and validity.
  • Information Extraction and Text Mining
    WU Ting, KONG Fang
    Journal of Chinese Information Processing. 2021, 35(10): 73-80.
    As a subtask of information extraction, relation extraction aims to extract structured knowledge from unstructured text, which is very important for downstream tasks such as automatic question answering and knowledge graph construction. Focusing on document-level relation extraction, this paper proposes a graph attention convolution model to deal with the long-distance dependency issue. The model uses a multi-head attention mechanism to construct a dynamic topological graph from coreference, syntax and other information, and then uses the graph convolution model and the dynamic graph to capture global and local dependency information between entities. Experiments on the DocRED corpus and a self-expanded ACE 2005 corpus confirm improvements in F1 value of 2.03 and 3.93, respectively.
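One reading of the "multi-head attention builds a dynamic graph, then graph convolution propagates information" idea is sketched below; the single layer, dimensions and residual connection are assumptions rather than the paper's exact architecture.

```python
# Hedged sketch: multi-head attention induces a soft adjacency matrix over
# mention/entity nodes, and a graph-convolution step propagates features along it.
import torch
import torch.nn as nn

class AttentionGuidedGCNLayer(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gcn = nn.Linear(dim, dim)

    def forward(self, nodes):
        # nodes: (batch, n_nodes, dim) representations of mentions/entities.
        _, adj = self.attn(nodes, nodes, nodes, need_weights=True)  # soft adjacency (batch, n, n)
        neighbourhood = torch.bmm(adj, nodes)                       # propagate along soft edges
        return torch.relu(self.gcn(neighbourhood)) + nodes          # residual connection
```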
  • Information Extraction and Text Mining
    CHONG Weifeng, LI Hui, LI Xue, REN He, YU Dong, WANG Yehan
    Journal of Chinese Information Processing. 2021, 35(5): 86-90.
    The normalization of clinical terms is to assign to any term written by a doctor a corresponding term in the standard term set. This task is challenged by the large number of highly similar standard terms, as well as insufficient training data (a zero-shot or few-shot setting). This paper designs and implements a clinical term normalization system based on BERT entailment ranking. The system consists of four modules: data preprocessing, BERT entailment scoring, BERT quantity prediction, and logistic regression-based reranking. Tested in CHIP 2019 Track 1 "Evaluation of Chinese Clinical Term Normalization", it achieves a final accuracy of 0.94825, the top score in this campaign.
  • Sentiment Analysis and Social Computing
    GE Xiaoyi, ZHANG Mingshu, WEI Bin, LIU Jia
    Journal of Chinese Information Processing. 2022, 36(9): 129-138.
    The identification of rumors is of substantial research value. Current deep learning-based solutions bring excellent results, but fail to capture the relationship between emotion and semantics or to provide emotional explanations. This paper proposes a dual emotion-aware method for interpretable rumor detection, aiming to provide a reasonable explanation from an emotional point of view via co-attention weights. Compared with baseline models, accuracy is increased by 3.9%, 3.3% and 4.4% on the public Twitter15, Twitter16, and Weibo20 datasets, respectively.
  • Information Retrieval and Question Answering
    WU Kun, ZHOU Xiabing, LI Zhenghua, LIANG Xingwei, CHEN Wenliang
    Journal of Chinese Information Processing. 2021, 35(9): 113-122.
    Path selection, a key step in the knowledge base question answering (KBQA) task, relies on the semantic similarity between a question and candidate paths. To deal with the massive unseen relations in the test set, a method based on dynamic sampling of negative examples is proposed to enrich the relations in the training set. In the prediction phase, two path pruning methods, a classification method and a beam search method, are compared to tackle the explosion of candidate paths. On the CCKS 2019-CKBQA evaluation dataset containing both simple and complex questions, the proposed method achieves an average F1 value of 0.694 with a single-model system, and 0.731 with an ensemble system.
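A minimal sketch of the dynamic negative sampling idea follows, assuming negative candidate paths are re-drawn every epoch so the model keeps seeing relations beyond those attached to the training questions; the sample size and toy candidates are illustrative.

```python
# Hedged sketch: build (question, path, label) training pairs with freshly sampled negatives.
import random

def sample_training_pairs(question, gold_path, candidate_paths, n_negatives=5):
    non_gold = [p for p in candidate_paths if p != gold_path]
    negatives = random.sample(non_gold, k=min(n_negatives, len(non_gold)))
    pairs = [(question, gold_path, 1)]
    pairs += [(question, path, 0) for path in negatives]
    return pairs

# Called once per epoch so the negative set changes dynamically across epochs.
print(sample_training_pairs("谁执导了《霸王别姬》?", "导演",
                            ["导演", "主演", "编剧", "上映时间", "制片地区"]))
```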
  • Information Extraction and Text Mining
    LI Wenxin, ZHANG Kunli, GUAN Tongfeng, ZHANG Huan, ZHU Tiantian, CHANG Baobao, CHEN Qingcai
    Journal of Chinese Information Processing. 2022, 36(4): 66-72.
    The 6th China Conference on Health Information Processing (CHIP 2020) organized six evaluation tasks in Chinese medical information processing, among which Task 1 was named entity recognition for Chinese medical text. The main purpose of this task is to automatically identify medical named entities in medical texts. A total of 253 teams signed up for the evaluation, and 37 teams finally submitted 80 sets of results. The micro-average F1 was used as the final evaluation criterion, and the highest value among the submitted results reached 68.35%.
  • Knowledge Representation and Acquisition
    LIU Mengdi, LIANG Xun
    .
    This paper proposes a method for calculating the similarity of character glyphs, aiming to solve the problem of identifying similar Chinese characters. First, we construct a radical knowledge graph according to the characters' composition. Then, based on the knowledge graph and structural features, the paper proposes 2CTransE to learn the semantic representations of entities. Finally, we calculate character similarity from the entity vectors. Results show that the method is effective in identifying similar characters, and the constructed component library can be used in subsequent related research. We also propose a method for character similarity calculation in Japanese and other similar languages.
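The final step, comparing learned entity vectors, can be illustrated with cosine similarity over toy embeddings; the vectors below are invented for illustration and do not come from 2CTransE.

```python
# Hedged sketch: glyph similarity read off as cosine similarity between character
# entity vectors (toy values standing in for learned embeddings).
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

embeddings = {
    "未": np.array([0.80, 0.10, 0.30]),
    "末": np.array([0.79, 0.12, 0.31]),
    "本": np.array([0.20, 0.90, 0.40]),
}
print(cosine_similarity(embeddings["未"], embeddings["末"]))  # high: similar glyphs
print(cosine_similarity(embeddings["未"], embeddings["本"]))  # lower: different structure
```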
  • Information Extraction and Text Mining
    GAN Zifa, ZAN Hongying, GUAN Tongfeng, LI Wenxin, ZHANG Huan, ZHU Tiantian, SUI Zhifang, CHEN Qingcai
    Journal of Chinese Information Processing. 2022, 36(6): 101-108.
    The 6th China Conference on Health Information Processing (CHIP 2020) organized six shared tasks in Chinese medical information processing. The second task was entity and relation extraction, which automatically extracts triples consisting of entities and relations from Chinese medical texts. A total of 174 teams signed up for the task, and eventually 17 teams submitted 42 system runs. According to micro-average F1, the key evaluation criterion of the task, the top performance among the submitted results reaches 0.6486.
  • Language Resources Construction
    XING Fugui, ZHU Tingshao
    Journal of Chinese Information Processing. 2021, 35(7): 41-46.
    Classical Chinese word segmentation is an important step in analyzing ancient documents. In this paper, we first collect an unstructured classical Chinese online corpus and accumulate a basic dictionary. Then candidate new words are discovered by a multi-feature fusion strategy, including mutual information, information entropy, and position word probability. Finally, the resulting CCIDict of 349,740 words is applied with forward maximum matching to segment classical Chinese texts, achieving a 14% improvement in F-value compared with the open-source tool Jiayan.
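The final segmentation step, forward maximum matching against the constructed dictionary, can be sketched as follows; the toy dictionary stands in for the 349,740-word CCIDict.

```python
# Hedged sketch of forward maximum matching: greedily take the longest dictionary word
# starting at the current position, falling back to a single character.
def forward_maximum_matching(text, dictionary, max_len=4):
    words, i = [], 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in dictionary:
                words.append(candidate)
                i += length
                break
    return words

print(forward_maximum_matching("学而时习之", {"学而", "时习", "习之"}))
# -> ['学而', '时习', '之']
```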
  • Natural Language Understanding and Generation
    LIU Xikai, LIN Hongfei, XU Bo, YANG Liang, REN Yuqi
    Journal of Chinese Information Processing. 2021, 35(7): 134-142.
    Response generation is an important component of dialogue systems. To better combine retrieval-based models with generation-based models, this paper proposes a generative response model with a retrieved-response fusion mechanism. The model uses a bidirectional LSTM to encode the retrieved response, and then a Long Short-Term Memory network with a fusion mechanism (fusion-LSTM) is proposed. This mechanism fuses the retrieval results with the dialogue text within the model to better integrate the retrieved information into the generative model. Experimental results show that this method is superior to the baseline methods in both automatic and human evaluation.
  • Language Analysis and Calculation
    HE Xiaowen, LUO Zhiyong, HU Zijuan, WANG Ruiqi
    Journal of Chinese Information Processing. 2021, 35(5): 1-8.
    The grammatical structure of natural language text consists of words, phrases, sentences, clause complexes and texts. This paper re-examines the definition of sentences in linguistics and the segmentation of sentences in natural language processing, and puts forward the task of Chinese sentence segmentation. Based on the theory of clause complex, the sentence is defined as the smallest topic self-sufficient punctuation sequence, and a sentence boundary recognition model based on BERT is designed and implemented. The experimental results show that the accuracy and F1 value of the model are 88.37% and 83.73%, respectively, much better than that of mechanical segmentation according to punctuation marks.
  • Survey
    SHI Yuefeng, WANG Yi, ZHANG Yue
    Journal of Chinese Information Processing. 2022, 36(7): 1-12,23.
    The goal of argument mining is to automatically identify and extract argumentative structure from natural language. Understanding argumentative structure and its reasoning helps obtain the reasons behind claims, and argument mining has therefore gained great attention from researchers. Deep learning based methods have been widely applied to these tasks owing to their capability to encode complex structures and represent latent features. This paper systematically reviews deep learning methods in argument mining, including fundamental concepts, frameworks and datasets. It also introduces how deep learning based methods are applied in different argument mining tasks. Finally, this paper summarizes the weaknesses of current argument mining methods and anticipates future research trends.