Most Read

  • Survey
    CEN Keting, SHEN Huawei, CAO Qi, CHENG Xueqi
    Journal of Chinese Information Processing. 2023, 37(5): 1-21.
    As a self-supervised deep learning paradigm, contrastive learning has achieved remarkable results in computer vision and natural language processing. Inspired by its success in these fields, researchers have tried to extend it to graph data, promoting the development of graph contrastive learning. To provide a comprehensive overview of graph contrastive learning, this paper summarizes recent works under a unified framework to highlight the development trends. It also catalogues the popular datasets and evaluation metrics for graph contrastive learning, and concludes with possible future directions of the field.
  • Survey
    AN Zhenwei, LAI Yuxuan, FENG Yansong
    Journal of Chinese Information Processing. 2022, 36(8): 1-11.
    In recent years, legal artificial intelligence has attracted increasing attention for its efficiency and convenience. Among others, legal text is the most common manifestation in legal practice, thus, using natural language understanding method to automatically process legal text is an important direction for both academia and industry. In this paper, we provide a gentle survey to summarize recent advances on natural language understanding for legal texts. We first introduce the popular task setups, including legal information extraction, legal case retrieval, legal question answering, legal text summarization, and legal judgement prediction. We further discuss the main challenges from three perspectives: understanding the difference of languages between legal domain and open domain, understanding the rich argumentative texts in legal documents, and incorporating legal knowledge into existing natural language processing models.
  • Survey
    Li Yunhan, Shi Yunmei, Li Ning, Tian Ying'ai
    Journal of Chinese Information Processing. 2022, 36(9): 1-18,27.
    Text correction, an important research field in Natural Language Processing (NLP), is of great application value in fields such as news, publication, and text input. This paper provides a systematic overview of automatic error correction technology for Chinese texts. Errors in Chinese texts are divided into spelling errors, grammatical errors, and semantic errors, and the correction methods for these three types are reviewed. Moreover, datasets and evaluation methods for automatic error correction of Chinese texts are summarized. In the end, prospects for the automatic error correction of Chinese texts are raised.
  • Language Resources Construction
    WANG Chengwen, DONG Qingxiu, SUI Zhifang, ZHAN Weidong, CHANG Baobao, WANG Haitao
    Journal of Chinese Information Processing. 2023, 37(2): 26-40.
    Public NLP datasets form the bedrock of NLP evaluation tasks, and the quality of such datasets has a fundamental impact on the development of evaluation tasks and the application of evaluation metrics. In this paper, we analyze and summarize eight types of problems relating to publicly available mainstream Natural Language Processing (NLP) datasets. Inspired by the quality assessment of testing in the education community, we propose a series of evaluation metrics and evaluation methods combining computational and operational approaches, with the aim of providing a reference for the construction, selection, and utilization of natural language processing datasets.
  • Survey
    LUO Wen, WANG Houfeng
    Journal of Chinese Information Processing. 2024, 38(1): 1-23.
    Large Language Models (LLMs) have demonstrated exceptional performance in various Natural Language Processing (NLP) tasks, providing a potential path toward general language intelligence. However, their expanding application necessitates more accurate and comprehensive evaluations. Existing evaluation benchmarks and methods still have many shortcomings, such as unreasonable evaluation tasks and uninterpretable evaluation results. With increasing attention to robustness, fairness, and so on, the demand for holistic, interpretable evaluations is pressing. This paper delves into the current landscape and challenges of LLM evaluation, summarizes existing evaluation paradigms, analyzes their limitations, introduces pertinent evaluation metrics and methodologies for LLMs, and discusses ongoing advancements and future directions in the evaluation of LLMs.
  • Survey
    ZHANG Xuan, LI Baobin
    Journal of Chinese Information Processing. 2022, 36(12): 1-15.
    Social bots in microblog platforms significantly impact information dissemination and public opinion stance. This paper reviews recent research on social bot account detection in microblogs, especially Twitter and Weibo. The popular methods for data acquisition and feature extraction are reviewed. Various bot detection algorithms are summarized and evaluated, including approaches based on statistical methods, classical machine learning methods, and deep learning methods. Finally, some directions for future research are suggested.
  • Information Extraction and Text Mining
    YE Junyao, SU Jingyong, WANG Yaowei, XU Yong
    Journal of Chinese Information Processing. 2022, 36(12): 133-138,148.
    Spaced repetition is a common mnemonic method in language learning. In order to decide proper review intervals for a desired memory effect, it is necessary to predict the learner's long-term memory. This paper proposes a long-term memory prediction model for language learning via LSTM. We extract statistical features and sequence features from the memory behavior history of learners. The LSTM is used to learn the memory behavior sequence, and the half-life regression model is applied to predict the probability that a foreign language learner recalls a word. Evaluated on 9 billion real memory behavior records, sequence features prove more informative than statistical features. Compared with the state-of-the-art models, the error of the proposed LSTM-HLR model is significantly reduced by 50%.
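The half-life regression (HLR) component mentioned in the abstract models recall as an exponential forgetting curve; a minimal sketch of that formula (the function name is ours, and the paper's LSTM additionally predicts the half-life from the learner's behavior sequence):

```python
def recall_probability(delta_days: float, half_life_days: float) -> float:
    # Half-life regression: recall probability decays exponentially with
    # the time since the last review, halving every `half_life_days`.
    return 2.0 ** (-delta_days / half_life_days)

# A word last reviewed 7 days ago whose estimated half-life is 7 days
# has a 50% predicted recall probability.
p = recall_probability(7.0, 7.0)  # 0.5
```

Scheduling a review when this probability drops to a target level (e.g. 0.5) yields the desired spaced-repetition intervals.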
  • Sentiment Analysis and Social Computing
    GE Xiaoyi, ZHANG Mingshu, WEI Bin, LIU Jia
    Journal of Chinese Information Processing. 2022, 36(9): 129-138.
    The identification of rumors is of substantial research value. Current deep learning-based solutions bring excellent results, but fail to capture the relationship between emotion and semantics or to provide emotional explanations. This paper proposes a dual emotion-aware method for interpretable rumor detection, aiming to provide a reasonable explanation from an emotional point of view via co-attention weights. Compared with baseline models, the accuracy is increased by 3.9%, 3.3%, and 4.4% on the public Twitter15, Twitter16, and Weibo20 datasets, respectively.
  • Survey
    SHI Yuefeng, WANG Yi, ZHANG Yue
    Journal of Chinese Information Processing. 2022, 36(7): 1-12,23.
    The goal of the argument mining task is to automatically identify and extract argumentative structure from natural language. Understanding the argumentative structure and its reasoning contributes to obtaining the reasons behind claims, so argument mining has gained great attention from researchers. Deep learning based methods have been generally applied to these tasks owing to their encoding capabilities for complex structures and representation capabilities for latent features. This paper systematically reviews deep learning methods in the argument mining area, including fundamental concepts, frameworks, and datasets. It also introduces how deep learning based methods are applied to different argument mining tasks. Finally, this paper summarizes the weaknesses of current argument mining methods and anticipates future research trends.
  • Information Extraction and Text Mining
    ZHANG Zhaowu, XU Bin, GAO Kening, WANG Tongqing, ZHANG Qiaoqiao
    Journal of Chinese Information Processing. 2022, 36(7): 114-122.
    In the field of education, named entity recognition is widely used in automatic question generation and intelligent question answering. Traditional Chinese named entity recognition models need to change the network structure to incorporate character and word information, which increases the complexity of the network structure. Moreover, data in the education field demands very accurate identification of entity boundaries, while traditional methods cannot incorporate location information and identify entity boundaries poorly. In response to these problems, this article uses an improved vector representation layer that integrates word, character, and location information, which better delimits entity boundaries and improves the accuracy of entity recognition. BiGRU and CRF are then used as the sequence modeling layer and the annotation layer, respectively, to perform Chinese named entity recognition. Experiments on the Resume dataset and an education dataset (Edu) achieve F1 values of 95.20% and 95.08%, respectively. The experimental results show that the proposed method improves both the training speed of the model and the accuracy of entity recognition compared with the baseline models.
  • Ethnic Language Processing and Cross Language Processing
    LIU Rui, KANG Shiyin, GAO Guanglai, LI Jingdong, BAO Feilong
    Journal of Chinese Information Processing. 2022, 36(7): 86-97.
    Aiming at real-time and high-fidelity Mongolian Text-to-Speech (TTS) generation, a FastSpeech2-based non-autoregressive Mongolian TTS system (MonTTS for short) is proposed. To improve the overall performance in terms of prosody naturalness and fidelity, MonTTS adopts three novel mechanisms: 1) a Mongolian phoneme sequence is used to represent Mongolian pronunciation; 2) a phoneme-level variance adaptor is employed to learn long-term prosody information; and 3) two duration aligners, based on Mongolian speech recognition and Mongolian autoregressive TTS models respectively, are used to provide the duration supervision signal. Besides, we build a large-scale Mongolian TTS corpus, named MonSpeech. The experimental results show that MonTTS significantly outperforms the state-of-the-art Tacotron-based Mongolian TTS and standard FastSpeech2 baseline systems, with a real-time factor (RTF) of 3.63×10⁻³ and a Mean Opinion Score (MOS) of 4.53 (see https://github.com/ttslr/MonTTS).
  • Language Resources Construction
    XIE Chenhui, HU Zhengsheng, YANG Lin'er, LIAO Tianxin, YANG Erhong
    Journal of Chinese Information Processing. 2023, 37(2): 15-25.
    Sentence pattern structure treebank is developed according to the theory of sentence-based grammar, which is of great significance to Chinese teaching. To further expand such treebank from Chinese as second language textbooks and Chinese textbooks to other domains, we propose a rule-based method to convert a phrase structure treebank named Penn Chinese Treebank (CTB) into a sentence pattern structure treebank so as to increase the size of the existing treebank. The experimental results show that our proposed method is effective.
  • Sentiment Analysis and Social Computing
    ZHU Qinglin, LIANG Bin, XU Ruifeng, LIU Yuhan, CHEN Yi, MAO Ruibin
    Journal of Chinese Information Processing. 2022, 36(8): 109-117.
    To address entity-level sentiment analysis of financial texts, this paper builds a multi-million-scale corpus for sentiment analysis of financial domain entities and labels more than five thousand financial domain sentiment words as a financial domain sentiment dictionary. We further propose an attention-based recurrent network combined with a financial lexicon, called FinLexNet. The FinLexNet model uses one LSTM to extract category-level information based on the financial domain sentiment dictionary and another LSTM to extract semantic information at the word level. In addition, in order to pay more attention to financial sentiment words, an attention mechanism based on the financial domain sentiment dictionary is proposed. Finally, experiments on the dataset we constructed show that our model achieves better performance than the baseline models.
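At its core, such a lexicon-guided attention mechanism is ordinary dot-product attention in which a lexicon-derived representation supplies the query; a generic sketch under that assumption (not FinLexNet's actual code):

```python
import math

def attention_pool(query, keys, values):
    # Score each key against the query, softmax-normalize the scores,
    # and return the attention-weighted sum of the value vectors.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)
    exp_s = [math.exp(s - m) for s in scores]
    z = sum(exp_s)
    weights = [e / z for e in exp_s]
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]
```

With a sentiment-lexicon query, tokens whose hidden states align with lexicon entries receive higher weights in the pooled representation.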
  • Survey
    DENG Hancheng, XIONG Deyi
    Journal of Chinese Information Processing. 2022, 36(11): 20-37.
    Machine translation quality estimation (QE) refers to estimating the quality of the output of a machine translation system without human reference translations. It is of great value to the research and application of machine translation. In this survey, we first introduce the background and significance of machine translation quality estimation. Then we introduce in detail the specific task objectives and evaluation metrics of word-level, sentence-level, and document-level QE. We further summarize the development of QE methods into three main stages: methods based on feature engineering and machine learning, methods based on deep learning, and methods integrated with pre-trained models. Representative research works in each stage are introduced, and the current research status and shortcomings are analyzed. Finally, we outline prospects for the future research and development of QE.
  • Survey
    CHEN Jinpeng, LI Haiyang, ZHANG Fan, LI Huan, WEI Kaimin
    Journal of Chinese Information Processing. 2023, 37(3): 1-17,26.
    In recent years, session-based recommendation methods have attracted extensive attention from academics. With the continuous development of deep learning techniques, different model structures have been used in session-based recommendation methods, such as Recurrent Neural Networks, Attention Mechanisms, and Graph Neural Networks. This paper conducts a detailed analysis, classification, and comparison of these models, and expounds on the target problems and shortcomings of these methods. In particular, this paper first compares session-based recommendation methods with traditional recommendation methods, and expounds their main advantages and disadvantages through investigation. Subsequently, this paper details how complex data and information are modeled in session-based recommendation models, as well as the problems that these models can solve. Finally, this paper discusses and identifies the challenges and potential research directions in session-based recommendation.
  • Language Resources Construction
    ZHANG Kunli, REN Xiaohui, ZHUANG Lei, ZAN Hongying, ZHANG Weicong, SUI Zhifang
    Journal of Chinese Information Processing. 2022, 36(10): 45-53.
    A medicine knowledge base with a complete classification system and comprehensive drug information can provide a basis and support for clinical decision-making and rational drug use. Based on multiple domestic medical resources as references and data sources, this paper establishes the knowledge description system and classification system of the medicine knowledge base, standardizes the classification of drugs, forms detailed knowledge descriptions, and constructs a multi-source Chinese Medicine Knowledge Base (CMKB). The classification of CMKB includes 27 first-level categories and 119 second-level categories, and describes 14,141 drugs at multiple levels such as drug indications, dosage, and administration. Furthermore, the BiLSTM-CRF and T-BiLSTM-CRF models are used to extract disease entities from unstructured descriptions, forming structured extraction of drug attributes and establishing knowledge associations between drug entities and automatically extracted disease entities. The constructed CMKB can be connected with Chinese medical knowledge graphs to expand drug information, and can provide the knowledge basis for intelligent diagnosis and medical question answering.
  • Ethnic Language Processing and Cross Language Processing
    AN Bo, LONG Congjun
    Journal of Chinese Information Processing. 2022, 36(12): 85-93.
    Tibetan text classification is a fundamental task in Tibetan natural language processing. The current mainstream text classification paradigm is a large-scale pre-trained model plus fine-tuning; however, Tibetan lacks open-source large-scale corpora and pre-trained language models, so this paradigm has not been verified on the Tibetan text classification task. This paper crawls a large Tibetan text dataset to address these problems and trains a Tibetan pre-trained language model (BERT-base-Tibetan) on it. Experimental results show that the pre-trained language model can significantly improve the performance of Tibetan text classification (F1 value increases by 9.3% on average), verifying the value of pre-trained language models in Tibetan text classification tasks.
  • Natural Language Understanding and Generation
    MA Tianyu, QIN Jun, LIU Jing, TIE Jun, HOU Qi
    Journal of Chinese Information Processing. 2022, 36(8): 127-134.
    Intent classification and slot filling are two basic sub-tasks of spoken language understanding. A joint model of intent classification and slot filling based on BERT is proposed: through an association network, the two tasks establish direct contact and share information. BERT is introduced into the model to enhance the semantic representation of word vectors, which effectively alleviates the issue of small training data. Experiments on the ATIS and Snips datasets show that the proposed model can significantly improve the accuracy of intent classification and the F1 value of slot filling.
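Joint training of the two sub-tasks typically means optimizing a single summed objective so that one backward pass updates the shared encoder for both; a minimal sketch with hypothetical log-probability inputs (not the paper's exact formulation):

```python
def joint_loss(intent_logprobs, intent_gold, slot_logprobs, slot_golds):
    # Negative log-likelihood of the gold intent label, plus the summed
    # NLL of the gold slot tag at every token position.
    loss = -intent_logprobs[intent_gold]
    for token_logprobs, gold in zip(slot_logprobs, slot_golds):
        loss += -token_logprobs[gold]
    return loss
```

Because both terms backpropagate through the same BERT encoder, improvements in one task's representation can benefit the other.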
  • Information Extraction and Text Mining
    ZHANG Hongkuan, SONG Hui, XU Bo, WANG Shuyi
    Journal of Chinese Information Processing. 2022, 36(10): 97-106.
    Document-level event extraction aims at discovering events with their arguments and roles from texts. This paper proposes an end-to-end model for domain-specific document-level event extraction based on BERT. We feed the embeddings of event types and entity nodes into the subsequent layer for event argument and role identification, representing the relations between events, arguments, and roles to improve the accuracy of classifying multi-event arguments. Using the title and the embedding of the event quintuple, we identify principal and subordinate events and fuse elements across multiple events. Experimental results show that our model outperforms the baselines.
  • Survey
    XUE Siyuan, ZHOU Jianshe, REN Fuji
    Journal of Chinese Information Processing. 2023, 37(2): 1-14.
    This paper summarizes research on automated essay scoring, including the development of automated essay scoring systems. It also examines the tasks, public datasets, and popular metrics of automated essay scoring. The main techniques and models for automated essay scoring are reviewed, as well as the challenges for both native and non-native Chinese speakers. Finally, the prospects for future automated essay scoring are discussed.
  • Information Extraction and Text Mining
    SUN Xianghui, MIAO Deqiang, DOU Chenxiao, YUAN Long, MA Baochang, DENG Yong, ZHANG Lulu, LI Xiangang
    Journal of Chinese Information Processing. 2023, 37(2): 119-128.
    "Intent Recognition" and "Slot Filling" are two core tasks in intelligent human-computer interaction, which have received extensive attention from academia and industry. Most state-of-the-art models perform far worse on few-shot learning tasks than on many-shot learning tasks. In this paper, we propose a novel joint model based on semi-supervised and transfer learning for intent recognition and slot filling. Semi-supervised learning is used to identify few-shot intents, requiring no additional labelled data. Transfer learning is used to exploit prior knowledge learned from large samples to acquire a slot-filling model for small samples. The proposed method won first place in the CCIR Cup track of the 2021 CCF Big Data & Computing Intelligence Contest (CCF-BDCI), jointly held by the CCF-BDCI Organizing Committee and the Chinese Information Processing Society of China (CIPS).
  • Sentiment Analysis and Social Computing
    WANG Jinghao, LIU Zhen, LIU Tingting, WANG Yuanyi, CHAI Yanjie
    Journal of Chinese Information Processing. 2022, 36(10): 145-154.
    Existing methods for sentiment analysis in social media usually deal with single-modal data, failing to capture the relationship between multimodal information. This paper proposes to treat the hierarchical structural relations between texts and images in social media as complementary, and designs a multi-level feature fusion attention network to capture both the ‘image-text’ and the ‘text-image’ relations to perceive the user’s sentiments in social media. Experimental results on the Yelp and MultiZOL datasets show that this method can effectively improve sentiment classification accuracy for multimodal data.
  • Sentiment Analysis and Social Computing
    FU Xiangling, YAN Chenwei, ZHAO Pengya, SONG Meiqi, WU Weiqiang
    Journal of Chinese Information Processing. 2022, 36(9): 120-128,138.
    Fraud detection in consumer finance is an important issue in both the academic and industrial communities. With the emergence of group fraud, classical machine learning methods do not work well due to the small number of fraudulent users and insufficient feature data. Since group fraudulent users are closely related, this paper constructs a user-relation network from the phone call data between users. User features in the graph are extracted through network statistical indicators and the DeepWalk algorithm, making full use of the topological structure and neighborhood information. This information, together with the user’s inherent characteristics, is input to a LightGBM model. The experimental results show that with the graph representation learning method, the AUC is improved by 7.3% compared with using only inherent features.
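DeepWalk's first step is to turn the user-relation graph into a "corpus" of truncated random walks, which a skip-gram model then embeds into node features; a minimal sketch of that corpus step (the adjacency format and function name are ours):

```python
import random

def deepwalk_corpus(adj, walk_len, walks_per_node, seed=0):
    # Generate truncated random walks starting from every node; each walk
    # is a "sentence" of node IDs for the downstream skip-gram step.
    rng = random.Random(seed)
    walks = []
    for start in sorted(adj):
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len and adj[walk[-1]]:
                walk.append(rng.choice(adj[walk[-1]]))
            walks.append(walk)
    return walks
```

The resulting embeddings, concatenated with network statistics and inherent user features, would form the input rows for the LightGBM classifier.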
  • Survey
    FAN Zipeng, ZHANG Peng, GAO Hui
    Journal of Chinese Information Processing. 2023, 37(1): 1-15.
    Quantum natural language processing, as a cross-disciplinary field of quantum mechanics and natural language processing, has gradually attracted the attention of the community, and a large number of quantum natural language processing models and algorithms have been proposed. As a review of these works, this paper briefly summarizes the problems of current classical algorithms and the two research ideas for combining quantum mechanics with natural language processing. It also explains the role of quantum mechanics in natural language processing from three aspects: semantic space, semantic modeling, and semantic interaction. By analyzing the differences in storage resources and computational complexity between quantum and classical computing platforms, it reveals the necessity of deploying quantum natural language processing algorithms on quantum computing platforms. Finally, current quantum natural language processing algorithms are enumerated, and future research directions in this field are discussed.
  • Survey
    HUANG Zhenya, LIU Qi, CHEN Enhong, LIN Xin, HE Liyang, LIU Jiayu, WANG Shijin
    Journal of Chinese Information Processing. 2022, 36(10): 1-16.
    One of the important research directions on the integration of artificial intelligence into pedagogy is analyzing the meanings of educational questions and simulating how humans solve problems. In recent years, a large number of educational question resources have been collected, which provides the data support of the related research. Leveraging the big data analysis and natural language processing related techniques, researchers propose many specific text analysis methods for educational questions, which are of great significance to explore the cognitive abilities of how human master knowledge. In this paper, we summarize several representative topics, including question quality analysis, machine reading comprehension, math problem solving, and automated essay scoring. Moreover, we introduce the relevant public datasets and open-source toolkits. Finally, we conclude by anticipating several future directions.
  • Sentiment Analysis and Social Computing
    LEI Pengbin, QIN Bin, WANG Zhili, WU Yufan, LIANG Siyi, CHEN Yu
    Journal of Chinese Information Processing. 2022, 36(8): 101-108.
    This paper proposes a new method for text sentiment classification based on pre-trained models. A BiLSTM network is applied to dynamically adjust the output weight of each Transformer layer of the pre-trained model, and the layered text representation vectors are further filtered using networks such as BiLSTM and BiGRU. Using this model, we achieved third place in the Netizen Emotion Recognition Track of the CCF 2020 Science and Technology for Epidemic·Big Data Charity Challenge. The F1 value on the final test set is 0.745 37, which is 0.000 1 lower than the first-place model while using 67% fewer parameters.
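Mixing the per-layer Transformer outputs amounts to a softmax-normalized weighted sum of layer vectors; a static sketch of that mixing (the paper's BiLSTM adjusts the weights dynamically, which is omitted here, and the function name is ours):

```python
import math

def mix_layers(layer_vecs, raw_weights):
    # Normalize the per-layer scalar weights with a softmax, then take
    # the weighted sum of the layer representation vectors.
    m = max(raw_weights)
    exp_w = [math.exp(w - m) for w in raw_weights]
    z = sum(exp_w)
    norm = [w / z for w in exp_w]
    dim = len(layer_vecs[0])
    return [sum(n * vec[d] for n, vec in zip(norm, layer_vecs))
            for d in range(dim)]
```

Lower layers of a pre-trained model tend to carry surface features and upper layers semantic ones, so learning the mixing weights lets the classifier pick the most useful combination.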
  • Natural Language Understanding and Generation
    ZHU Shuai, CHEN Jianwen, ZHU Ming
    Journal of Chinese Information Processing. 2022, 36(9): 159-168.
    The main factor restricting the performance of multi-turn dialogue is the insufficient use of context information. Currently, one important solution to this problem is to rewrite the user’s input based on the preceding dialogue, the core of which is pronoun resolution and ellipsis recovery. We propose SPDR (Span Prediction for Dialogue Rewrite) based on BERT, which performs multi-turn dialogue rewriting by predicting the start and end positions of the span to fill before each token in the user’s input. A new metric is proposed to evaluate the quality of rewriting results. Compared with the traditional pointer-generator network, the inference speed of our model is improved by about 100% without degrading performance, and our model based on RoBERTa-wwm outperforms the pointer-generator network on five metrics.
  • Language Analysis and Calculation
    XIONG Kai, DU Li, DING Xiao, LIU Ting, QIN Bing, FU Bo
    Journal of Chinese Information Processing. 2022, 36(12): 27-35.
    Although pre-trained language models achieve high performance on a large number of natural language processing tasks, the knowledge contained in some pre-trained language models is insufficient to support more efficient textual inference. Focusing on using rich external knowledge to enhance the pre-trained language model for textual inference, we propose a framework that integrates knowledge graphs and graph structure information into the pre-trained language model. Experiments on two subtasks of textual inference indicate that our framework outperforms a series of baseline methods.
  • Language Analysis and Calculation
    ZHANG Zhonglin, YU Wei, YAN Guanghui, YUAN Chenyu
    Journal of Chinese Information Processing. 2022, 36(8): 12-19,28.
    At present, most existing Chinese word segmentation models are based on recurrent neural networks, which capture the overall features of a sequence while ignoring local features. This paper combines the attention mechanism, convolutional neural networks, and conditional random fields, and proposes the Attention Convolutional Neural Network CRF (ACNNC) model. A self-attention layer replaces the recurrent neural network to capture the global features of the sequence, while the convolutional neural network captures its local location features. The features are combined in a fusion layer and then input into the conditional random field for decoding. Experimental results on BACKOFF 2005 show that the proposed model achieves F1 values of 96.2%, 96.4%, 96.1%, and 95.8% on the PKU, MSR, CITYU, and AS test sets, respectively.
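The CNN branch's contribution is local n-gram features over the character sequence; a minimal sketch of a valid 1-D convolution producing such features (a single scalar kernel channel, for illustration only):

```python
def conv1d_valid(seq, kernel):
    # Each output position is a weighted sum over a local window of the
    # input sequence, i.e. an n-gram feature of width len(kernel).
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]
```

In ACNNC such local features are concatenated with the self-attention layer's global features in the fusion layer before CRF decoding.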
  • Natural Language Understanding and Generation
    LI Mengmeng, JIANG Aiwen, LONG Yuzhong, NING Ming, PENG Hu, WANG Mingwen
    Journal of Chinese Information Processing. 2022, 36(9): 139-148.
    Visual storytelling is a cross-modal task derived from image captioning, with substantial academic significance and wide application in fields such as automatic generation of travel notes and education. Current methods are challenged by insufficient description of fine-grained image contents, low correlation between images and the generated story, and lack of richness in language. This paper proposes a visual storytelling algorithm based on fine-grained visual features and knowledge graphs. To fully mine and enhance the representations of image content, we design a fine-grained visual feature generator and a semantic concept generator. The first applies graph convolution over the scene graph to embed entity relationships into the fine-grained visual information; the second integrates an external knowledge graph to enrich high-level semantic associations between adjacent images. As a result, comprehensive and detailed representations of the image sequence are realized. Compared with several state-of-the-art methods on the VIST dataset, the proposed algorithm shows clear advantages in story-image correlation, story logic, and word diversity, as measured by Distinct-N and TTR.
  • Machine Translation
    CHEN Linqing, LI Junhui, GONG Zhengxian
    Journal of Chinese Information Processing. 2022, 36(9): 67-75.
    How to effectively use textual context information is a challenge in the field of document-level neural machine translation (NMT). This paper proposes to use a hierarchical global context derived from the entire document to improve document-level NMT models. The proposed model obtains the dependencies between the words in the current sentence and all other sentences, as well as those between all words. The dependencies of different levels are then combined into a global context containing hierarchical contextual information. In order to take advantage of parallel sentences in training, this paper employs a two-step training strategy: a sentence-level model is first trained by the Transformer, and then fine-tuned on a document-level corpus. Experiments on several benchmark datasets show that the proposed model significantly improves translation quality compared with other strong baseline models.
  • Information Extraction and Text Mining
    YIN Yajue, GAO Xiaoya, WANG Jingjing, LI Shoushan, XU Shaoyang, ZENG Yuhao
    Journal of Chinese Information Processing. 2022, 36(7): 106-113.
    The task of patent matching aims to determine the similarity between two patent texts. Different from free texts, a patent text includes a variety of text blocks, such as the title, abstract, and statement. In order to make full use of this multi-text information, this paper proposes a Multi-View Attentive Network (MVAN) learning model based on the attention mechanism to capture matching information from different views of a patent. First, the BERT model is employed to extract the single-view matching features (title, abstract, or statement) of a patent pair. Then, the attention mechanism is adopted to integrate these features into multi-view matching features. Finally, a multi-view learning mechanism is applied to jointly learn the single-view and multi-view matching features. Experimental results show that the proposed MVAN outperforms other baseline methods on the patent matching task.
  • Information Extraction and Text Mining
    DENG Qiuyan, XIE Songxian, ZENG Daojian, ZHENG Fei, CHENG Chen, PENG Lihong
    Journal of Chinese Information Processing. 2022, 36(9): 93-101.
    There is a large amount of text data in the field of public security, and extracting case-related information from texts of different sources and formats is an important issue for public security information processing. This paper proposes an event extraction method that combines trigger-free event detection with event argument role classification based on reading comprehension: event detection is first performed without triggers, and based on its result, event argument roles are classified through reading comprehension. Experiments show that the proposed method achieves effective event extraction performance in the field of public security.
  • Language Resources Construction
    CHANG Hongyang, ZAN Hongying, MA Yutuan, ZHANG Kunli
    Journal of Chinese Information Processing. 2022, 36(8): 37-45.
    This paper discusses the labeling of named entities and entity relations in Chinese electronic medical records of stroke disease, and proposes a labeling system and norms for entities and entity relations suitable for the content and characteristics of such records. Guided by this labeling system and norms, after multiple rounds of labeling and correction, we completed the annotation of entities and relations in more than 1.5 million words of stroke-disease electronic medical records (the Stroke Electronic Medical Record entity and entity relation Corpus, SEMRC). The constructed corpus contains 10,594 named entities and 14,597 entity relations. The labeling consistency of named entities reached 0.8516, and that of entity relations reached 0.9416.
  • Information Extraction and Text Mining
    MA Shikun, TENG Chong, LI Fei, JI Donghong
    . 2022, 36(8): 92-100.
    Text classification is a fundamental task in the natural language processing community. However, current text classification is usually domain-independent and suffers from insufficient annotated training data. We propose a solution that leverages similar information from data in different domains to address the limited labeled training data issue. Under the multi-task learning framework proposed in this paper, we extract domain-invariant and domain-specific features using a shared encoder and multiple private encoders, respectively. Latent information from different domains can thus be captured, which benefits multi-domain text classification. Besides, we apply an orthogonal projection operation to make the shared and private feature spaces inherently disjoint, thereby refining the shared features, and then design a gate mechanism to fuse the shared and private features. Experiments on Amazon review and FDU-MTL show that the average accuracies of the proposed model on the two datasets are 86.04% and 89.2%, respectively, significantly better than multiple baseline models.
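    The orthogonality constraint between shared and private features can be sketched as a penalty term; this per-sample squared-dot-product form is a common simplification of the full ||S^T P||_F^2 constraint and is an assumption here, not necessarily the paper's exact loss.

```python
def orthogonality_penalty(shared, private):
    """Sum of squared dot products between each sample's shared and
    private feature vectors; minimizing it pushes the two
    representations toward orthogonality (disjoint feature spaces)."""
    return sum(sum(s_i * p_i for s_i, p_i in zip(s, p)) ** 2
               for s, p in zip(shared, private))
```

    The penalty is zero exactly when every sample's shared and private vectors are orthogonal, which is the intended "disjoint spaces" behavior.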
  • Information Retrieval
    LIU Shudong, ZHANG Ke, CHEN Xu
    . 2022, 36(9): 102-111.
    At present, news recommendation has gradually become one of the core techniques in the field and attracts much attention both at home and abroad. Focusing on the unbalanced data issue, this paper proposes a news recommendation model that captures long- and short-term user preferences based on users' multi-dimensional interests. We divide a user's long-term interests into several dimensions and utilize the attention mechanism to distinguish the importance of different dimensions. In addition, we combine a CNN and an attention network to learn news representations, and use a GRU to capture users' short-term preferences from their recent reading history. Experiments on a real-life news dataset show that the proposed model outperforms state-of-the-art news recommendation methods in terms of AUC, MRR and NDCG.
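    The combination of attention-weighted long-term interest dimensions with a short-term preference vector might look like the sketch below; the balancing factor `alpha` and the simple convex combination are illustrative assumptions, not the paper's stated design.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def user_vector(interest_dims, dim_scores, short_term, alpha=0.5):
    """Attention-weight the long-term interest dimensions, then blend
    the result with the short-term preference vector (e.g. a GRU state).
    `alpha` balances long- vs short-term and is purely illustrative."""
    w = softmax(dim_scores)
    long_term = [sum(wi * d[i] for wi, d in zip(w, interest_dims))
                 for i in range(len(short_term))]
    return [alpha * l + (1 - alpha) * s
            for l, s in zip(long_term, short_term)]
```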
  • Ethnic Language Processing and Cross Language Processing
    WU Huijuan, FAN Daoerji, BAI Fengshan, Tengda, PAN Yuecai
    . 2022, 36(10): 81-87.
    One major feature of Mongolian is the seamless connection of characters within a word, so a Mongolian word has multiple character division methods. A multi-scale Mongolian offline handwriting recognition method is proposed, in which one image of a handwritten Mongolian word is mapped to multiple target sequences to train the model. This paper distinguishes three candidate character division methods: the "Twelve Prefix" code, the presentation form code, and the grapheme code. The multi-scale model processes the sequence of handwritten image features with a Bidirectional Long Short-Term Memory network, whose outputs are fed into Connectionist Temporal Classification (CTC) layers to map the image to the "Twelve Prefix" code sequence, the presentation form code sequence, and the grapheme code sequence, respectively. The sum of the three CTC losses is used as the total loss function of the model. Experiments show that the model achieves the best performance on the public Mongolian offline handwriting dataset MHW, with 66.22% and 63.97% accuracy on test sets I and II, respectively.
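    The summed multi-scale objective can be sketched generically: one CTC loss per target encoding of the same word image, added together. The scale names and the `ctc_loss` interface are placeholders, not the paper's exact implementation.

```python
def multi_scale_loss(ctc_loss, logits_by_scale, targets_by_scale):
    """Total training loss of a multi-scale model: one CTC loss per
    target encoding of the same word image (e.g. "Twelve Prefix" code,
    presentation-form code, grapheme code), summed.

    ctc_loss(logits, targets) -> float is any CTC loss implementation.
    """
    assert logits_by_scale.keys() == targets_by_scale.keys()
    return sum(ctc_loss(logits_by_scale[k], targets_by_scale[k])
               for k in logits_by_scale)
```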
  • Information Extraction and Text Mining
    FANG Zhengyun, YANG Zheng, LI Limin, LI Tianjiao
    . 2022, 36(7): 123-131.
    Aiming at structured scientific research project texts, this paper proposes a two-view cross attention (TVCA) method and a multi-view cross attention (MVCA) text classification method based on pre-trained networks such as BERT. The MVCA method targets one main chapter (the project abstract) and two further chapters of the project text (research content; research purpose and meaning), extracting feature vectors containing richer semantic information through a cross-attention mechanism to further improve the performance of the classification model. Applied to the classification of scientific publications and research project texts of China Southern Power Grid, the MVCA method is significantly better than existing methods in terms of classification effect and convergence speed.
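    Cross attention between two chapters can be sketched as plain dot-product attention: each token vector from one view (e.g. the abstract) attends over key/value vectors from another (e.g. research content). No scaling or multi-head structure is shown; this is a minimal sketch, not the paper's full model.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(queries, keys, values):
    """For each query vector, compute dot-product attention over the
    key vectors and return the attention-weighted sum of the values."""
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[i] for wi, v in zip(w, values))
                    for i in range(len(values[0]))])
    return out
```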
  • Natural Language Understanding and Generation
    ZENG Biqing, PEI Fenghua, XU Mayi, DING Meirong
    . 2022, 36(8): 154-162,174.
    Paragraph-level question generation aims to generate one or more related questions from a given paragraph. Current sequence-to-sequence neural network approaches fail to filter redundant information or to focus on key sentences. To solve this issue, this paper proposes a dual-attention model for paragraph-level question generation. The model first applies the attention mechanism to both the paragraph and the sentence where the answer is located. Then it uses a gating mechanism to dynamically assign weights and merge them into context information. Finally, it improves the pointer-generator network to combine the context vector and the attention distribution to generate questions. Experimental results show that this model outperforms existing models on the SQuAD dataset.
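    The gated merging of paragraph-level and sentence-level context might look like the following sketch; the per-dimension sigmoid gate standing in for a learned gating network is an assumption for illustration.

```python
import math

def sigmoid(x):
    """Logistic function, squashing a real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def gated_merge(para_ctx, sent_ctx, gate_logits):
    """Dynamically weight paragraph-level and sentence-level context
    vectors with a per-dimension gate; `gate_logits` stands in for the
    output of a small learned gating network."""
    return [sigmoid(g) * p + (1.0 - sigmoid(g)) * s
            for g, p, s in zip(gate_logits, para_ctx, sent_ctx)]
```

    A zero gate logit gives an even 50/50 blend of the two context vectors, so the gate smoothly interpolates between paragraph and sentence information.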
  • Knowledge Representation and Acquisition
    XU Yao, HE Shizhu, LIU Kang, ZHANG Chi, JIAO Fei, ZHAO Jun
    . 2022, 36(10): 54-62.
    In recent years, embedding models for deterministic knowledge graphs have made great progress in tasks such as knowledge graph completion. However, how to design and train embedding models for uncertain knowledge graphs remains an important challenge. Different from deterministic knowledge graphs, each fact triple in an uncertain knowledge graph has a corresponding confidence, so an uncertain knowledge graph embedding model needs to accurately estimate the confidence of each triple. Existing uncertain knowledge graph embedding models are relatively simple in structure, can only deal with symmetric relations, and cannot handle the false-negative problem well. To solve these problems, we first propose a unified framework for training uncertain knowledge graph embedding models, which uses a multi-model semi-supervised learning method. To address the excessive noise in semi-supervised samples, we use Monte Carlo Dropout to estimate the model's uncertainty on its outputs, and filter noisy semi-supervised samples according to this uncertainty. In addition, to better represent the uncertainty of entities and relations and to handle more complex relations, we propose UBetaE, an uncertain knowledge graph embedding model based on the Beta distribution, which represents both entities and relations as sets of mutually independent Beta distributions. Experimental results on public datasets show that the combination of the proposed semi-supervised learning method and the UBetaE model not only greatly alleviates the false-negative problem, but also significantly outperforms current SOTA uncertain knowledge graph embedding models such as UKGE on multiple tasks.
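    The Beta-distribution representation can be illustrated with a toy scoring sketch: each element of a triple is a list of independent Beta(alpha, beta) parameters, and a confidence is derived from the per-dimension Beta means. The averaging scheme below is purely illustrative and is not the paper's actual scoring function.

```python
def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution: alpha / (alpha + beta)."""
    return alpha / (alpha + beta)

def triple_confidence(head, rel, tail):
    """Toy confidence for a triple whose head, relation, and tail are
    each a list of independent Beta(alpha, beta) parameter pairs:
    average the per-dimension Beta means across the three elements.
    This only illustrates the representation, not UBetaE's scoring."""
    scores = [(beta_mean(*h) + beta_mean(*r) + beta_mean(*t)) / 3.0
              for h, r, t in zip(head, rel, tail)]
    return sum(scores) / len(scores)
```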