%0 Journal Article
%A ZHANG Shiqi
%A MA Jin
%A ZHOU Xiabing
%A JIA Hao
%A CHEN Wenliang
%A ZHANG Min
%T Pre-trained Language Models for Product Attribute Extraction
%D 2022
%R
%J Journal of Chinese Information Processing
%P 56-64
%V 36
%N 1
%X Attribute extraction is a key step in constructing a knowledge graph. In this paper, the attribute extraction task is cast as a sequence labeling problem. Given the lack of labeled data for product attribute extraction, we use distant supervision to automatically label multiple source texts related to e-commerce. To evaluate system performance accurately, we construct a manually annotated test set, yielding a new multi-domain data set for product attribute extraction. On this new data set, we carry out in-domain and cross-domain attribute extraction experiments with a variety of pre-trained language models. The experimental results show that pre-trained language models can effectively improve extraction performance: ELECTRA performs best in the in-domain experiments, and BERT performs best in the cross-domain experiments. We also find that adding a small amount of annotated target-domain data effectively improves cross-domain attribute extraction and enhances the domain adaptability of the model.
%U http://jcip.cipsc.org.cn/EN/abstract/article_3247.shtml