Shoulin Yin

Workplace: Shenyang Normal University, China



Research Interests: Data Mining, Image Processing, Network Security


Dr. Shoulin Yin 

Deputy Director of Intelligent Information Processing Laboratory,
Shenyang Normal University, China

Dr. Shoulin Yin is an Associate Professor in the Software College, Shenyang Normal University, and an associate supervisor of master's students. His research interests include software engineering, AI, network security, image processing, remote sensing, pattern recognition, cloud and edge computing, and blockchain. He is the deputy director of the Intelligent Information Processing Laboratory. His research has been funded by the NSFC and the Provincial Natural Science Foundation. He is a Senior Member of CCF and a member of IEEE and ACM.

Author Articles
A Multi-channel Character Relationship Classification Model Based on Attention Mechanism

By Yuhao Zhao, Hang Li, Shoulin Yin

DOI:, Pub. Date: 8 Feb. 2022

Relation classification is an important semantic processing task in natural language processing. Deep learning methods that combine Convolutional Neural Networks and Recurrent Neural Networks with an attention mechanism have long been the mainstream, state-of-the-art approach. The LSTM model, a recurrent neural network that dynamically controls its weights through gating, can better extract contextual state information from time series and effectively mitigates the long-term dependency problem of recurrent neural networks. The pre-trained model BERT has also achieved excellent results in many natural language processing tasks. This paper proposes a multi-channel character relationship classification model that combines BERT and LSTM through an attention mechanism: the semantic information of the two models is fused by attention to produce the final classification result. Applying this model to a text, we can extract and classify the relationships between characters and thereby obtain the character relationships the text contains. Experimental results show that the proposed method outperforms previous deep learning models on the SemEval-2010 Task 8 dataset and the COAE-2016-Task3 dataset.
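The attention-based fusion of the two channels described in the abstract can be sketched with toy NumPy arrays. This is a minimal illustration, not the authors' implementation: the hidden dimension, the attention weight vector `w`, and the sentence representations are hypothetical, and the 19-class output only reflects the standard SemEval-2010 Task 8 label space.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(h_bert, h_lstm, w):
    """Fuse the BERT-channel and LSTM-channel sentence vectors by attention."""
    channels = np.stack([h_bert, h_lstm])   # (2, d): one row per channel
    scores = channels @ w                   # (2,): scalar score per channel
    alpha = softmax(scores)                 # attention weights, sum to 1
    return alpha @ channels                 # (d,): weighted sum of channels

rng = np.random.default_rng(0)
d = 8                                        # hypothetical hidden size
h_bert = rng.normal(size=d)                  # sentence vector from the BERT channel
h_lstm = rng.normal(size=d)                  # sentence vector from the LSTM channel
w = rng.normal(size=d)                       # learned attention parameter (assumed)
fused = attention_fuse(h_bert, h_lstm, w)
logits = fused @ rng.normal(size=(d, 19))    # 19 relation labels in SemEval-2010 Task 8
probs = softmax(logits)                      # class distribution for the sentence
```

Because the attention weights sum to one, the fused vector is a convex combination of the two channel representations, so neither channel is discarded outright.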

Action Recognition Based on the Modified Two-stream CNN

By Dan Zheng, Hang Li, Shoulin Yin

DOI:, Pub. Date: 8 Dec. 2020

Human action recognition is an important research direction in computer vision. Its main task is to imitate the human brain in analyzing and recognizing human actions in video, covering individual actions, interactions between people, and interactions with the external environment. A space-time two-channel neural network represents video features from both spatial and temporal perspectives and, compared with other neural network models, offers clear advantages for human action recognition. This paper proposes an action recognition method based on an improved space-time two-channel convolutional neural network. First, the video is divided into several equal-length, non-overlapping segments; from each segment, a frame image representing the video's static appearance and a stacked optical-flow image representing its motion are randomly sampled. These two kinds of images are then fed into the spatial-domain and temporal-domain convolutional neural networks, respectively, for feature extraction, and the segment-level features of each video are fused within each channel to obtain the category prediction features of the spatial and temporal domains. Finally, the action recognition result is obtained by integrating the predictive features of the two channels. In the experiments, various data augmentation methods and transfer learning schemes are examined to address the over-fitting caused by insufficient training samples, and the effects of the number of segments, the pre-training network, the segment-feature fusion scheme, and the two-channel integration strategy on recognition performance are analyzed. The experimental results show that the proposed model learns human action features in complex videos and recognizes actions more accurately.
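The segment-averaging and two-channel integration steps described above can be sketched with toy NumPy arrays. This is an illustrative sketch under stated assumptions, not the paper's implementation: the segment count, the 101-class label space, and the channel weights are all hypothetical, and the integration shown is a simple weighted average of softmax scores.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def two_stream_predict(spatial_scores, temporal_scores,
                       w_spatial=1.0, w_temporal=1.5):
    """Predict a video's action class from per-segment scores of both channels.

    spatial_scores, temporal_scores: (num_segments, num_classes) arrays of
    class scores from the spatial and temporal CNNs, one row per segment.
    """
    # segmental fusion: average each channel's scores over the video's segments
    spatial = spatial_scores.mean(axis=0)
    temporal = temporal_scores.mean(axis=0)
    # two-channel integration: weighted average of the channels' softmax scores
    fused = w_spatial * softmax(spatial) + w_temporal * softmax(temporal)
    return int(fused.argmax())

rng = np.random.default_rng(1)
num_segments, num_classes = 3, 101          # hypothetical sizes (e.g. UCF101-like)
sp = rng.normal(size=(num_segments, num_classes))  # spatial-channel segment scores
tp = rng.normal(size=(num_segments, num_classes))  # temporal-channel segment scores
pred = two_stream_predict(sp, tp)
```

Weighting the temporal channel more heavily than the spatial one is a common choice in two-stream models, since motion cues often discriminate actions better than single-frame appearance; here the weights are illustrative only.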

Other Articles