Rahman Sharar

Workplace: Department of Computer Science, Faculty of Science and Technology, American International University-Bangladesh, Dhaka, Bangladesh

E-mail: rahmansharar@gmail.com


Research Interests: Natural Language Processing, Data Compression


Rahman Sharar (corresponding author) was born in Dhaka, Bangladesh in 1997. He received a BSc degree in Computer Science & Engineering from the American International University-Bangladesh (AIUB), majoring in software engineering, in 2022.

He served as an intern in the J2SE and C++ laboratory classes at AIUB from May 2022 to August 2022. His research areas comprise data science and natural language processing, with a keen interest in machine learning.

Author Articles
MediBERT: A Medical Chatbot Built Using KeyBERT, BioBERT and GPT-2

By Sabbir Hossain, Rahman Sharar, Md. Ibrahim Bahadur, Abu Sufian, Rashidul Hasan Nabil

DOI: https://doi.org/10.5815/ijisa.2023.04.05, Pub. Date: 8 Aug. 2023

The emergence of chatbots over the last 50 years has been the primary consequence of the need for a virtual aid. Unlike their biological anthropomorphic counterparts, fellow Homo sapiens, chatbots can present themselves instantaneously at the user's need and convenience. Be it for something as benign as the need for a friend to talk to, or a more dire case such as medical assistance, chatbots are unequivocally ubiquitous in their utility. This paper aims to develop one such chatbot that is capable of not only analyzing human text (and speech in the near future), but also refining its ability to assist users medically by accumulating data from relevant datasets. Although Recurrent Neural Networks (RNNs) are often used to develop chatbots, the persistent vanishing gradient issue brought about by backpropagation, coupled with the cumbersome process of parsing each word sequentially, has led to the increased usage of Transformer Neural Networks (TNNs) instead, which parse entire sentences at once while simultaneously giving context to them via embeddings, leading to increased parallelization. Two variants of the BERT (Bidirectional Encoder Representations from Transformers) TNN, namely KeyBERT and BioBERT, are used for tagging the keywords in each sentence and for contextual vectorization into Q/A pairs for matrix multiplication, respectively. A final layer of GPT-2 (Generative Pre-trained Transformer 2) is applied to refine the output of BioBERT into a human-readable form. The outcome of such an attempt could potentially lessen the need for trips to the nearest physician, along with the time and financial resources required to do so.
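The three-stage pipeline the abstract describes can be illustrated with a deliberately simplified, stdlib-only sketch. Plain Python stand-ins replace the actual models: frequency-based keyword ranking stands in for KeyBERT, bag-of-words cosine similarity over stored Q/A pairs stands in for BioBERT's contextual vectors, and a fixed response template stands in for GPT-2's rephrasing layer. All function names, the stopword list, and the sample Q/A data are illustrative assumptions, not taken from the paper.

```python
import math
from collections import Counter

# Illustrative Q/A knowledge base; the paper's system would draw these
# pairs from medical datasets.
QA_PAIRS = [
    ("What are the symptoms of the flu?",
     "Common flu symptoms include fever, cough, and fatigue."),
    ("How can I treat a headache?",
     "Rest, hydration, and over-the-counter pain relievers often help."),
]

STOPWORDS = {"the", "what", "are", "how", "can", "and", "should"}

def tokenize(text):
    """Lowercase the text and strip basic punctuation from each word."""
    return [w.strip("?.,!").lower() for w in text.split()]

def extract_keywords(text, top_n=3):
    """Stand-in for KeyBERT: rank content words by frequency."""
    counts = Counter(w for w in tokenize(text)
                     if len(w) >= 3 and w not in STOPWORDS)
    return [w for w, _ in counts.most_common(top_n)]

def vectorize(text):
    """Stand-in for BioBERT embeddings: a bag-of-words frequency vector."""
    return Counter(tokenize(text))

def cosine(u, v):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def answer(query):
    """Match the query against stored Q/A pairs and phrase the best answer."""
    qv = vectorize(query)
    best_q, best_a = max(QA_PAIRS, key=lambda p: cosine(qv, vectorize(p[0])))
    # Stand-in for the GPT-2 layer: wrap the matched answer in a template.
    return f"Based on your question, here is what I found: {best_a}"

query = "I think I have the flu, what symptoms should I expect?"
print(extract_keywords(query))
print(answer(query))
```

The real system replaces each stand-in with a learned model, but the overall data flow (keyword tagging, vector matching against Q/A pairs, then a generative rewrite) follows the same shape.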

Other Articles