Workplace: Department of Electrical and Electronic Engineering, University of Asia Pacific, Dhaka 1205, Bangladesh
E-mail: tishna@uap-bd.edu
ORCID: https://orcid.org/0000-0002-6072-6386
Research Interests: Applied Machine Learning, Security and Privacy, Communication Engineering
Biography
Tishna Sabrina received the B.Sc.Eng. (Hons.) degree in Electrical and Electronic Engineering from Bangladesh University of Engineering and Technology (BUET), Bangladesh, in 2005, and the PhD degree from Monash University, Australia, in 2014. She is an Assistant Professor in the Department of Electrical and Electronic Engineering, University of Asia Pacific (UAP). Her major research interests are in the fields of applied machine learning, security and privacy, and communication engineering.
By Sayem Shahad Salman Sayeed, Md Naimur Rahman Khan, Sifat Shuvo Biswas, Tishna Sabrina
DOI: https://doi.org/10.5815/ijmecs.2026.01.07, Pub. Date: 8 Feb. 2026
Advanced Large Language Models (LLMs) have recently been adopted widely in educational settings, where they serve as content creators, teaching assistants, and interactive conversational agents. However, the responses generated by these models are often monotonous, verbose, and ambiguous, which can hinder their effectiveness in educational contexts. To address these shortcomings, we introduce EduAgent, a multimodal chatbot framework specifically designed to enhance interactive learning in Electrical and Electronic Engineering (EEE) education. EduAgent responds to electronics-related queries with pedagogically enhanced answers, complemented by relevant images and detailed explanations. It is designed to provide complete, concise, step-by-step responses, ensuring that foundational knowledge is clearly established before moving to more advanced material. To develop EduAgent, we constructed a dataset comprising 596 four-turn conversations and a collection of 118 images covering a wide range of EEE concepts. The conversation dataset was used to fine-tune open-source LLMs and to facilitate in-context learning, while the images and their corresponding explanations were integrated into a knowledge base for efficient retrieval. Finally, we evaluated multiple text generation and image retrieval methods using both automatic metrics and human assessments, demonstrating the effectiveness and engagement of our approach.