International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187, Issue 78
Published: February 2026
Authors: Ziqiao Ao, Juhi Singh, Sebastian Antinome
DOI: 10.5120/ijca2026926329
Ziqiao Ao, Juhi Singh, Sebastian Antinome. Optimizing Prompt Refinement: Algorithmic Strategies for Large Language Model-Based Text Classification. International Journal of Computer Applications 187, 78 (February 2026), 1-10. DOI=10.5120/ijca2026926329
@article{10.5120/ijca2026926329,
  author    = {Ziqiao Ao and Juhi Singh and Sebastian Antinome},
  title     = {Optimizing Prompt Refinement: Algorithmic Strategies for Large Language Model-Based Text Classification},
  journal   = {International Journal of Computer Applications},
  year      = {2026},
  volume    = {187},
  number    = {78},
  pages     = {1--10},
  doi       = {10.5120/ijca2026926329},
  publisher = {Foundation of Computer Science (FCS), NY, USA}
}
%0 Journal Article
%D 2026
%A Ziqiao Ao
%A Juhi Singh
%A Sebastian Antinome
%T Optimizing Prompt Refinement: Algorithmic Strategies for Large Language Model-Based Text Classification
%J International Journal of Computer Applications
%V 187
%N 78
%P 1-10
%R 10.5120/ijca2026926329
%I Foundation of Computer Science (FCS), NY, USA
The performance of Large Language Models (LLMs) for text classification depends on how well prompts are designed and refined. This paper presents a structured framework for improving prompt refinement strategies for LLM-based classification, with a focus on question-type classification for Microsoft technical certification exams. Several prompt optimization techniques were evaluated, including Chain of Thought (CoT), Self-Consistency, Tree of Thought (ToT), and different configurations of Retrieval-Augmented Generation (RAG). A modular prompt structure was also developed to support category-specific evaluation and improve decision consistency. Experiments were conducted in three stages: (1) tuning and comparing prompt refinement techniques, (2) optimizing RAG retrieval parameters, and (3) applying a modular rule-based approach to enhance classification reliability. Experimental results indicate that the proposed framework enhances classification performance, achieving an absolute improvement of approximately 40 percentage points in F1 score compared to baseline prompting methods. The methodology can be adapted to educational assessment, automated content analysis, and other text classification applications.
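One of the techniques the abstract names, Self-Consistency, can be illustrated with a minimal sketch: sample the model several times on the same classification prompt and take the majority-vote label. The `sample_fn` callable and the example labels below are hypothetical stand-ins, not the paper's actual model, prompts, or question-type taxonomy.

```python
from collections import Counter

def self_consistency_classify(prompt, sample_fn, n_samples=5):
    """Self-consistency for classification: sample several model answers
    for the same prompt and return the majority-vote label.

    sample_fn is a stand-in for an LLM call; any callable mapping a
    prompt string to a predicted label string will do.
    """
    votes = [sample_fn(prompt) for _ in range(n_samples)]
    label, count = Counter(votes).most_common(1)[0]
    return label, count / n_samples  # label plus agreement ratio

# Toy stand-in "model": a fixed answer sequence here; in practice each
# call would be a fresh stochastic sample from the LLM.
answers = iter(["multiple-choice", "case-study", "multiple-choice",
                "multiple-choice", "drag-and-drop"])
label, agreement = self_consistency_classify(
    "Classify this exam question: ...", lambda p: next(answers))
print(label, agreement)  # multiple-choice 0.6
```

The agreement ratio gives a rough confidence signal: low agreement across samples flags questions whose type is ambiguous and may need a refined prompt.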