TENIO, JONALD R., BAJE, RONAN C., CALADIAO, JEROME Z., ATIENZA, FRANCIS ARLANDO L.
DOI: https://doi.org/

Text summarization is heavily used by LLMs and generative AI models to condense texts into concise summaries for efficient and faster parsing of prompts. One technique commonly used for this is dynamic programming, which has two approaches: (1) memoization and (2) tabulation. The existing approaches work well in various use cases but have three problems: (1) hard-coded context for splitting sentences [1,5], (2) a non-adaptive base case [2,3,4], and (3) memory intensiveness []. This study introduces a third approach, "adaptive tabulation," which addresses these problems through: (1) segmentation and chunking, (2) sentence scoring and ranking, and (3) selective caching. Adaptive tabulation produced summaries of consistent quality across short stories and articles, with a difference of 26.8% in consistency and 3.5% in retention of semantic meaning, while also improving memory usage by an average of 32.8%. Future research could focus on broadening the file formats and literature types that adaptive tabulation can process.
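To make the tabulation and selective-caching ideas concrete, the sketch below shows one possible realization in Python: sentences are segmented and scored by word frequency, then selected with a bottom-up, knapsack-style table that caches only the previous row. The function names, the frequency-based scoring, and the row-only caching are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: frequency-based sentence scoring plus a tabulated,
# knapsack-style selector. "Selective caching" is interpreted here as keeping
# only the previous DP row; this is an assumption, not the paper's method.
import re
from collections import Counter

def split_sentences(text):
    # Naive segmentation on sentence-ending punctuation.
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

def score_sentences(sentences):
    # Score each sentence by the summed corpus frequency of its words.
    freq = Counter(w.lower() for s in sentences for w in re.findall(r'\w+', s))
    return [sum(freq[w.lower()] for w in re.findall(r'\w+', s)) for s in sentences]

def select_sentences(sentences, scores, budget_words):
    # Bottom-up 0/1 knapsack: maximize total score within a word budget.
    costs = [len(s.split()) for s in sentences]
    n = len(sentences)
    prev = [0] * (budget_words + 1)            # only the previous row is cached
    keep = [[False] * (budget_words + 1) for _ in range(n)]
    for i in range(n):
        curr = prev[:]                         # start from the cached row
        for b in range(costs[i], budget_words + 1):
            candidate = prev[b - costs[i]] + scores[i]
            if candidate > curr[b]:
                curr[b] = candidate
                keep[i][b] = True
        prev = curr                            # older rows are discarded
    # Backtrack over the decision table to recover the chosen sentences.
    chosen, b = [], budget_words
    for i in range(n - 1, -1, -1):
        if keep[i][b]:
            chosen.append(i)
            b -= costs[i]
    return [sentences[i] for i in sorted(chosen)]

text = ("Dynamic programming stores subproblem results. "
        "Tabulation fills a table bottom-up. "
        "Memoization caches results top-down. "
        "Summarization selects the most informative sentences.")
sents = split_sentences(text)
print(" ".join(select_sentences(sents, score_sentences(sents), budget_words=15)))
```

Keeping only the previous row reduces the table's memory footprint from O(n x budget) to O(budget), which is one way a tabulation-based summarizer can trade a full cache for a selective one.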
