The field of artificial intelligence faces a persistent challenge: understanding how complex models, such as XAI800T, arrive at their conclusions. Often likened to black boxes, these systems can produce seemingly intelligent results without revealing their inner workings. This lack of transparency raises concerns about reliability and restricts