NEC improves reliability of large language models to accelerate business applications

NEC Laboratories Europe, the European research and development center for NEC Corporation, is improving NEC cotomi generative AI services for Japan by introducing LLM Explainer, an NEC technology that helps detect and explain hallucinations in output generated by commonly available large language models (LLMs).

LLMs are a type of generative AI that understands and generates natural language and other content. While increasingly adopted by industries and end users to enhance knowledge-based activities, LLMs frequently produce incorrect output – commonly called hallucinations. This limits their use in situations that require accurate information or involve risk, for example, in business-critical operations or managing critical infrastructure.

Dr. Carolin Lawrence, Manager and Chief Research Scientist of NEC Laboratories Europe, explains, “Currently, organizations using LLMs must manually check that output is correct, which can be time-consuming and costly. By incorporating LLM Explainer, NEC cotomi generative AI allows users to compare LLM output with relevant source documentation. This lets users efficiently check and verify the correctness of LLM-generated text and, if needed, correct it.”

Using the latest advances in natural language processing, LLM Explainer compares sentences at both the lexical and semantic level, detecting omissions, duplications and shifts in meaning between the original source documentation and the generated output. For each output sentence, users can view the relevant sentences from the source documentation, which lets them determine whether the LLM-generated text is correct.
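NEC has not published implementation details of LLM Explainer, but the general idea of aligning each generated sentence with its closest source sentence can be sketched in a few lines of Python. The snippet below is a minimal illustration only: it uses word-level Jaccard overlap as a crude stand-in for the lexical and semantic comparison described above, and the function names, threshold and example texts are assumptions for illustration, not part of NEC cotomi or LLM Explainer.

```python
import re

def sentences(text):
    """Split text into sentences on simple punctuation boundaries (illustrative only)."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def jaccard(a, b):
    """Word-level Jaccard overlap as a crude stand-in for lexical/semantic similarity."""
    wa = set(re.findall(r"\w+", a.lower()))
    wb = set(re.findall(r"\w+", b.lower()))
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def align_output_to_source(source_text, generated_text, threshold=0.4):
    """For each generated sentence, find the best-matching source sentence.
    Sentences whose best match falls below the threshold are flagged as
    potential discrepancies (possible hallucinations) for human review."""
    src = sentences(source_text)
    report = []
    for out in sentences(generated_text):
        best = max(src, key=lambda s: jaccard(out, s)) if src else ""
        score = jaccard(out, best) if best else 0.0
        report.append({
            "output_sentence": out,
            "closest_source_sentence": best,
            "similarity": round(score, 2),
            "flagged": score < threshold,
        })
    return report

# Illustrative usage with made-up texts
source = "The plant was inspected on 3 May. No structural faults were found."
generated = ("The plant was inspected on 3 May. "
             "The inspection certified the plant for operation until 2030.")
for row in align_output_to_source(source, generated):
    print(row)
```

In practice, a system of this kind would replace the lexical overlap with sentence-embedding similarity or entailment models so that paraphrases and contradictions are handled correctly; the sketch only shows the alignment-and-flagging pattern that lets a user check each output sentence against its supporting evidence.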

Dr. Lawrence adds, “Highlighting evidence and potential discrepancies in a simple format, for what is often complex information, allows users to make corrections and quickly adapt LLM-generated text to their needs.”

NEC plans to enhance the detection of LLM hallucinations in generative AI services by adding functions such as finding hallucinated entities and identifying contradictions – further accelerating the detection and correction of LLM discrepancies.

LLM Explainer will be integrated into NEC cotomi generative AI API services for Japan beginning at the end of October. A version of the service will also be available on-premises.

PRNewswire
