Net-Interactive Documents - Generative Pre-trained Transformer (NID-GPT)

URLs: https://nid-library.com/gpt and https://www.austria-forum.org

NOTE: Systems of this kind do not give guaranteed correct answers; see the paper on AI Hallucinations.

We are introducing a new NID module that empowers users to leverage the capabilities of generative AI. The feature uses fully local Large Language Models (LLMs) and vector embedding stores to generate answers to users' queries based on the information available in the NID library and Austria-Forum. This creates a cutting-edge yet secure environment in which the semantic meaning and context of text are encoded, allowing the LLMs to understand context and judge similarity when answering query prompts.
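
What follows is a minimal sketch of the retrieval step behind such a setup. The sentence-transformers library, the all-MiniLM-L6-v2 embedding model, and the document snippets are all assumptions made for illustration; NID-GPT's actual embedding scheme, vector store, and LLM are still being evaluated (see below).

    # Minimal local retrieval sketch: embed document chunks, then find the
    # chunks most similar to a question and build a prompt for a local LLM.
    import numpy as np
    from sentence_transformers import SentenceTransformer  # runs fully locally

    documents = [  # invented stand-ins for NID / Austria-Forum document chunks
        "NID (Net-Interactive Documents) is a digital library system.",
        "Austria-Forum is an online knowledge repository about Austria.",
        "Retrieval-Augmented Generation combines document search with an LLM.",
    ]
    embedder = SentenceTransformer("all-MiniLM-L6-v2")       # assumed model
    doc_vectors = embedder.encode(documents, normalize_embeddings=True)

    def retrieve(question: str, k: int = 2) -> list[str]:
        """Return the k document chunks most similar to the question."""
        q_vec = embedder.encode([question], normalize_embeddings=True)[0]
        scores = doc_vectors @ q_vec        # cosine similarity (normalized vectors)
        return [documents[i] for i in np.argsort(scores)[::-1][:k]]

    question = "What is the NID Library System?"
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    print(prompt)  # this prompt would then be passed to the locally hosted LLM

Because both the embedding model and the LLM run on local infrastructure, no document text or query has to leave the NID servers, which is what makes the environment secure as well as cutting-edge.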

At the moment we are testing various embedding schemes and LLMs for optimal results. We are also exploring how well the system performs on standard CPU servers and what additional value a GPU-based NID hosting infrastructure would provide.
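
As a rough illustration of that comparison, the sketch below times a batch of embeddings on CPU and, when a GPU is available, on GPU; the model name, batch size, and stand-in texts are assumptions, not the actual benchmark setup.

    # Rough embedding-throughput comparison: CPU vs. GPU (if present).
    import time
    import torch
    from sentence_transformers import SentenceTransformer

    texts = ["A sample paragraph from a NID document."] * 1000  # stand-in corpus

    devices = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])
    for device in devices:
        model = SentenceTransformer("all-MiniLM-L6-v2", device=device)
        start = time.perf_counter()
        model.encode(texts, batch_size=64, show_progress_bar=False)
        elapsed = time.perf_counter() - start
        print(f"{device}: {elapsed:.2f} s for {len(texts)} chunks")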

At the moment, only a limited set of document sources from NID is being added to the NID-GPT vector store; once the module matures, the functionality will be extended to the larger dataset available in the NID and Austria-Forum repositories.

We believe that a well-curated knowledge base for a GPT system or a Retrieval-Augmented Generation (RAG) application can significantly enhance information access and its practical use. Adding an LLM-powered module will not only facilitate information access in this cutting-edge way but will also extend the NID system's capabilities in areas such as keyword detection, automated linking of contents, and translations.
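
As a hedged illustration of the automated-linking idea, the sketch below reuses document embeddings to suggest related entries; the titles, texts, and similarity threshold are invented for the example and do not reflect NID's actual linking mechanism.

    # Suggest link candidates: for each entry, list other entries whose
    # embeddings are sufficiently similar (illustrative threshold of 0.3).
    import numpy as np
    from sentence_transformers import SentenceTransformer

    titles = ["NID Library System", "Austria-Forum", "AI Hallucinations"]
    texts = [
        "NID is a digital library system for net-interactive documents.",
        "Austria-Forum is an online knowledge repository about Austria.",
        "Large language models can produce plausible but incorrect answers.",
    ]
    vecs = SentenceTransformer("all-MiniLM-L6-v2").encode(
        texts, normalize_embeddings=True
    )
    similarity = vecs @ vecs.T  # pairwise cosine similarities

    for i, title in enumerate(titles):
        ranked = np.argsort(similarity[i])[::-1]
        links = [titles[j] for j in ranked if j != i and similarity[i, j] > 0.3]
        print(f"{title} -> suggested links: {links}")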

Ask questions in the NID-GPT interface / "Fragen" (questions) button on the right side of Austria-Forum

1. To ask a question, type it into the search bar, for example: "What is the NID Library System?"
2. Hit Enter on your keyboard or click "Ask!".
3. Wait (please be patient!) while the LLM consumes the prompt and prepares the answer (a rough sketch of this step appears after this list). Currently, the processing is offloaded to the local CPU and a modest onboard GPU. In the future, adding a more powerful GPU cluster will enhance the system's response time and capabilities.
4. Once done, the system prints the answer and the 4 document sources it used as context. You can then ask another question without re-running the script; just wait for the prompt again.
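
The sketch below illustrates step 3 under stated assumptions: a small instruction-tuned model from the Hugging Face transformers library running on CPU stands in for NID-GPT's actual (still undecided) LLM, and the prompt is the kind built in the retrieval sketch above.

    # Generation step sketch: feed the retrieved context to a local LLM on CPU.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # assumed small local model
        device=-1,                                   # -1 = CPU only
    )

    prompt = (
        "Answer using only this context:\n"
        "NID (Net-Interactive Documents) is a digital library system.\n\n"
        "Question: What is the NID Library System?"
    )
    result = generator(prompt, max_new_tokens=128, do_sample=False)
    print(result[0]["generated_text"])  # generated answer (prompt text included)

On CPU-only hardware this generation step dominates the waiting time described above, which is why a more powerful GPU cluster is expected to improve response times.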

Warning: This module is under development and undergoing continuous changes; the service may go down without prior notice.