Mohammed Abdulrauf
I have interest and experience in several fields, most notably video editing, review writing, photography, gaming, and sports. I love technology and computers, building and upgrading them, and I keep trying to develop myself in these areas.
Imagine yourself playing a large RPG filled with interactive NPCs (non-playable characters). In every current RPG, you interact with NPCs through a series of pre-defined dialogue choices: you pick one of several text options on the screen and the NPC responds in a scripted way, a pattern the short sketch below illustrates. It feels forced and railroaded, and NVIDIA intends to change that. NVIDIA aims to enable voice-based interactions with NPCs through ACE (Avatar Cloud Engine) and NeMo SteerLM (a technique for steering the responses of large language models). This is a crucial step toward a future in which NPCs are backed by large language models, letting you hold in-depth conversations with them.
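To make the contrast concrete, here is a minimal sketch of that conventional approach: a hand-authored dialogue tree in which every player choice and every NPC reply is written in advance. The data and names are purely illustrative and do not come from any real game engine.

# A hand-authored dialogue tree: the player can only pick from fixed options,
# and each option maps to a canned NPC reply (illustrative example).
DIALOGUE_TREE = {
    "greeting": {
        "npc_line": "Welcome, traveler. What brings you to the village?",
        "player_options": {
            "Ask about the blacksmith": "He left for the capital last week.",
            "Ask about rumors": "They say wolves have been seen near the mill.",
            "Say goodbye": "Safe travels.",
        },
    }
}

def talk(node_key: str, choice: str) -> str:
    """Return the pre-written NPC response for a pre-written player choice."""
    return DIALOGUE_TREE[node_key]["player_options"][choice]

print(talk("greeting", "Ask about rumors"))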
The player interacts with an NPC simply by speaking to it in natural language. The voice input is processed by a speech-to-text engine and an LLM, which produces a natural-language answer, and the NPC's facial animation for that reply is generated in real time using Omniverse Audio2Face. NVIDIA's NeMo SteerLM, which the company unveiled at Gamescom, breathes new life into the part of ACE that turns natural speech input into responses. Depending on the personality attributes a game developer assigns to an NPC, NeMo SteerLM generates responses with varying degrees of creativity, humor, toxicity, and other traits.
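Below is a conceptual sketch of that loop as described above: player speech comes in, an LLM reply is steered by developer-assigned personality attributes, and synthesized audio plus facial animation come out. Every function here is a stand-in stub, not the actual NVIDIA ACE, Riva, NeMo SteerLM, or Audio2Face API.

from dataclasses import dataclass

@dataclass
class PersonalityAttributes:
    # SteerLM-style sliders a developer might assign to an NPC (illustrative values).
    creativity: float = 0.7
    humor: float = 0.4
    toxicity: float = 0.0

def transcribe(audio_in: bytes) -> str:
    # Stand-in for a speech-to-text service.
    return "What do you know about the old mine?"

def steered_reply(player_text: str, persona: PersonalityAttributes) -> str:
    # Stand-in for an LLM call conditioned on the NPC's personality attributes.
    tone = "playful" if persona.humor > 0.5 else "plain-spoken"
    return f"({tone}) The mine closed years ago, but strange lights still flicker there at night."

def synthesize_speech(reply_text: str) -> bytes:
    # Stand-in for text-to-speech.
    return reply_text.encode("utf-8")

def animate_face(speech_audio: bytes) -> str:
    # Stand-in for audio-driven facial animation (the role Audio2Face plays).
    return f"facial animation driven by {len(speech_audio)} bytes of audio"

def handle_player_speech(audio_in: bytes, persona: PersonalityAttributes) -> dict:
    # One pass through the loop: speech -> text -> steered reply -> voice -> animation.
    player_text = transcribe(audio_in)
    reply_text = steered_reply(player_text, persona)
    speech_audio = synthesize_speech(reply_text)
    animation = animate_face(speech_audio)
    return {"reply": reply_text, "audio": speech_audio, "animation": animation}

print(handle_player_speech(b"<raw mic audio>", PersonalityAttributes(humor=0.8)))

The key design point the sketch tries to capture is that the NPC's persona lives in a small set of attributes the developer controls, while the actual wording of each reply is generated on the fly rather than written in advance.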