
Neosapience, a voice-synthesis company, has raised $11.5 million in a new funding round, it exclusively announced. The company plans to step up investment in emotionally intelligent, conversational AI voice services ahead of a planned IPO. With the new capital, Neosapience aims to expand its localization efforts, adding languages beyond its current English, Chinese, Japanese, Korean, Spanish, and Vietnamese offerings.
Neosapience is preparing for a public listing on the Korean stock market by the end of next year. The company’s flagship product, Typecast, stands out for its ability to detect and convey human emotion from a script with minimal guidance or pre-recording.
Headquartered in Seoul, South Korea, and San Francisco, Neosapience says Typecast can assess any script and automatically generate the appropriate emotional delivery for each sentence without human intervention, producing output that closely resembles a human voice. The technology has applications across industries such as entertainment and marketing, with a particular emphasis on serving digital creators as the creator economy grows.
The deployment of AI tools in the entertainment sector remains a subject of debate among industry professionals at every level, with persistent concerns about its impact on jobs and the creative process. AI-powered voice technology raises particular apprehension among voice actors. Despite these reservations, adoption of AI-based voice and audio solutions is on the rise. Ukraine-based Respeecher has deployed its AI voice services in prominent films such as *The Brutalist* and TV shows including *The Mandalorian*, while well-known figures such as Michael Caine and Liza Minnelli have lent their voices to ElevenLabs’ “Iconic Voice Marketplace.”
Taesu Kim, co-founder and CEO of Neosapience, told Deadline: “Audiences are diverse globally; however, content creation is often limited by language barriers and budget constraints. We aim to address this issue.” He explained that Typecast analyzes scripts line by line to interpret tone, emotion, and context — a capability the company calls ‘smart emotion’ — allowing it to automatically express emotions based on context so that performances align with the narrative.
Kim added: “With just a short segment of human audio input, we can leverage AI technology to develop voices that go beyond the original recording’s limitations, delivering nuanced performances surpassing what was initially captured. For instance, we can create voices that evoke more emotion than what the speaker could naturally produce.”