speech-recognition

2 posts

kakao

The Evolution of Kanana-o

Kakao has significantly advanced its integrated multimodal model, Kanana-o, enhancing its ability to process complex instructions across text, image, and audio inputs while enriching its emotional vocal expression. By developing specialized datasets and training techniques for prosody, the team has narrowed the performance gap between text and audio modalities. The result is a more natural, human-like AI capable of nuanced interaction and strong instruction following, particularly within the Korean linguistic context.

## Advancing Multimodal Instruction Following

* Addressed the "modality gap", in which multimodal models often show decreased reasoning performance when processing audio inputs compared to text.
* Constructed a structured, high-quality dataset of complex, multi-step instructions, such as summarizing a context and then translating the summary into a specific language or style (a hedged sketch of one such record appears after this summary).
* Leveraged the Speech-KoMT-Bench to evaluate performance, showing that Kanana-o significantly outperforms global competitors of similar scale on Korean-specific tasks.
* Focused on domain generalization so that the model's core intelligence remains stable regardless of whether the input is text, audio, or a combination of both.

## Image-Audio-Text Modality Alignment

* Developed integrated datasets to ensure that reasoning capabilities learned in text-image or text-audio contexts generalize to complex image-audio scenarios.
* Trained the model to handle tasks where users ask questions about visual information by voice, which requires aligning three different data types simultaneously.
* Prioritized preserving world knowledge during multimodal training so that adding new modalities does not degrade the model's factual accuracy.

## Enhancing Vocal Expressiveness and Prosody

* Focused on prosody (the rhythm, pitch, and stress of speech) to move beyond robotic, flat text-to-speech (TTS) output.
* Implemented a system of descriptive tokens and emotion tags (e.g., "warm voice," "excited tone") during training to give the model fine-grained control over its vocal persona.
* Incorporated natural human speech elements, such as realistic breathing patterns and context-dependent variations in speaking rate, to make interactions feel more intuitive and less synthetic.
* Refined the model's ability to interpret the user's emotional state from their voice and respond with a matching emotional intensity.

The evolution of Kanana-o highlights a shift from simply maximizing generic benchmarks to optimizing real-world user experience through multimodal alignment and emotional intelligence. Its success underscores the necessity of high-quality, structured instruction data and fine-grained control over output style to create truly conversational AI that feels natural to the user.
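To make the data-construction ideas above more concrete, here is a minimal sketch of what one multi-step, style-tagged training record could look like. It is illustrative only: the post does not publish Kanana-o's data schema, so every field name, the tag vocabulary, and the validation helper below are assumptions.

```python
# Hypothetical sketch only: field names, the tag vocabulary, and the validation
# logic are assumptions; Kakao has not published Kanana-o's actual data schema.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class MultimodalInstructionSample:
    """One training record pairing audio/image inputs with a multi-step instruction."""
    audio_path: str                       # spoken user query (e.g., Korean speech)
    image_path: Optional[str]             # optional visual context for image-audio tasks
    instruction: str                      # the multi-step task, in text form
    target_text: str                      # reference answer used for supervision
    style_tags: list = field(default_factory=list)  # prosody/persona tags for the spoken reply


# Assumed tag vocabulary, modeled on the descriptive tokens the post mentions
# ("warm voice", "excited tone"); the real tag set is not public.
ALLOWED_STYLE_TAGS = {"warm_voice", "excited_tone", "calm_tone", "slow_pace", "breathy"}


def validate(sample: MultimodalInstructionSample) -> bool:
    """Reject records whose style tags fall outside the controlled vocabulary."""
    return all(tag in ALLOWED_STYLE_TAGS for tag in sample.style_tags)


if __name__ == "__main__":
    # A two-step instruction (summarize, then translate), mirroring the example in the summary.
    sample = MultimodalInstructionSample(
        audio_path="clips/user_query_0001.wav",
        image_path=None,
        instruction="Summarize the spoken passage, then translate the summary into English.",
        target_text="...",  # reference output elided
        style_tags=["warm_voice", "calm_tone"],
    )
    print("valid record:", validate(sample))
```

Structuring records this way keeps the instruction steps and the desired vocal style explicit in the data itself, which is the kind of fine-grained supervision the post credits for Kanana-o's controllable prosody.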

google

Real-time speech-to-speech translation

Google DeepMind and Google Core ML have developed an end-to-end speech-to-speech translation (S2ST) model that enables real-time, voice-preserved communication with only a two-second delay. By replacing traditional cascaded pipelines with a streaming architecture trained on time-synchronized data, the system overcomes long-standing issues of high latency and accumulated errors. This advancement represents a significant shift toward natural, fluid cross-language dialogue that retains the original speaker's personality.

## Limitations of Cascaded S2ST

Traditional real-time translation systems typically rely on a cascaded chain of three distinct AI models: Automatic Speech Recognition (ASR), Automatic Speech Translation (AST), and Text-to-Speech (TTS). This approach suffers from several critical drawbacks:

* **High Latency:** Processing through three separate stages results in a 4–5 second delay, forcing users into unnatural, turn-based interactions.
* **Error Propagation:** Inaccuracies in the initial transcription or translation stage accumulate, often leading to garbled or incorrect final audio output.
* **Loss of Identity:** General-purpose TTS engines generate generic voices, stripping the communication of the original speaker's unique vocal characteristics.

## Time-Synced Data Acquisition Pipeline

To train an end-to-end model capable of low-latency output, the researchers created a scalable pipeline that transforms raw audio into a specialized time-synchronized dataset.

* **Alignment Multi-mapping:** Forced-alignment algorithms map source audio to source text, source text to translated text, and finally translated text to generated speech.
* **Voice Preservation:** A custom TTS engine generates the target-language audio while intentionally preserving the vocal characteristics of the original speaker.
* **Strict Validation:** Automated filters discard any segments where alignment fails or where the translated audio cannot meet the real-time delay requirements.
* **Data Augmentation:** The training set is further refined with techniques such as sample-rate reduction, denoising, and reverberation to ensure the model performs well in real-world environments.

## End-to-End Streaming Architecture

The model's architecture is designed for continuous audio streams, leveraging the AudioLM framework and standard transformer blocks to make real-time decisions.

* **Streaming Encoder:** Summarizes the source audio by focusing on the preceding 10-second window of input.
* **Streaming Decoder:** Predicts translated audio autoregressively, using compressed encoder states and its previous predictions to maintain flow (see the sketch after this summary).
* **RVQ Audio Tokens:** The system represents audio as a 2D grid of Residual Vector Quantization (RVQ) tokens, where the X-axis represents time and the Y-axis represents audio quality/fidelity.
* **SpectroStream Integration:** The SpectroStream codec manages these hierarchical audio representations, allowing the model to prioritize sequential output of audio segments for immediate playback.

This technology effectively bridges the gap between high-quality translation and real-time responsiveness. For developers and researchers in the field, the transition from modular cascaded systems to end-to-end streaming architectures, supported by rigorous time-aligned datasets, is the recommended path for achieving truly seamless human-to-human cross-language communication.
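As a rough illustration of the streaming decode order described above, here is a minimal Python sketch. It is not Google's implementation: the tensor shapes, frame rate, codebook size, and the stub encoder/decoder functions are all assumptions, chosen only to show why emitting complete RVQ columns in time order lets playback start before the utterance is finished.

```python
# Illustrative sketch only: shapes, frame rate, codebook size, and the stub
# model functions are assumptions, not Google's streaming S2ST implementation.
import numpy as np

FRAME_RATE_HZ = 25        # assumed number of RVQ time steps per second
NUM_RVQ_LEVELS = 8        # assumed residual depth (Y-axis: coarse -> fine fidelity)
WINDOW_SECONDS = 10       # the post states the encoder focuses on the last 10 s of input
SAMPLE_RATE = 16_000      # assumed input sample rate


def encode_window(source_audio: np.ndarray) -> np.ndarray:
    """Stand-in for the streaming encoder: summarize the trailing 10 s window."""
    window = source_audio[-WINDOW_SECONDS * SAMPLE_RATE:]
    return window.reshape(-1, 160).mean(axis=1)          # toy "compressed state"


def decode_step(encoder_state: np.ndarray, history: list) -> np.ndarray:
    """Stand-in for the autoregressive decoder: emit one RVQ column (all levels
    for a single time step). A real model would condition on encoder_state and
    the previously generated tokens in `history`."""
    rng = np.random.default_rng(len(history))
    return rng.integers(0, 1024, size=NUM_RVQ_LEVELS)    # assumed codebook size 1024


def stream_translate(source_audio: np.ndarray, seconds_to_emit: int) -> np.ndarray:
    """Generate translated-audio tokens column by column along the time axis.

    Because each column (all RVQ levels for one time step) is complete as soon
    as it is predicted, it can be handed to the audio codec for immediate
    playback, which is what keeps end-to-end delay low."""
    encoder_state = encode_window(source_audio)
    history: list = []
    for _ in range(seconds_to_emit * FRAME_RATE_HZ):
        history.append(decode_step(encoder_state, history))
    return np.stack(history, axis=1)                      # shape: (levels, time)


if __name__ == "__main__":
    fake_input = np.zeros(20 * SAMPLE_RATE, dtype=np.float32)   # 20 s of silent source audio
    tokens = stream_translate(fake_input, seconds_to_emit=2)
    print("RVQ token grid:", tokens.shape)                      # (8, 50)
```

The key design point the sketch mirrors is the ordering: tokens are produced time-major rather than level-major, so the first moments of translated speech can be synthesized while later audio is still being generated.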