iFlytek Unveils AI Glasses: Supporting Multimodal Simultaneous Interpretation

At the 2026 Mobile World Congress (MWC 2026), iFlytek showcased its AI Glasses for the first time. Designed specifically for face-to-face communication, the glasses display translated subtitles on the lenses in real time as the other person speaks, while the translation is simultaneously played back through a built-in speaker, enabling natural, what-you-see-is-what-you-hear communication.

These glasses combine multimodal simultaneous interpretation with an ultra-lightweight design. They feature multimodal noise reduction, all-around translation, and multimodal recording functions. With the dual support of voice translation and visual translation, they can handle cross-language communication scenarios such as international conferences, business negotiations, and overseas exhibitions.

To address noisy environments such as trade shows and parties, iFlytek's AI glasses pioneer a multimodal noise reduction solution based on lip movement recognition. A camera captures the speaker's lip movements while bone conduction microphones capture the wearer's voice; by fusing the audio and visual information, the glasses can accurately locate the target speaker against a noisy background of multiple overlapping conversations, improving the accuracy of speech recognition and translation by more than 50%.
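The core idea of audio-visual fusion can be illustrated with a minimal sketch: per-speaker confidence scores from the audio channel are combined with lip-movement evidence from the camera, so visual cues compensate when background chatter corrupts the audio. The function name, scores, and linear weighting below are illustrative assumptions, not iFlytek's actual algorithm.

```python
# Hypothetical sketch of audio-visual fusion for locating the target
# speaker in a noisy scene. Scores and weights are illustrative only.

def fuse_scores(audio_scores, lip_scores, audio_weight=0.6):
    """Combine per-speaker audio and lip-movement confidence scores.

    audio_scores / lip_scores: dicts mapping speaker id -> score in [0, 1].
    Returns the speaker id with the highest fused score.
    """
    fused = {}
    for spk in audio_scores:
        # Linear fusion: lip-movement evidence compensates when the
        # audio channel is corrupted by background conversations.
        fused[spk] = (audio_weight * audio_scores[spk]
                      + (1 - audio_weight) * lip_scores.get(spk, 0.0))
    return max(fused, key=fused.get)

# Example: speaker "B" has weak audio but strong lip-sync evidence,
# so the fused score identifies "B" as the target speaker.
audio = {"A": 0.55, "B": 0.50}
lips = {"A": 0.10, "B": 0.95}
print(fuse_scores(audio, lips))  # prints "B"
```

Real systems would use learned audio-visual embeddings rather than a fixed linear weight, but the principle is the same: the visual channel disambiguates who is speaking when the audio alone cannot.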

In terms of industrial design, the entire product weighs only 40 grams.

Currently, online pre-orders for the new iFlytek AI glasses are open, and domestic e-commerce platforms will begin accepting pre-orders at 10:10 AM Beijing time on March 4th.
