These earbuds give everything but your voice the ‘talk to the hand’ treatment

We’ve had active noise canceling in wireless earbuds for a while, but that mostly benefits the person wearing the headphones, drowning out the outside world. If you’ve been on the other end of a phone call with someone wearing ’em, you’ll notice that the microphones still pick up plenty besides the voice you’re trying to hear. That’s what the open source ClearBuds project is trying to solve, by adding a layer of deep learning and audio processing to the mix.

I can ramble on for a few thousand words here (and I still might), but if a picture is worth 1,000 words, then a 23-second 30 FPS video is worth almost 700,000 words, and I just can’t compete with that. Check it out:

The ClearBuds project is the result of a research initiative by three University of Washington researchers who were roommates during the pandemic. The system combines a new wireless microphone design with real-time machine-learning processing that can run on a smartphone.

Most earbuds send audio to the phone from only one of the buds. The ClearBuds system sends two streams, which can be analyzed and processed quickly enough for live use, such as video or phone calls. The team’s algorithm suppresses non-voice sounds, then enhances the speaker’s voice.

“ClearBuds differentiate themselves from other wireless earbuds in two key ways,” said co-lead author Maruchi Kim, a doctoral student in the Paul G. Allen School of Computer Science & Engineering. “First, ClearBuds use a dual microphone array. Microphones in each earbud create two synchronized audio streams that provide information and allow us to spatially separate sounds coming from different directions with higher resolution. Second, the lightweight neural network further enhances the speaker’s voice.”
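To make the “spatially separate sounds” idea concrete, here’s a minimal delay-and-sum beamforming sketch for two synchronized streams. This is a generic illustration, not the ClearBuds implementation; the function name, the sample rate and the random stand-in signals are all assumptions for the example:

```python
import numpy as np

def delay_and_sum(left: np.ndarray, right: np.ndarray,
                  delay_samples: int) -> np.ndarray:
    """Steer a two-mic array toward a direction by delaying one
    channel so the target's wavefront lines up in both, then summing.

    Sound from the steered direction adds coherently (louder);
    off-axis sound adds out of phase (quieter).
    """
    shifted = np.roll(right, delay_samples)
    shifted[:delay_samples] = 0.0  # discard samples that wrapped around
    return 0.5 * (left + shifted)

# For a wearer's mouth roughly equidistant from both earbuds,
# the expected inter-mic delay is ~0 samples, so the plain sum
# already favors the wearer's voice over off-axis noise.
fs = 16_000                    # sample rate in Hz (assumed)
left = np.random.randn(fs)     # stand-ins for real mic captures
right = np.random.randn(fs)
voice_focused = delay_and_sum(left, right, delay_samples=0)
```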

“Because the speaker’s voice is close by and approximately equidistant from the two earbuds, the neural network can be trained to focus on just their speech and eliminate background sounds, including other voices,” said co-lead author Ishan Chatterjee. “This method is quite similar to how your own ears work. They use the time difference between sounds coming to your left and right ears to determine from which direction a sound came.”
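As a rough analogue of that ear mechanism, the time difference between two channels can be estimated with cross-correlation. The sketch below is the standard textbook approach, assuming two synchronized NumPy arrays and a known mic spacing; `estimate_tdoa` and `lag_to_angle` are illustrative names, not the project’s code:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def estimate_tdoa(left: np.ndarray, right: np.ndarray, fs: int) -> float:
    """Estimate the time-difference-of-arrival (seconds) between
    two synchronized mic signals via full cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    # Lag (in samples) at which the two channels align best.
    lag = int(np.argmax(corr)) - (len(right) - 1)
    return lag / fs

def lag_to_angle(tdoa: float, mic_distance: float) -> float:
    """Convert a TDOA to a coarse arrival angle (degrees) for a
    two-mic array spaced `mic_distance` meters apart."""
    # Clamp to the physically possible range before arcsin.
    s = np.clip(tdoa * SPEED_OF_SOUND / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

# A sound equidistant from both earbuds, like the wearer's mouth,
# yields a TDOA near zero, i.e. an arrival angle near 0 degrees.
```

A network like the one the researchers describe can exploit exactly this cue: sources with near-zero inter-channel delay (the wearer’s mouth) get kept, everything else gets suppressed.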

Check out the full project page, and cross your fingers that this tech finds its way into some headphones soon, because, frankly, I can’t wait to not hear barking dogs, zooming cars and my niece singing “We Don’t Talk About Bruno-no-no” in the background. Okay, let’s be honest, I’ll miss the singing. Everything else can go, though.
