Radar-Assisted Microphone Arrays for Speaker Localization and Speech Separation
background
    complex indoor speech environments
        classrooms
        meeting rooms
        multiple active speakers
target tasks
    speaker localization
    speech enhancement
    speech separation
audio-based processing
    classical localization methods
        GCC-PHAT
        SRP-PHAT
        MUSIC
    deep learning approaches
        learned spatial features
        time-frequency masking
    audio limitations
        noise
        reverberation
        overlapping speech
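The GCC-PHAT node above can be made concrete with a minimal sketch of time-delay estimation between two microphones. The function name `gcc_phat` and the test signal are illustrative, not taken from any system in this map; the PHAT step simply whitens the cross-spectrum so that only phase (i.e., delay) information drives the correlation peak, which is what gives the method its robustness to moderate reverberation:

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the delay of `sig` relative to `ref` via GCC-PHAT, in seconds."""
    n = len(sig) + len(ref)                 # zero-pad to avoid circular wrap-around
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15                  # PHAT weighting: keep phase, drop magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # Rearrange so negative lags precede positive lags, then pick the peak.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# Toy example: the same noise signal delayed by 5 samples at 16 kHz.
fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
delayed = np.concatenate((np.zeros(5), x))[:4096]
tau = gcc_phat(delayed, x, fs)              # ≈ 5 / 16000 s
```

In a real array, this pairwise delay feeds a geometric intersection (or an SRP-PHAT grid search) to produce a source position.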
microphone-array structures
    fixed arrays
        known geometry
        controlled beamforming
    distributed arrays
        wider room coverage
        multiple recording positions
    ad-hoc arrays
        phones / laptops / tablets
        flexible deployment
    array challenges
        unknown microphone positions
        synchronization errors
        device mismatch
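The "known geometry / controlled beamforming" branch can be sketched with a basic frequency-domain delay-and-sum beamformer. This is a generic textbook formulation, not a method from the systems listed below; `delay_and_sum` is a hypothetical helper, and it assumes exactly what ad-hoc arrays lack: known microphone coordinates and sample-synchronous channels.

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def delay_and_sum(frames, mic_pos, src_pos, fs):
    """Steer a fixed array with known geometry toward an assumed source position.

    frames : (num_mics, num_samples) time-synchronous microphone signals
    mic_pos: (num_mics, 3) microphone coordinates in metres
    src_pos: (3,) assumed source position in metres
    """
    dists = np.linalg.norm(mic_pos - src_pos, axis=1)
    delays = (dists - dists.min()) / C          # relative propagation delays, s
    n = frames.shape[1]
    freqs = np.fft.rfftfreq(n, d=1 / fs)        # bin frequencies in Hz
    spec = np.fft.rfft(frames, axis=1)
    # Advance each channel by its delay (fractional shifts via phase), then average.
    steered = spec * np.exp(2j * np.pi * freqs * delays[:, None])
    return np.fft.irfft(steered.mean(axis=0), n=n)
```

Signals from the steered direction add coherently while interference and diffuse noise average down; the array-challenges branch (unknown positions, sync errors, device mismatch) is precisely what breaks the `mic_pos` and shared-clock assumptions here.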
radar-based speech sensing
    mmWave radar
        phase modulation
        micro-vibration
        range cues
    Doppler radar
        vocal signal acquisition
        non-contact sensing
    radar-only limitations
        missing high frequencies
        low sampling rate
        phase noise
        range limits
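The "phase modulation / micro-vibration" idea reduces to a simple relation: a surface displacement d(t) changes the round-trip path by 2·d(t), modulating the echo phase by 4π·d(t)/λ, so unwrapped phase maps linearly back to displacement. The sketch below assumes slow-time IQ samples from a single range bin of a generic mmWave radar (λ ≈ 3.9 mm at 77 GHz); `phase_to_vibration` is an illustrative name, not an API from the systems named in this map:

```python
import numpy as np

def phase_to_vibration(iq, wavelength=0.0039):
    """Recover a vibration waveform (metres) from one range bin's slow-time IQ.

    Displacement d(t) modulates the echo phase by 4*pi*d(t)/wavelength,
    so the unwrapped, mean-removed phase scales linearly to displacement.
    """
    phase = np.unwrap(np.angle(iq))
    phase -= phase.mean()                  # remove the static-range phase offset
    return phase * wavelength / (4 * np.pi)
```

The radar-only limitations above show up directly here: the slow-time (chirp) rate caps the recoverable audio bandwidth, and phase noise sets the floor on detectable micro-vibration.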
radar-microphone fusion
    microphone contribution
        high-quality speech content
        acoustic detail
    radar contribution
        spatial cues
        motion cues
        target guidance
    existing systems
        RadioSES
        mmFusion
        mmMUSE
        Wavoice
    possible fusion functions
        guide localization
        guide beamforming
        target-speaker selection
        support separation
tensions and gaps
    audio-only systems
        strong foundation
        vulnerable acoustic cues
    ad-hoc arrays
        flexible deployment
        uncertain geometry
    radar-only sensing
        robust to acoustic noise
        limited speech quality
    current fusion systems
        mostly single microphone
        limited distributed-array integration
research gap
    radar-assisted distributed microphone arrays
    radar-assisted ad-hoc microphone arrays