At Bose, the business problem is to develop novel AI-powered audio processing algorithms that run in real time on physical devices, for applications such as voice pickup and hearing augmentation, to create products that deliver transformative sound experiences.
Requirements
- Practical knowledge of applied audio ML (TensorFlow/PyTorch; TFLite/ONNX is a plus) and audio DSP (Python, MATLAB, and/or C/C++).
- Hands-on experience in at least one of the following research topics: audio source separation, speech enhancement, microphone array signal processing, TinyML, or generative audio modeling.
- Familiarity with methods for spatial sound synthesis and/or room acoustics simulation/analysis is a plus.
Responsibilities
- Most of your time will be devoted to prototyping, implementing and evaluating ML algorithms, curating and developing internal resources, and presenting your findings.
- You will integrate your novel solutions into existing systems and platforms to showcase new (proof of concept) solutions.
- You will be able to contribute to projects that will be shipped to Bose customers, apply for patents, and/or submit papers to top-tier AI and signal processing conferences (e.g., NeurIPS, ICASSP, Interspeech).
Other
- Pursuing or having recently completed a graduate-level degree in ML, Computer Science, Music Technology, or a related field.
- Strong communication skills: you will be presenting your work to a large interdisciplinary community.
- Bose is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, genetic information, national origin, age, disability, veteran status, or any other legally protected characteristics.