The company is looking to solve the complex system-level challenges posed by the growing demands of future AI/ML workloads.
Requirements
- Strong background in compiler design and optimization techniques.
- Experience developing and optimizing software for high-performance computing systems.
- Experience with LLVM/MLIR (preferred).
- Familiarity with PyTorch, TensorFlow, or JAX.
- Familiarity with hardware architectures such as CPUs, GPUs, TPUs, and NPUs.
- Strong analytical and problem-solving skills.
- Experience in silicon development.
Responsibilities
- Design and implement ML compilers for high-performance deep learning applications.
- Optimize compilers for efficient execution of deep learning models on various hardware platforms.
- Design a staged lowering infrastructure that adapts to rapidly evolving workload requirements.
- Design algorithms that optimize data locality to minimize energy consumption.
- Work closely with hardware architects and developers to integrate new ML techniques and algorithms into the compiler.
- Collaborate with cross-functional teams to define and deliver ML compiler features and improvements.
- Troubleshoot and debug compiler issues, and provide technical support to customers.
Other
- BS in Computer/Electrical Engineering or Computer Science with 10+ years of experience in silicon development, MS in Computer/Electrical Engineering or Computer Science with 8+ years of relevant experience, or PhD with 5+ years of relevant experience preferred.
- Excellent communication and interpersonal skills.
- Ability to work independently and as part of a team.
- You're inclusive, adapting your style to the situation and diverse global norms of our people.
- An avid learner, you approach challenges with curiosity and resilience, seeking data to help build understanding.
- You're collaborative, building relationships, humbly offering support and openly welcoming approaches.
- Innovative and creative, you proactively explore new ideas and adapt quickly to change.