NVIDIA is looking to solve the challenge of achieving high throughput and energy efficiency for fast, scalable storage access by GPU threads in future post-Moore systems, which requires co-optimizing the architecture, runtime system, operating system, and compiler.
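For context on the problem space, the minimal sketch below illustrates one existing way file data can be moved directly into GPU memory, using NVIDIA's GPUDirect Storage (cuFile) API. It is an illustrative assumption about the domain, not part of this posting's requirements; the file path and transfer size are placeholders and error handling is abbreviated.

```cpp
// Sketch: read a file directly into GPU memory via the cuFile (GPUDirect
// Storage) API. Illustrative only; path, size, and error handling are stubs.
#include <cuda_runtime.h>
#include <cufile.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    const size_t kBytes = 1 << 20;            // placeholder transfer size (1 MiB)
    const char  *kPath  = "/tmp/sample.dat";  // placeholder input file

    cuFileDriverOpen();                       // initialize the cuFile driver

    // O_DIRECT is typically required for the true GPUDirect Storage path.
    int fd = open(kPath, O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    // Register the POSIX fd with cuFile so DMA can target it.
    CUfileDescr_t descr;
    memset(&descr, 0, sizeof(descr));
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t fh;
    cuFileHandleRegister(&fh, &descr);

    // Allocate device memory and register it as a cuFile buffer.
    void *devBuf = nullptr;
    cudaMalloc(&devBuf, kBytes);
    cuFileBufRegister(devBuf, kBytes, 0);

    // Read file bytes straight into GPU memory, bypassing a CPU bounce buffer.
    ssize_t n = cuFileRead(fh, devBuf, kBytes, /*file_offset=*/0, /*devPtr_offset=*/0);
    printf("cuFileRead returned %zd bytes\n", n);

    cuFileBufDeregister(devBuf);
    cudaFree(devBuf);
    cuFileHandleDeregister(fh);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```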
Requirements
- Depth in I/O system software
- Knowledge of I/O system architectures
- Deep knowledge of GPU architecture
- Proficiency in CUDA programming
- Experience programming large-scale clusters
- Experience with profiling and system performance analysis tools
- Experience with experimental computer architecture research, software infrastructure development, and evaluation
Responsibilities
- Develop novel architectures and system software implementations to enable scalable multi-GPU platforms.
- Understand and analyze the interplay between operating systems, CPU and GPU architectures, and efficient algorithm designs.
- Publish original research and speak at conferences and events.
Other
- A Ph.D. in CE/CS/EE, or equivalent experience, with a strong background in the areas above.
- 5+ years of research work experience in computer architecture, operating systems, system administration, compilers, and/or HPC.
- A strong publication, patent, presentation, and research collaboration history is a huge advantage.
- Demonstrated expertise in at least one of the areas above, with the ability to become the go-to resource within a team of people from differing backgrounds.
- Strong interpersonal skills are needed, and being a creative and dynamic presenter is a huge advantage.