Symbolica is working to bridge the gap between theoretical mathematics and cutting-edge technology, creating symbolic reasoning models that think like humans: precise, logical, and interpretable. This role involves designing, building, and optimizing the infrastructure and tools that enable our research and development efforts.
Requirements
- Proficiency with cloud platforms (e.g., AWS, including Lambda) and with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Proven experience in building and maintaining CI/CD pipelines tailored for machine learning workflows.
- Experience designing and managing GPU-optimized Kubernetes clusters is a strong plus.
Responsibilities
- Leading the implementation and management of infrastructure for large-scale machine learning workflows, including training systems and model deployment.
- Developing tools and frameworks to support the global team’s experiments and ensure reproducibility and scalability.
- Optimizing compute resources and ensuring efficient use of cloud and on-premises hardware for training and inference.
- Building and maintaining CI/CD pipelines tailored for machine learning development.
- Collaborating closely with machine learning scientists, researchers, and engineers to identify and address infrastructure needs.
Other
- 5+ years of experience in software engineering or infrastructure roles, with at least 2 years in machine learning infrastructure or MLOps.
- Exceptional problem-solving skills, with the ability to design and implement robust, scalable systems.
- Competitive compensation, including an attractive equity package, with salary and equity levels aligned to your experience and expertise.
- Onsite role based in our San Francisco office (345 California St).