PENN Entertainment is looking to scale and evolve its machine learning platform to accelerate ML innovation across the company.
Requirements
- Proficiency in Python and SQL.
- Deep familiarity with cloud platforms such as GCP, AWS, or Azure.
- Hands-on experience with ML model deployment, CI/CD pipelines, containerization (Docker, Kubernetes), and orchestration tools (Dagster, Airflow, Kubeflow, or similar).
- Experience with model packaging and serving technologies such as MLflow, Seldon, Vertex AI, or AWS SageMaker.
- Exposure to large language models (LLMs) and their deployment considerations.
- Familiarity with monitoring, observability, and alerting tools for ML systems.
Responsibilities
- Design, build, and maintain core components of the ML platform, including model serving infrastructure, feature stores, and monitoring systems.
- Develop and maintain CI/CD pipelines for ML workflows to support reproducibility, scalability, and continuous delivery of models.
- Collaborate with ML engineers and data scientists to support model experimentation, packaging, and deployment in both batch and real-time contexts.
- Contribute to the development of best practices for MLOps, including versioning, lineage tracking, observability, and governance.
- Write clean, testable, and well-documented code and contribute to team knowledge through documentation and design reviews.
- Partner with data engineering and platform teams to ensure seamless integration with data pipelines and compute environments.
Other
- Strong communication skills and a desire to work cross-functionally with data scientists, ML engineers, and platform teams.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
- 5+ years of experience in machine learning engineering, data engineering, or backend software engineering, with demonstrated experience building ML systems in production.