The partner company is looking for an engineer to design, build, and maintain scalable machine learning systems, deploy models to production, and optimize performance across operations.
Requirements
- 3–5 years of experience in AI/ML engineering, data science, or software engineering with a machine learning focus.
- Strong understanding of the machine learning lifecycle, including training, deployment, and monitoring.
- Advanced programming skills in Python and experience with ML libraries such as scikit-learn, TensorFlow, or PyTorch.
- Proficiency with MLOps tools including Docker, Kubernetes, MLflow, and CI/CD pipelines.
- Experience with data engineering tools and pipelines such as Airflow, Spark, and Kafka.
- Familiarity with cloud platforms like AWS, GCP, or Azure.
Responsibilities
- Deploy and monitor machine learning models in production using tools like Docker, Kubernetes, and MLflow to ensure scalability and reliability.
- Build and maintain data pipelines using Airflow, Spark, or Kafka to support model training and inference.
- Integrate ML models into business applications, collaborating with software engineers to operationalize solutions.
- Monitor model performance and detect data drift, implementing alerting and retraining pipelines.
- Clean, preprocess, and ensure high-quality data for machine learning applications.
- Collaborate with cross-functional teams to translate business problems into technical solutions.
- Optimize ML workflows to improve performance, scalability, and efficiency.
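To illustrate the drift-monitoring responsibility above: one common heuristic is the Population Stability Index (PSI), which compares a live feature distribution against a training-time baseline. The sketch below is a minimal plain-Python version; the bucket counts and the 0.2 alert threshold are illustrative assumptions (thresholds should be tuned per use case), not part of the role's actual stack.

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions.

    expected/actual: raw counts per bucket for the reference window
    and the live window. Returns a non-negative score; larger values
    indicate more drift between the two distributions.
    """
    e_total = sum(expected)
    a_total = sum(actual)
    score = 0.0
    for e, a in zip(expected, actual):
        e_pct = max(e / e_total, eps)  # clamp to avoid log(0)
        a_pct = max(a / a_total, eps)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

# Hypothetical bucket counts for a single feature.
baseline = [100, 200, 300, 250, 150]

# Identical distributions produce a PSI of 0 (no drift).
print(psi(baseline, baseline))  # 0.0

# A shifted live distribution produces a larger score.
shifted = [300, 250, 200, 150, 100]
ALERT_THRESHOLD = 0.2  # common rule of thumb; tune per use case
print(psi(baseline, shifted) > ALERT_THRESHOLD)  # True
```

In a production pipeline, a check like this would typically run on a schedule (e.g. as an Airflow task), with scores above the threshold triggering an alert or a retraining job.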
Other
- Success in this role depends on strong collaboration and communication skills, working with both technical and non-technical stakeholders to translate business needs into scalable ML solutions.
- Ability to work remotely aligned with Eastern Time Zone hours.
- Flexible remote work in a supportive team environment.
- Professional development and career growth opportunities.