The company is looking to build, deploy, and operationalize AI/ML solutions by developing scalable AI/ML platforms and pipelines for production environments.
Requirements
- Proven experience in AWS and Databricks ecosystems.
- Strong proficiency in Python, PySpark, and related ML frameworks.
- Hands-on experience with data engineering, model management, and MLOps workflows.
- Strong understanding of cloud infrastructure, automation, and container orchestration.
- Demonstrated experience in AI/ML coding, prompt writing, and generative AI development.
- Knowledge of Terraform, Kubernetes, AWS X-Ray, and Azure Databricks.
- Experience with machine learning model deployment, monitoring, and optimization.
Responsibilities
- Design, build, and maintain scalable AI/ML platforms and pipelines for production environments.
- Develop and operationalize ML workflows, including data ingestion, transformation, training, and deployment.
- Collaborate with data scientists and engineers to enable efficient experimentation and model lifecycle management.
- Work with AWS (Lambda, SQS, EC2, EBS, S3) and Databricks to optimize the performance and reliability of AI systems.
- Implement infrastructure-as-code solutions using tools like Terraform and manage containerized workloads using Kubernetes.
- Develop, test, and maintain code in Python (including PySpark; a brief sketch follows this list) and other languages such as R, JavaScript, and PowerShell.
- Leverage generative AI tools and frameworks, including LangChain, for building advanced AI applications.
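As a brief illustration of the kind of PySpark pipeline step described above, the sketch below shows a minimal ingestion-and-transformation job. The S3 paths, column names, and aggregation are hypothetical placeholders, not a specification of the actual platform.

```python
# Minimal PySpark sketch of an ingestion/transformation step.
# Paths, columns, and the aggregation are illustrative assumptions only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("feature-prep-sketch").getOrCreate()

# Ingest raw events (hypothetical S3 location).
raw = spark.read.json("s3://example-bucket/raw/events/")

# Filter out incomplete records and derive a simple daily feature.
features = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("event_date", F.to_date("event_timestamp"))
       .groupBy("user_id", "event_date")
       .agg(F.count("*").alias("daily_event_count"))
)

# Persist as Parquet for downstream training jobs (hypothetical output path).
features.write.mode("overwrite").parquet("s3://example-bucket/features/daily_counts/")
```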
Other
- 10+ years of overall IT experience with at least 5 years focused on AI/ML engineering and platform development.