Design, build, and scale machine learning and generative AI solutions on AWS that deliver secure, governed, and measurable business value for the hospitality industry
Requirements
- 5+ years in ML/AI engineering or applied data science
- Hands-on experience delivering at least 2 production-grade ML or GenAI solutions
- Strong expertise with AWS services (SageMaker, Bedrock, IAM, Lambda, Step Functions, S3)
- Proficiency in Python (PyTorch/TensorFlow, scikit-learn) and SQL
- Experience with ML pipelines, Docker, and CI/CD
- Knowledge of AI governance, security, and responsible AI practices
- Experience with LangChain, LlamaIndex, DSPy, MLflow, SageMaker Pipelines, and feature stores
- Familiarity with vector search technologies (OpenSearch, Pinecone, Weaviate, pgvector)
- Experience with Infrastructure-as-Code (Terraform, AWS CDK, CloudFormation)
Responsibilities
- Design, fine-tune, and deploy ML/LLM models using Amazon SageMaker, Amazon Bedrock, and OpenAI APIs
- Build and scale generative AI applications including chatbots, document analysis pipelines, conversational analytics, and retrieval-augmented generation (RAG) workflows
- Develop ML-driven transaction and entity matching solutions using similarity search, feature engineering, and vector databases
- Rapidly prototype proof-of-concepts (PoCs), iterate with stakeholders, and productionize with CI/CD and monitoring
- Automate training and inference pipelines, manage model versioning and governance, and track cost and performance metrics
- Ingest and transform large datasets using Python, SQL, Spark, and AWS Glue; design and manage vector indices
Other
- Work associated with this position is sedentary and performed indoors at a desk, either remotely or in an office setting
- Travel for this position is less than 10%
- Bachelor’s degree in Computer Science, Data Science, Engineering, or related field (or equivalent experience)
- Self-driven and resourceful; comfortable navigating ambiguity
- Strong communicator able to bridge business and technical needs