The university is looking to enhance and automate operations across multiple departments by transforming manual processes into AI-driven solutions, aiming to improve efficiency, accuracy, and service quality while reducing operational costs.
Requirements
- Strong proficiency in developing and deploying machine learning models and AI systems in production environments, with deep knowledge of contemporary AI frameworks, tools, and best practices.
- Excellent software development skills, including proficiency in Python and TensorFlow/PyTorch, plus experience with containerized deployments and MLOps practices.
- Extensive experience building end-to-end data pipelines with orchestration tools such as Apache Airflow and Prefect, cloud platforms (AWS, Azure, GCP), data warehousing solutions (Snowflake, Redshift), processing frameworks (Spark, Kafka), and container technologies (Docker, Kubernetes), along with proficiency in Python, SQL, and version control/CI/CD practices (see the pipeline sketch after this list).
- Demonstrated experience in the full ML lifecycle including data preparation, feature engineering, model training, validation, deployment, and monitoring in production.
- Advanced knowledge of NLP techniques and large language models (LLMs), including prompt engineering, context management, and implementation strategies for enterprise applications.
- Experience deploying and scaling AI systems in cloud environments (AWS, Azure, or GCP), with knowledge of cloud-native AI services.
- Ability to integrate AI solutions with existing enterprise systems, APIs, databases, and authentication services to create cohesive user experiences.
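To make the pipeline requirement concrete, here is a minimal sketch of the kind of orchestrated Airflow workflow involved; the DAG id, task split, and sample records are hypothetical placeholders, not a prescribed implementation.

```python
# A minimal sketch of an orchestrated extract/transform/load pipeline.
# The DAG id, tasks, and sample records are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull raw records from a hypothetical source system.
    return [{"ticket_id": 1, "text": "Password reset request"}]


def transform(**context):
    # Clean and reshape whatever the extract task returned.
    records = context["ti"].xcom_pull(task_ids="extract")
    return [{**r, "text": r["text"].lower()} for r in records]


def load(**context):
    # Placeholder: write prepared rows to a warehouse table.
    rows = context["ti"].xcom_pull(task_ids="transform")
    print(f"Loading {len(rows)} rows")


with DAG(
    dag_id="service_desk_etl",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)
    extract_t >> transform_t >> load_t
```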
Responsibilities
- Design, develop, and implement AI solutions to automate and enhance university operations, including service desk automation, administrative task processing, and QA testing systems.
- Create robust, scalable architectures that integrate with existing university systems and accommodate future growth.
- Design and implement end-to-end data pipelines that efficiently collect, process, and prepare data for AI systems.
- Build robust ETL processes using tools like Apache Airflow, cloud services, and data warehousing solutions to ensure reliable data flow between source systems and AI applications.
- Develop and fine-tune machine learning models for specific university use cases, including customizing large language models through prompt engineering, transfer learning, and domain adaptation (see the prompt-assembly sketch after this list).
- Integrate AI systems with existing university infrastructure, including identity management, knowledge bases, ticketing systems, and communication platforms.
- Monitor AI system and data pipeline performance, detect and address drift or degradation, optimize resource utilization, and continuously improve model accuracy and efficiency based on real-world usage patterns and feedback (a minimal drift-check sketch follows this list).
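As an illustration of the prompt engineering and context management work above, here is a minimal prompt-assembly sketch; the template, character budget, and knowledge-base snippets are all hypothetical.

```python
# A minimal sketch of prompt assembly under a context budget, one small
# piece of customizing an LLM for a service desk use case. The template,
# character budget, and snippets below are hypothetical.
def build_prompt(question: str, snippets: list[str], max_chars: int = 4000) -> str:
    """Pack as many retrieved snippets as fit under a simple character budget."""
    header = (
        "You are a service desk assistant for the university.\n"
        "Answer using only the context below.\n\n"
    )
    chosen: list[str] = []
    used = len(header) + len(question)
    for snippet in snippets:
        if used + len(snippet) > max_chars:
            break  # stop before exceeding the context budget
        chosen.append(snippet)
        used += len(snippet)
    context = "\n---\n".join(chosen)
    return f"{header}Context:\n{context}\n\nQuestion: {question}\nAnswer:"


if __name__ == "__main__":
    print(build_prompt(
        "How do I reset my password?",
        ["Passwords are reset through the identity portal.",
         "VPN access requires multi-factor authentication."],
    ))
```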
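And for the monitoring responsibility, here is a minimal sketch of one common drift check, the Population Stability Index (PSI); the bin count and alert threshold are hypothetical and would be tuned per model and feature.

```python
# A minimal sketch of a PSI drift check comparing live traffic against a
# training-time baseline. Bin counts and thresholds are hypothetical.
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare a feature's live distribution against its training baseline."""
    # Derive bin edges from the baseline so both samples share the same bins.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) in sparsely populated bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 10_000)  # training-time distribution
    live = rng.normal(0.3, 1.0, 10_000)      # shifted production traffic
    print(f"PSI = {psi(baseline, live):.3f}")  # > 0.2 commonly flags drift
```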
Other
- This role is hybrid, with a minimum of three days a week in the office to facilitate collaboration and teamwork.
- Applicants must be authorized to work in the United States.
- The University is unable to sponsor work authorization for this role, now or in the future.
- Applicants should be able to manage projects, prioritize tasks, and deliver on schedule.