Contribute to our data transformation initiatives by developing and implementing the infrastructure, pipelines, and processes that power our organization's data-driven operations.
Requirements
- Strong proficiency in SQL (multiple dialects), including the ability to optimize complex queries
- Experience with Python for data processing, automation, and integration
- Demonstrated experience building and maintaining data pipelines
- Experience integrating systems through APIs and web services
- Knowledge of data modeling and database design principles
- Experience with AWS or other cloud platforms (especially data services)
- Familiarity with containerization and orchestration technologies
Responsibilities
- Design, develop, and maintain scalable data pipelines for collecting, processing, and storing organizational data
- Build and implement modern data infrastructure leveraging cloud technologies (preferably AWS)
- Integrate disparate data sources through API connections and automated workflows
- Translate business requirements and technical specifications into working implementations
- Contribute to technical data projects from development through implementation and maintenance
- Implement data quality controls, monitoring systems, and alert mechanisms
- Automate manual data processes to improve efficiency and reliability
Other
- 2-4 years of experience in data engineering, software development, or related technical roles
- Bachelor's degree in computer science, information systems, or a related field preferred (or equivalent experience)
- Strong organizational skills, resourcefulness, and attention to detail
- Demonstrated ability to handle multiple priorities efficiently in a fast-paced, change-oriented environment
- Willingness to travel to CES in January