Relyance AI is looking to enhance its API services, develop robust data pipelines, maintain its microservices architecture, and build scalable systems to support its global data infrastructure.
Requirements
- Experience with Python.
- Ability to write clear, concise, and maintainable code, applying sound design principles and testing practices.
- Experience in designing and evolving data models and ETL pipelines.
- Proficiency with public cloud concepts and experience delivering working solutions on public cloud infrastructure, preferably GCP (BigQuery, Bigtable, Pub/Sub).
- Experience with Infrastructure as Code, containerization, and orchestration tools, along with end-to-end ownership of data governance and management.
Responsibilities
- Design, develop, and maintain API services using Python to ensure seamless integration and functionality.
- Create and optimize data pipelines to efficiently process and transform large datasets.
- Maintain and enhance our microservices architecture, ensuring reliability and scalability.
- Build and maintain data dashboards using tools like Retool and ToolJet to provide actionable insights to stakeholders.
- Design and implement scalable systems that can handle increasing data volumes and user demands.
- Develop systems that manage the entire lifecycle of data, ensuring data quality, security, and accessibility.
Other
- A track record of delivering scalable, reliable service components and data ingestion and processing pipelines.
- A deeply curious mindset, a proactive approach to continuous improvement, and enthusiasm for learning quickly in a fast-growth environment.
- Customer- and mission-driven: motivated by bringing as much value as possible to users and shaping an industry from the ground up.
- Close attention to detail and a forward-thinking outlook.
- Excitement for a fast-paced, iterative, yet heavily test-driven development environment.