FHI is looking for an engineer to build, debug, and maintain production data pipelines and system integrations that deliver reliable data, actionable insights, and process automation across the business.
Requirements
- Strong SQL skills: able to encapsulate complex logic and messy data into simple, consistent models for analysts.
- Practical experience with Python (Pandas nice to have).
- Experience integrating external systems via inbound and outbound APIs.
- Understanding of the logging, error handling, and control flow required to operate production data pipelines.
- Solid grasp of data architecture and modeling (normalized/denormalized structures, star/snowflake schemas).
- Experience using version control with a team, ideally Git.
Responsibilities
- Build and maintain ETL pipelines that ingest and validate source-system data with minimal transformation.
- Design and implement SQL transformation layers that translate raw source-system data into analyst-ready models.
- Build and maintain data integrations via inbound and outbound APIs.
- Independently troubleshoot data failures across the entire data pipeline.
- Automate manual processes and improve data delivery and reliability.
- Create clear documentation (ETL processes, object usage, data models) and test/validate code changes.
Other
- Bachelor’s degree in Computer Science or equivalent experience.
- Experience maintaining data pipelines and integrations across SQL Server/Snowflake or similar environments.
- Prolonged periods of sitting and working on a computer.
- Location: Remote (U.S.) role with working hours aligned to Eastern Time (ET).
- Strong preference for candidates based in North Carolina.