GovCIO is seeking a Data Engineer to support operational solutions in its multi-cloud lakehouse environments on Azure and AWS, ensuring customer satisfaction through efficient data engineering practices.
Requirements
- Proven expertise in Azure services (e.g., Azure APIs, Synapse Analytics, Data Factory) in data lakehouse environments.
- Extensive hands-on experience with Databricks.
- Strong understanding and practical application of Medallion Architecture in data lakehouse environments.
- Solid knowledge of data modeling, ETL/ELT processes, and big data technologies.
- Proficiency in Python and Spark.
- Experience with CI/CD pipelines, Infrastructure as Code (e.g., Terraform, ARM templates).
Responsibilities
- Deliver data engineering work (ingestion, conditioning, and publishing) aligned with published standards.
- Develop data pipelines and integrate data sources.
- Write QA notebooks for engineering work to be reviewed by CX Insights QA personnel, in accordance with policy.
- Align engineering work with published performance and monitoring standards for data pipelines.
- Remediate data pipelines as needed.
- Comply with data quality and governance standards.
- Produce documentation for Data Products.
Other
- This position will be fully remote within the United States.
- Ability to obtain and maintain a Suitability/Public Trust clearance.
- Strong communication and leadership skills.
- Ability to understand the business and translate complex data concepts for non-technical stakeholders.
- Model the attributes of an ideal team player and foster a culture of technical excellence and continuous learning.