Capital Technology Group is seeking to modernize the way the federal government delivers software by supporting high-impact civic tech work within federal agencies. In this role, you will build scalable, high-impact data systems that power analytics, AI, and business decisions.
Requirements
- 3+ years of experience in data engineering or analytics engineering
- Strong SQL and Python skills
- Hands-on experience with Databricks (PySpark, Delta Lake, or Databricks SQL); see the short sketch after this list
- Proficiency in dbt (Core or Cloud), including model development, tests, and deployment
- Familiarity with data warehouse or lakehouse architectures (e.g., Snowflake, BigQuery, Redshift, or similar)
- Comfort working in version-controlled, collaborative environments (Git/GitHub)
- Exposure to CI/CD, Terraform, Airflow, or data observability tools
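As a rough illustration of the kind of Databricks and PySpark work referenced above, here is a minimal sketch of a Delta Lake load step. The table names (`raw.orders`, `analytics.orders_clean`) and column names are hypothetical and used only for illustration.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession named `spark` is already provided; the builder
# call below only matters when running outside that environment.
spark = SparkSession.builder.getOrCreate()

# Hypothetical source table -- adjust catalog/schema names to the real project.
raw_orders = spark.read.table("raw.orders")

# Light transformation: derive a date column for partitioning and drop duplicate orders.
clean_orders = (
    raw_orders
    .withColumn("order_date", F.to_date("order_timestamp"))
    .dropDuplicates(["order_id"])
)

# Write the result as a Delta table, partitioned by order date.
(
    clean_orders.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.orders_clean")
)
```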
Responsibilities
- Design, build, and maintain ETL/ELT pipelines on Databricks (Spark, Delta Lake, SQL Warehouse)
- Develop and manage dbt models for data transformation, testing, and documentation
- Optimize data workflows for performance, cost, and scalability
- Implement data quality, lineage, and CI/CD best practices using Git and modern orchestration tools (e.g., Airflow, Dagster, Prefect); a minimal orchestration sketch follows this list
- Help evolve our lakehouse and semantic modeling layer to support analytics and AI use cases
- Mentor junior engineers and conduct code reviews
- Collaborate with cross-functional stakeholders to define data requirements and deliver solutions in agile environments
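As one possible illustration of the orchestration responsibility above, the sketch below shows a minimal Airflow DAG (assuming Airflow 2.4+) that runs dbt models and then dbt tests. The DAG id, schedule, and project path are assumptions, not part of any actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily pipeline: build dbt models, then run their tests.
with DAG(
    dag_id="daily_dbt_transformations",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",  # hypothetical path
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )

    # Tests only run if the models built successfully.
    dbt_run >> dbt_test
```

Keeping the run and test steps as separate tasks makes failures visible per stage in the Airflow UI and lets tests be retried without rebuilding every model.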
Other
- Must be a US citizen
- Must be able to obtain a Public Trust clearance
- Bachelor's degree in Computer Science, Engineering, or a related technical field
- Excellent written and verbal communication skills, with the ability to clearly explain complex technical topics to both technical and non-technical audiences