GDIT is looking to solve big-data challenges for its clients and advance their missions by delivering transformative solutions. To do so, it needs a Data Engineer Senior to develop enterprise-grade data platforms, services, and pipelines.
Requirements
- 2-4 years of experience working with MS SQL Server and SSIS to build ETL pipelines
- 2-4 years of direct data engineering experience with tools such as:
  - Big data tools: Hadoop, Spark, Kafka, etc.
  - Relational SQL and NoSQL databases, including Postgres and Cassandra
  - Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  - AWS cloud services: EC2, EMR, RDS, Redshift (or Azure equivalents)
  - Object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
- Advanced SQL knowledge and experience with relational databases, including query authoring and optimization, plus working familiarity with a variety of database systems
Responsibilities
- Lead and architect the migration of data environments with a focus on performance and reliability
- Assess and understand existing ETL jobs, workflows, BI tools, and reports
- Address technical inquiries concerning customization, integration, enterprise architecture, and general features/functionality of data products
- Craft database and data warehouse solutions in the cloud (preferably AWS; alternatively Azure or GCP)
- Support an Agile software development lifecycle
- Build solutions using message queuing, stream processing, and highly scalable big-data stores
- Manipulate, process, and extract value from large, disconnected datasets
Other
- Ability to obtain a position of public trust with the US government.
- US citizenship required
- Requires a Bachelor's degree in Data Science or a closely related discipline and 5 years of experience
- 2-4 years of industry experience coding commercial software
- Excellent communication and customer service skills, and a passion for data and solving complex problems