Accenture Federal Services is looking to hire a Data Engineer to develop stages of distributed parallel data processing pipelines, helping the US federal government make the nation stronger and safer and life better for people.
Requirements
- 2 years of experience as a software engineer or data engineer, including 2 years of experience with Python, Java, or other programming languages
- 2 years of experience applying agile methodologies to the software development life cycle (SDLC)
- 2 years of experience with Git repositories and CI/CD pipelines (examples include but are not limited to GitHub and GitLab)
- 2 years of experience with distributed parallel streaming and batch data processing pipelines
- 2 years of experience integrating with data SDKs/APIs and data analytics SDKs/APIs
- 1 year of experience developing, operating, and maintaining data processing pipelines in a classified environment
- 1 year of experience with data mapping, modeling, enriching, and correlating classified data
- 1 year of experience with Python/PySpark
Responsibilities
- Develop stages of distributed parallel data processing pipelines, including but not limited to: configuring data connections, data parsing, data normalization, data mapping and modeling, data enrichment, and integration with data analytics
- Operate and maintain the data processing pipelines in accordance with the availability requirements of the platform
- Follow agile methodologies throughout the data engineering portion of the software development life cycle (SDLC)
- Update technical documentation such as system design documentation (SDD); standard operating procedures (SOPs); tactics, techniques, and procedures (TTPs); and training material
Other
- An active Top Secret federal security clearance with SCI eligibility is required
- Applicants for employment in the US must have work authorization that does not now, or in the future, require sponsorship of a visa for employment authorization in the United States