
Senior Data Engineer

eSimplicity

Salary not specified
Sep 4, 2025
Columbia, MD, US

eSimplicity is looking for a Senior Data Engineer to develop, expand, and optimize its data and data pipeline architecture, and to optimize data flow and collection for cross-functional teams. The role supports eSimplicity's mission of improving the lives and ensuring the security of all Americans by providing intuitive products and services to Federal agencies.

Requirements

  • Minimum of 8 years of data engineering or hands-on software development experience, with at least 4 of those years using Python, Java, and cloud technologies for data pipelining.
  • Expert data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.
  • Experienced in designing data architecture for shared services, scalability, and performance.
  • Experienced in designing data services, including APIs, metadata, and data catalogs.
  • Experienced in data governance processes to ingest (batch and stream), curate, and share data with upstream and downstream data users.
  • Ability to build and optimize data sets, ‘big data’ pipelines, and architectures.
  • Demonstrated understanding of and experience with software and tools including big data tools such as Spark and Hadoop; relational databases including MySQL and Postgres; workflow management and pipeline tools such as Apache Airflow and AWS Step Functions; AWS cloud services including Redshift, RDS, EMR, and EC2; stream-processing systems such as Spark Streaming and Storm; and object-oriented/functional scripting languages including Scala, Java, and Python (a minimal pipeline sketch follows this list).
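
To illustrate the kind of workflow-management work the requirements describe, here is a minimal sketch of a daily batch pipeline using Apache Airflow's TaskFlow API. The DAG name, task bodies, and sample data are hypothetical placeholders for illustration, not eSimplicity's actual pipelines.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def daily_ingest():
    """Hypothetical daily batch pipeline: extract -> transform -> load."""

    @task
    def extract() -> list[dict]:
        # Stand-in for pulling a batch from an upstream source (S3, an API, a database).
        return [{"id": 1, "value": "raw"}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Curate the batch before loading; a real pipeline would also validate and profile here.
        return [{**r, "value": r["value"].upper()} for r in records]

    @task
    def load(records: list[dict]) -> None:
        # Stand-in for writing to a warehouse such as Redshift or RDS.
        print(f"loading {len(records)} records")

    load(transform(extract()))


daily_ingest()
```

Airflow infers the task dependencies from the function calls, so the three steps run in order each day.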

Responsibilities

  • Develop, expand, and optimize our data and data pipeline architecture, and optimize data flow and collection for cross-functional teams.
  • Support software developers, database architects, data analysts, and data scientists on data initiatives, and ensure optimal data delivery architecture is consistent throughout ongoing projects.
  • Create new pipelines and maintain existing ones; update Extract, Transform, Load (ETL) processes and create new ETL features; build PoCs with Redshift Spectrum, Databricks, AWS EMR, SageMaker, etc. (see the sketch after this list).
  • Implement large-dataset engineering with the support of project data specialists: data augmentation, data quality analysis, data analytics (anomalies and trends), data profiling, and data algorithms; measure and develop data maturity models; and develop data strategy recommendations.
  • Operate large-scale data processing pipelines and resolve business and technical issues pertaining to processing and data quality.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements, including redesigning data infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
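
As a purely illustrative example of the ETL and data-quality responsibilities above, the following PySpark sketch reads a raw batch, applies a simple quality screen, reports how many rows it dropped, and writes curated, partitioned output. The S3 paths and column names are assumptions made for the sketch.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read a raw batch landed upstream (hypothetical path).
raw = spark.read.json("s3://example-bucket/landing/events/")

# Transform: basic curation plus a simple data-quality screen.
curated = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_id").isNotNull())
)

# Profile: report how many rows the quality screen removed.
dropped = raw.count() - curated.count()
print(f"rows dropped by quality screen: {dropped}")

# Load: write partitioned output for downstream consumers.
curated.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/events/"
)
```

Partitioning by date keeps downstream reads cheap, which is the usual motivation for the "optimizing data delivery" work the bullets mention.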

Other

  • All candidates must be able to obtain a Public Trust clearance through the U.S. Federal Government.
  • A Bachelor’s degree in Computer Science, Information Systems, Engineering, Business, or another related scientific or technical discipline; with ten years of general information technology experience and at least eight years of specialized experience, a degree is NOT required.
  • Self-sufficient and comfortable supporting the data needs of multiple teams, systems, and products.
  • Flexible and willing to accept a change in priorities as necessary.
  • Ability to work in a fast-paced, team-oriented environment.