

Senior Data Engineer

eSimplicity

Salary not specified
Aug 21, 2025
Columbia, MD, US

eSimplicity is seeking a Senior Data Engineer to help evaluate and design robust data integration solutions for large-scale, disparate datasets spanning multiple platforms and infrastructure types, including cloud-based environments and environments that are still evolving or loosely defined.

Requirements

  • Expertise in data lakes, data warehouses, data meshes, data modeling, and data schemas (star, snowflake, etc.).
  • Strong expertise in SQL, Python, and/or R, with applied experience in Apache Spark and large-scale processing using PySpark or sparklyr.
  • Experience with Databricks in a production environment.
  • Strong experience with AWS cloud-native data services, including S3, Glue, Athena, and Lambda.
  • Strong proficiency with GitHub and GitHub Actions, including test-driven development.
  • Proven ability to work with incomplete or ambiguous data infrastructure and design integration strategies.
  • Experience with cloud platform services: AWS and Azure.
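
The schema expertise listed above (star, snowflake) centers on one idea: fact tables joined to dimension tables. A minimal sketch of a star-schema query follows; it uses Python's built-in sqlite3 purely for illustration, and every table and column name is hypothetical, not taken from the posting.

```python
import sqlite3

# Minimal star-schema sketch: one fact table joined to one dimension table.
# All table/column names are hypothetical, for illustration only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales  (sale_id INTEGER PRIMARY KEY,
                          product_id INTEGER REFERENCES dim_product(product_id),
                          amount REAL);
INSERT INTO dim_product VALUES (1, 'hardware'), (2, 'software');
INSERT INTO fact_sales  VALUES (10, 1, 99.0), (11, 1, 45.0), (12, 2, 200.0);
""")
# Typical warehouse query: aggregate facts by a dimension attribute.
rows = cur.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.category ORDER BY p.category
""").fetchall()
print(rows)  # [('hardware', 144.0), ('software', 200.0)]
```

A snowflake schema extends this pattern by normalizing the dimension tables themselves (e.g., splitting `dim_product` into product and category tables).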

Responsibilities

  • Develop, expand, and optimize our data and data pipeline architecture, and optimize data flow and collection for cross-functional teams.
  • Support software developers, database architects, data analysts, and data scientists on data initiatives, and ensure the optimal data delivery architecture is consistent across ongoing projects.
  • Create new pipelines and maintain existing ones, update Extract, Transform, Load (ETL) processes, add new ETL features, and build PoCs with Redshift Spectrum, Databricks, AWS EMR, SageMaker, etc.
  • Implement, with support from project data specialists, large-dataset engineering: data augmentation, data quality analysis, data analytics (anomalies and trends), data profiling, and data algorithms; measure and develop data maturity models and develop data strategy recommendations.
  • Operate large-scale data processing pipelines and resolve business and technical issues pertaining to the processing and data quality.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using AWS and SQL technologies.
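
The pipeline responsibilities above all follow the extract-transform-load pattern. Below is a stdlib-only sketch of that pattern; in this role the equivalent logic would typically run on PySpark, Glue, or Databricks, and the sample data, table name, and quality rule here are hypothetical.

```python
import csv
import io
import sqlite3

# Extract-Transform-Load sketch using only the standard library.
# Data, table name, and the data-quality rule are hypothetical.
RAW_CSV = """user_id,event,value
1,click,3
2,click,not_a_number
3,purchase,42
"""

def extract(text):
    """Extract: parse raw CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: cast types and drop rows whose 'value' is not numeric."""
    clean = []
    for r in rows:
        try:
            clean.append((int(r["user_id"]), r["event"], float(r["value"])))
        except ValueError:
            continue  # data-quality rule: skip malformed records
    return clean

def load(rows, conn):
    """Load: write cleaned rows into a warehouse-style table."""
    conn.execute("CREATE TABLE events (user_id INT, event TEXT, value REAL)")
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 2 rows survive the quality filter
```

Keeping extract, transform, and load as separate functions mirrors how production pipelines stage their steps, which makes each stage independently testable.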

Other

  • All candidates must be able to obtain a Public Trust clearance through the U.S. Federal Government.
  • Bachelor's degree in Computer Science, Software Engineering, Data Science, Statistics, or related technical field.
  • 10+ years of experience in software/data engineering, including data pipelines, data modeling, data integration, and data management.
  • Occasional travel for training and project meetings, estimated at less than 25% per year.
  • Excellent analytical, organizational, and problem-solving skills.