Principal Data Engineer 1836

MeridianLink

$148,000 - $202,000
Aug 25, 2025
Remote, US

MeridianLink is looking to hire a Principal Data Engineer to design, build, implement, and maintain data processing pipelines for the extraction, transformation, and loading (ETL) of data from a variety of sources. The role will develop robust, scalable solutions that transform data into a useful format for analysis, enhance data flow, and enable end users to consume and analyze data more quickly and easily.
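
As a rough illustration of the pipeline work described above, here is a minimal PySpark ETL sketch; the bucket paths, column names, and partitioning scheme are hypothetical placeholders, not details from the posting.

    # Minimal ETL sketch with PySpark. Paths and column names are
    # hypothetical placeholders, not details from the posting.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("events_etl").getOrCreate()

    # Extract: read semi-structured JSON events from a landing zone.
    raw = spark.read.json("s3://example-bucket/landing/events/")

    # Transform: drop malformed rows, normalize types, stamp the load time.
    clean = (
        raw.filter(F.col("event_id").isNotNull())
           .withColumn("amount", F.col("amount").cast("double"))
           .withColumn("loaded_at", F.current_timestamp())
    )

    # Load: write partitioned Parquet for downstream analytics.
    clean.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-bucket/warehouse/events/"
    )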

Requirements

  • Deep expertise in Python, SQL, and distributed processing frameworks such as Apache Spark, as well as platforms such as Databricks, Snowflake, Redshift, and BigQuery.
  • Proven experience with cloud-based data platforms (preferably AWS or Azure).
  • Hands-on experience with data orchestration tools (e.g., Airflow, dbt) and data warehouses (e.g., Databricks, Snowflake, Redshift, BigQuery); see the orchestration sketch after this list.
  • Strong understanding of data security, privacy, and compliance within a financial services context.
  • Experience working with structured and semi-structured data (e.g., Delta, JSON, Parquet, Avro) at scale.
  • Familiarity with modeling datasets in Salesforce, NetSuite, and Anaplan to solve business use cases is required.
  • Ability to assess unusual circumstances and use sophisticated analytical and problem-solving techniques to identify the cause.
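
Since the requirements call out orchestration tools such as Airflow, here is a minimal sketch of how an extract-transform-load sequence might be scheduled as an Airflow DAG; the DAG id, schedule, and task bodies are hypothetical placeholders.

    # Minimal Airflow DAG sketch (TaskFlow API). DAG id, schedule, and
    # task bodies are hypothetical placeholders.
    from datetime import datetime

    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
    def events_pipeline():
        @task
        def extract() -> str:
            # In practice: pull from an API, database, or object store.
            return "s3://example-bucket/landing/events/"

        @task
        def transform(path: str) -> str:
            # Placeholder for a Spark or dbt transformation step.
            return path.replace("landing", "staged")

        @task
        def load(path: str) -> None:
            print(f"loading {path} into the warehouse")

        load(transform(extract()))

    events_pipeline()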

Responsibilities

  • Design, build, implement, and maintain data processing pipelines for the extraction, transformation, and loading (ETL) of data from a variety of sources.
  • Lead the writing of complex SQL queries to support analytics needs (see the query sketch after this list).
  • Develop technical tools and programs that leverage artificial intelligence, machine learning, and big-data techniques to cleanse, organize, and transform data, and to maintain, defend, and update data structures and integrity on an automated basis.
  • Evaluate and recommend tools and technologies for data infrastructure and processing.
  • Collaborate with engineers, data scientists, data analysts, product teams, and other stakeholders to translate business requirements into technical specifications and coded data pipelines.
  • Work with tools, languages, data processing frameworks, and databases such as R, Python, SQL, Databricks, Spark, Delta, and APIs.
  • Work with structured and unstructured data from a variety of data stores, such as data lakes, relational database management systems, and data warehouses.
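
To make the complex-SQL item above concrete, here is a small window-function example, run against an in-memory SQLite database so it is self-contained; the table and column names are hypothetical placeholders.

    # Analytical SQL sketch: a running total per borrower via a window
    # function. Uses in-memory SQLite so the example is self-contained;
    # table and column names are hypothetical placeholders.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE loan_events (borrower_id INT, event_date TEXT, amount REAL);
        INSERT INTO loan_events VALUES
            (1, '2025-08-01', 1000.0),
            (1, '2025-08-02', 1500.0),
            (2, '2025-08-01', 2000.0);
    """)

    query = """
        SELECT borrower_id,
               event_date,
               amount,
               SUM(amount) OVER (
                   PARTITION BY borrower_id
                   ORDER BY event_date
               ) AS running_total
        FROM loan_events
        ORDER BY borrower_id, event_date;
    """
    for row in conn.execute(query):
        print(row)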

Other

  • Ability to build relationships and networks with senior internal and external partners; influencing partners who are not familiar with the subject matter often requires persuasion.
  • Partner with stakeholders across engineering, finance, sales, and compliance to translate business requirements into reliable data models and workflows.
  • Evaluate emerging technologies and lead POCs that shape the future of our data stack.
  • Champion a culture of security, automation, and continuous delivery in all data workflows.
  • Bachelor's or master's degree in Computer Science, Engineering, or a related field.