Job Board

Get Jobs Tailored to Your Resume

Filtr uses AI to scan 1,000+ jobs and find postings that closely match your resume.


Senior Data Engineer

University of Texas at Austin

$115,000 - $124,968
Dec 5, 2025
TX, US

The UT Data Hub improves university outcomes and advances the UT mission to transform lives for the benefit of society by increasing the usability and value of institutional data. You will create complex data pipelines into UT’s cloud data ecosystem in support of academic and administrative needs. In collaboration with our team of data professionals, you will help build and run a modern data hub to enable advanced data-driven decision making for UT.

Requirements

  • At least two years of hands-on experience in Data Engineering using cloud-based platforms (AWS, Azure, or GCP) with emphasis on Databricks or Spark-based pipelines.
  • Proven experience in designing, building, and automating scalable, production-grade data pipelines and integrations across multiple systems and APIs.
  • Proficiency in Python and SQL, with demonstrated ability to write efficient, reusable, and maintainable code for data transformations and automation (a short illustrative sketch follows this list).
  • Strong knowledge of ETL/ELT principles, data lakehouse architectures, and data quality monitoring.
  • Experience implementing and maintaining CI/CD pipelines for data workflows using modern DevOps tools (e.g., GitHub Actions, Azure DevOps, Jenkins).
  • Familiarity with data governance, security, and compliance practices within cloud environments.
  • Strong analytical, troubleshooting, and performance optimization skills for large-scale distributed data systems.
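
As a brief, hedged illustration of the Python-plus-SQL skill set above, the sketch below runs a small PySpark transformation and writes a Delta table, roughly the shape of work this posting describes. All table and column names (raw.enrollments, curated.enrollment_counts, student_id, term, college) are hypothetical placeholders, not details from the posting, and the Delta write assumes a Databricks or delta-spark-enabled session.

    # Minimal PySpark sketch; all names are illustrative assumptions.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("enrollment-etl").getOrCreate()

    # Read a raw source table registered in the metastore (hypothetical name).
    raw = spark.read.table("raw.enrollments")

    # Python-side cleanup: normalize identifiers and drop rows without a key.
    cleaned = (
        raw.withColumn("student_id", F.trim(F.col("student_id")))
           .filter(F.col("student_id").isNotNull())
    )
    cleaned.createOrReplaceTempView("enrollments_clean")

    # SQL-side aggregation for the curated layer.
    counts = spark.sql("""
        SELECT term, college, COUNT(DISTINCT student_id) AS enrolled
        FROM enrollments_clean
        GROUP BY term, college
    """)

    # Persist as a Delta table (assumes Delta Lake is available in the session).
    counts.write.format("delta").mode("overwrite").saveAsTable("curated.enrollment_counts")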

Responsibilities

  • Lead the design, development, and automation of scalable, high-performance data pipelines across institutional systems, AWS, Databricks, and external vendor APIs.
  • Implement Databricks Lakehouse architectures to unify structured and unstructured data, enabling AI-ready data platforms that support advanced analytics and machine learning use cases.
  • Build robust and reusable ETL/ELT workflows using Databricks, Spark, Delta Lake, and Python to support batch and streaming integrations (see the sketch after this list).
  • Ensure performance, reliability, and data quality of data pipelines through proactive monitoring, optimization, and automated alerting.
  • Partner with business and technical stakeholders to define and manage data pipeline parameters—including load frequency, transformation logic, and delivery mechanisms—ensuring alignment with analytical and AI goals.
  • Ensure all data engineering solutions adhere to university security, compliance, and governance guidelines, while leveraging best practices in cloud-native data development.
  • Develop and maintain comprehensive technical documentation of data pipeline designs, data flows, and operational procedures.
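
To make the batch-and-streaming responsibility above concrete, here is a minimal sketch of a streaming Delta-to-Delta pipeline with a simple data-quality flag that monitoring could alert on. The paths (/mnt/raw/events, /mnt/curated/events, /mnt/chk/events), column names, and quality rule are illustrative assumptions, not details from the posting.

    # Hedged sketch of a streaming ingest with a basic quality flag.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events-stream").getOrCreate()

    # Incrementally read rows appended to a raw Delta table (hypothetical path).
    events = spark.readStream.format("delta").load("/mnt/raw/events")

    # Flag rows that fail a simple rule rather than dropping them silently,
    # so a downstream monitor can alert when the failure rate rises.
    validated = events.withColumn(
        "quality_ok",
        F.col("event_ts").isNotNull() & (F.col("amount") >= 0),
    )

    # Append to the curated table; the checkpoint gives restartable,
    # exactly-once progress tracking for the stream.
    query = (
        validated.writeStream.format("delta")
        .option("checkpointLocation", "/mnt/chk/events")
        .outputMode("append")
        .start("/mnt/curated/events")
    )
    query.awaitTermination()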

Other

  • This is a fixed-term position expected to last one year from the start date, with the possibility of extension.
  • Flexible work arrangements are available for this position, including the ability to work 100% remotely.
  • This position offers work/life balance, with a typical 40-hour work week and travel limited to training (e.g., conferences and courses).
  • Must be authorized to work in the United States on a full-time basis for any employer without sponsorship.
  • This position requires you to maintain internet service and a mobile phone with voice and data plans for use when required for work.