The Trade Desk is seeking to solve hard data engineering problems at scale in order to create a better media ecosystem.
Requirements
4+ years of experience in a data engineering role, with a broad understanding of data modeling, SQL, OLAP, and ETL, required.
4+ years of experience designing and implementing data and analytics solutions across multiple database platforms, using technologies such as Snowflake, Databricks, Vertica, SQL Server, and MySQL, required.
4+ years of experience in one or more programming languages, particularly SQL, required; proficiency in at least one of PL/SQL, Python, C, Scala, or Java also required (a toy illustration follows this list).
Experience with data processing and workflow orchestration technologies such as Spark, Airflow, Glue, Prefect, or Dagster required.
Experience with version control systems, specifically Git, required.
Familiarity with DevOps best practices and with automating processes such as building, configuration, deployment, documentation, testing, and monitoring required.
Understanding of BI and reporting platforms required, along with awareness of industry trends in the BI/reporting space and how they can apply to an organization's product strategies.
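The toy illustration below is not part of the role description; it only sketches the kind of SQL-plus-Python proficiency listed above, assuming an OLAP-style rollup with a window function. It uses SQLite from the standard library so it is self-contained and runnable; the events table, its columns, and the warehouse targets mentioned in the comments are hypothetical.

```python
# Illustrative only: a toy example of the SQL + Python skills listed above.
# Uses SQLite from the standard library so it runs anywhere; in practice the
# same pattern would target a warehouse such as Snowflake or Databricks.
# The table and column names (events, user_id, event_date, revenue) are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE events (user_id INTEGER, event_date TEXT, revenue REAL);
    INSERT INTO events VALUES
        (1, '2024-01-01', 10.0),
        (1, '2024-01-02', 15.0),
        (2, '2024-01-01', 7.5);
    """
)

# OLAP-style rollup: daily revenue per user, plus a running total via a window function.
query = """
WITH daily AS (
    SELECT user_id, event_date, SUM(revenue) AS daily_revenue
    FROM events
    GROUP BY user_id, event_date
)
SELECT
    user_id,
    event_date,
    daily_revenue,
    SUM(daily_revenue) OVER (
        PARTITION BY user_id ORDER BY event_date
    ) AS running_revenue
FROM daily
ORDER BY user_id, event_date;
"""

for row in conn.execute(query):
    print(row)
```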
Responsibilities
Data Pipeline Development: Design, build, and optimize scalable ETL/ELT pipelines for both batch and real-time data processing from disparate sources.
Infrastructure Management: Assist in the design and implementation of data storage solutions, including data warehouses and data lakes (e.g., Snowflake, S3, Spark), ensuring they are optimized for performance and cost efficiency.
Data Quality and Governance: Implement data quality checks, monitor data pipeline performance, and troubleshoot issues to ensure data accuracy, reliability, and security, adhering to compliance standards (e.g., GDPR, CCPA).
Collaboration: Work closely with product managers, data scientists, business intelligence analysts, and other software engineers to understand data requirements and deliver robust solutions.
Automation and Optimization: Automate data engineering workflows using orchestration tools (e.g., Apache Airflow, Dagster, Azure Data Factory) and implement internal process improvements for greater scalability (a minimal sketch follows this list).
Mentorship: Participate in code reviews and provide guidance or mentorship to junior team members on best practices and technical skills.
Documentation: Produce comprehensive and usable documentation for datasets, data models, and pipelines to ensure transparency and knowledge sharing across teams.
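A minimal sketch of the kind of orchestration and data-quality work described above, written against Apache Airflow's TaskFlow API (Airflow 2.4+). The DAG id, schedule, and the extract/validate/load steps are hypothetical placeholders rather than a description of The Trade Desk's actual pipelines.

```python
# Hypothetical sketch only: a tiny daily ETL DAG with a simple data-quality gate.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_etl():
    @task
    def extract():
        # In practice this would pull from an upstream source (API, S3, Kafka, ...).
        return [{"user_id": 1, "revenue": 10.0}, {"user_id": 2, "revenue": 7.5}]

    @task
    def validate(rows):
        # Simple data-quality check: fail the run if a required field is missing.
        bad = [r for r in rows if r.get("revenue") is None]
        if bad:
            raise ValueError(f"{len(bad)} rows failed the revenue null check")
        return rows

    @task
    def load(rows):
        # Placeholder for a warehouse write (e.g., a Snowflake COPY or MERGE).
        print(f"loading {len(rows)} rows")

    load(validate(extract()))


example_etl()
```

Keeping the quality check as its own task means a bad batch fails loudly in the scheduler and never reaches the load step, which is one common way to enforce the accuracy and reliability goals listed above.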
Other
Bachelor's degree in computer science, information security, or a related field, or equivalent work experience. Master's degree preferred.
Strong analytical and problem-solving skills with attention to detail.
Excellent communication and collaboration skills to work effectively with diverse teams and stakeholders.