Point72 is looking to strengthen its data engineering capabilities to support the firm’s expanding data needs.
Requirements
- Expertise in Databricks, including Spark (PySpark or Scala), Delta Lake, and notebook-based development workflows
- Proficiency in building scalable, distributed data pipelines in a cloud environment (preferably Azure or AWS)
- Strong programming skills in Python and SQL
- Solid understanding of data architecture principles, data modeling, and data warehousing
- Experience with version control (e.g., Git), CI/CD workflows, and modern data orchestration tools (e.g., Airflow, dbt)
- Experience with cloud-native and distributed processing frameworks
- Experience with data governance best practices
Responsibilities
- Design, develop, and maintain robust data pipelines and ETL workflows in Databricks
- Ingest, process, and normalize large volumes of structured and unstructured financial data from a variety of sources
- Optimize performance of data pipelines and ensure high availability, reliability, and data quality across all production systems
- Implement data governance best practices, including data lineage, cataloging, auditing, and access controls
- Support the integration of third-party data vendors and APIs into the broader data ecosystem
- Continuously evaluate and implement new tools and technologies to improve data engineering capabilities
- Collaborate closely with data scientists, analysts, and portfolio managers to understand data needs and deliver scalable data infrastructure
Other
- 3–6 years of professional experience in data engineering or a similar role, ideally within a financial services or high-performance computing environment
- Demonstrated ability to work collaboratively in a fast-paced, high-stakes environment with both technical and non-technical stakeholders
- Bachelor’s or Master’s degree in computer science, engineering, or a related technical field
- Commitment to the highest ethical standards