The company is hiring a Data Engineer to help design and build scalable data pipelines for Finance & Capital Markets, spanning both batch and real-time workloads.
Requirements
- Hands-on experience with AWS (S3, Glue, Redshift, EMR, Kinesis).
- Strong knowledge of Apache Spark, Kafka, and Python.
- Familiarity with Parquet, Iceberg, and Medallion Architecture.
- Understanding of financial data flows, time-series data, and risk/compliance reporting.
- Exposure to Databricks, Delta Lake, or dbt.
- Familiarity with Agile and DevOps practices.
Responsibilities
- Develop and maintain batch and streaming data pipelines using AWS, Spark, and Kafka (a streaming sketch follows this list).
- Implement Medallion Architecture layers (bronze, silver, gold) to progressively structure and refine data.
- Work with Parquet and Iceberg to optimize storage and query performance (an Iceberg sketch also follows this list).
- Support real-time data processing for trading and market data.
- Collaborate with business teams to deliver analytics-ready data.
- Ensure data quality, security, and compliance in line with financial regulations.
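For illustration, here is a minimal sketch of the kind of streaming pipeline described above: Kafka messages landed in a bronze layer and refined into a silver layer, Medallion-style. The broker, topic, message schema, and S3 paths are hypothetical placeholders, and a production pipeline would add monitoring, schema evolution, and error handling.

```python
# Minimal sketch: Kafka -> bronze (raw Parquet) -> silver (parsed, validated).
# Broker, topic, schema, and S3 paths below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("market-data-pipeline").getOrCreate()

# Hypothetical schema for a market-data tick message.
tick_schema = StructType([
    StructField("symbol", StringType()),
    StructField("price", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read the raw stream from Kafka.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "market.ticks")                # placeholder topic
    .load()
    .selectExpr("CAST(value AS STRING) AS raw_value", "timestamp AS ingest_time")
)

# Bronze layer: persist messages as-is for replay and audit.
bronze_query = (
    raw.writeStream.format("parquet")
    .option("path", "s3://bucket/bronze/ticks")
    .option("checkpointLocation", "s3://bucket/_chk/bronze_ticks")
    .start()
)

# Silver layer: parse, apply a basic data-quality gate, and deduplicate.
silver = (
    raw.withColumn("tick", F.from_json("raw_value", tick_schema))
    .select("tick.*", "ingest_time")
    .where(F.col("price") > 0)
    .withWatermark("event_time", "10 minutes")
    .dropDuplicates(["symbol", "event_time"])
)

silver_query = (
    silver.writeStream.format("parquet")
    .option("path", "s3://bucket/silver/ticks")
    .option("checkpointLocation", "s3://bucket/_chk/silver_ticks")
    .start()
)

spark.streams.awaitAnyTermination()
```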
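And a short sketch of working with an Iceberg table through Spark SQL, in the spirit of the storage-and-query bullet above. The catalog name, warehouse path, table layout, and snapshot ID are hypothetical, and this assumes the iceberg-spark-runtime package is on the classpath.

```python
# Registering and querying a hypothetical Apache Iceberg table via Spark SQL.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg-sketch")
    # Register an Iceberg catalog named "lake" backed by a Hadoop warehouse.
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3://bucket/warehouse")  # placeholder
    .getOrCreate()
)

# A partitioned Iceberg table for daily trade records (hypothetical layout).
spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.trades (
        symbol     STRING,
        price      DOUBLE,
        trade_date DATE
    )
    USING iceberg
    PARTITIONED BY (trade_date)
""")

# Iceberg keeps snapshot history, enabling time travel for audits and
# compliance reporting; the snapshot ID here is a placeholder.
spark.sql("SELECT * FROM lake.trades VERSION AS OF 1234567890").show()
```

Snapshot-based time travel is one reason Iceberg comes up alongside risk/compliance reporting: historical table states remain queryable after the fact.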
Other
- 3–6 years of experience in Data Engineering, preferably in Finance/Capital Markets.
- Passion for solving complex data challenges.