The Platform Data Engineering team is looking for a Senior Data Engineer to build platform-level data engineering frameworks and tools, and to design and implement data pipelines that feed data into the data warehouse platform across clouds (AWS + Azure).
Requirements
- 4+ years of experience in a high-performance programming language; Java and/or Python preferred
- Experience with normalized (3NF) and dimensional data modeling
- Experience building ETL/ELT pipelines that load data warehouses and data lakes
- Working knowledge of message queuing, stream processing, and highly scalable "big data" stores
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
- Experience with stream-processing systems such as Storm and Spark Streaming
- Experience with big data tools such as Hadoop, Spark, and Kafka
Responsibilities
- Architect, design, and develop data pipelines for scale and maintainability
- Lead in the design, implementation, and deployment of successful systems and services
- Ensure the quality of architecture and design of systems
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and big data technologies
- Perform design and code reviews
- Create data tools for analytics and data science team members that assist them in building and optimizing our product
- Create data APIs for application team members that assist them in building and optimizing our product
Other
- Work independently with minimal guidance
- Enjoy optimizing data systems and building them from the ground up
- Passion for developing scalable systems
- Cross-train peers and mentor teammates
- Bachelor's or master's degree (B.E./B.Tech/MCA/M.Tech)