Freddie Mac is looking to enhance its internal data platform to support data-driven decision-making and modeling across the organization, specifically for prepayment model development, trading analytics, and securitization.
Requirements
- At least five years of experience developing production software
- Strong Python skills, with at least two years of experience writing production code
- At least two years of experience in data engineering, including Apache Spark
- At least one year of experience with Snowflake
- Exposure to AWS and a willingness to learn more
- Experience writing automated unit, integration, regression, performance, and acceptance tests
- Solid understanding of software design principles
Responsibilities
- Design, build, maintain, and support ETL/ELT data pipelines using AWS services (e.g., Amazon EMR) and Snowflake
- Maintain data ingestion libraries written in Java and Python
- Collaborate with data producers, data scientists/modelers, and data consumers to understand their requirements and design solutions that empower them
- Design and develop new code, review existing code changes, and implement automated tests
- Actively seek opportunities to improve the product's technical quality and architecture, and with it the product's business value
- Improve the product's test automation and deployment practices so the team can deliver features more efficiently
- Operate the data pipelines in production, including release management and production support
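As context for the pipeline responsibilities above, a toy transform step of the kind such ETL/ELT code contains might look like the following plain-Python sketch. The field names (`loan_id`, `rate`) and cleaning rules are illustrative assumptions only, not Freddie Mac specifics; in production this logic would run in Spark on Amazon EMR with Snowflake as the target, as described above.

```python
import csv
import io

def transform_loan_rows(raw_csv: str) -> list:
    """Toy ETL transform: parse CSV input, drop incomplete records,
    and normalize the rate field to a float.

    Field names are hypothetical and chosen for illustration only.
    """
    reader = csv.DictReader(io.StringIO(raw_csv))
    cleaned = []
    for row in reader:
        # Skip records missing a key or a rate value
        if not row.get("loan_id") or not row.get("rate"):
            continue
        cleaned.append({
            "loan_id": row["loan_id"].strip(),
            "rate": float(row["rate"]),  # e.g. "3.25" -> 3.25
        })
    return cleaned

sample = "loan_id,rate\nA1,3.25\nA2,\nA3,4.10\n"
rows = transform_loan_rows(sample)
print(rows)  # only the two complete records survive cleaning
```

Logic like this is also the natural unit for the automated unit and integration tests the role calls for, since it is deterministic and independent of any cluster or warehouse.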
Other
- A passion for hands-on software development
- A desire to work on all aspects of the software development lifecycle: requirements gathering, design, development, testing and operations
- Strong collaboration and communication skills (both written and verbal)
- A desire to continuously improve the team's technical practices
- Ability to quickly learn, apply and deploy new technologies to solve emerging problems