FactSet is building and maintaining next-generation data pipelines and ETL infrastructure to empower its sophisticated user base, including quantitative analysts, data scientists, and application developers, and to drive product excellence.
Requirements
- Experience programming with Python and SQL, including their use for data transformation
- Experience building data processing and ETL pipelines
- Familiarity with workflow orchestration tools (Airflow, Dagster, or similar)
- Experience working with cloud infrastructure (AWS, Azure, or GCP)
- Familiarity with data warehousing platforms (Snowflake, Redshift, or similar)
- Experience working in a Linux programming environment
Responsibilities
- Design, develop, and maintain highly scalable data pipelines that are robust, efficient, and reliable
- Actively participate in the Agile/Scrum development process, contributing to regular sprint planning, daily standups, and iterative reviews to ensure continuous progress and improvement
- Collaborate closely with stakeholders to develop clear specifications and innovative features that directly address client needs and drive product excellence
- Design and implement reliable, high-quality software
- Participate in our ongoing effort to build out our data infrastructure and deliver solutions that transform the way our clients interact with data
Other
- BS or MS degree in Computer Science (or equivalent)
- Motivated self-starter who thinks creatively
- Strong desire to learn
- Attention to detail along with the ability to see the big picture
- Excellent communication and collaboration skills