Epsilon is looking for an engineer to optimize AI/ML workflows through data engineering and platform support, and to help migrate existing data and applications to cloud platforms.
Requirements
- Proficient in programming with Scala, Python, or Java; comfortable building data-driven solutions at scale.
- Familiarity with Apache Spark and exposure to Hadoop, Hive, or related big data technologies.
- Experience with cloud platforms (AWS, Azure, Databricks, or GCP) and an interest in cloud migration projects.
- Exposure to modern data tools and frameworks such as Kubernetes, Docker, and Airflow (a plus).
- Hadoop or Spark certification is a plus.
Responsibilities
- Collaborate with decision scientists to enable and optimize AI/ML workflows through data engineering and platform support.
- Provide support for Spark, Hive, and Hadoop jobs, including troubleshooting, performance analysis, and optimization (see the sketch after this list for the kind of tuning involved).
- Participate in agile sprint cycles, helping to review designs, provide feedback, and ensure successful delivery.
- Contribute to best practices for application development.
- Gather requirements for platform and application enhancements and work with the team to implement them.
- Continuously learn and expand your technical skills in data engineering and cloud technologies.
- Support the migration of existing data or applications to cloud platforms (AWS, Azure, Databricks, or GCP).
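For context, the Spark performance work described above often resembles the following minimal Scala sketch: broadcasting a small dimension table to avoid a shuffle of the large side, then repartitioning before a partitioned write. The bucket paths, table names, and column names are hypothetical, and this is an illustration of the kind of work, not a prescribed workflow.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object JoinTuningSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("join-tuning-sketch")
      .getOrCreate()

    // Hypothetical inputs: a large fact table and a small dimension table.
    val events = spark.read.parquet("s3://example-bucket/events/") // large
    val users  = spark.read.parquet("s3://example-bucket/users/")  // small

    // Broadcasting the small side avoids shuffling the large table for the join.
    val joined = events.join(broadcast(users), Seq("user_id"))

    // Repartitioning by the write key keeps output files evenly sized.
    joined
      .repartition(200, joined("event_date"))
      .write
      .partitionBy("event_date")
      .mode("overwrite")
      .parquet("s3://example-bucket/joined-events/")

    spark.stop()
  }
}
```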
Other
- 3+ years of software development experience in a scalable, distributed, or multi-node environment.
- Strong problem-solving skills with the ability to own problems end-to-end and deliver results.
- Consultative attitude — comfortable being “first in,” building relationships, communicating broadly, and tackling challenges head-on.
- Collaborative teammate with an eagerness to learn from peers and mentors while contributing to a culture of growth.
- Motivated to grow your career within a dynamic, innovative company.