Epsilon is seeking an engineer to design, develop, and maintain high-performance applications and microservices within a Big Data environment, unlocking real opportunities through innovation and technology.
Requirements
- Proficiency in Java and experience building data-driven solutions at scale.
- Familiarity with Apache Spark and exposure to Hadoop, Hive, or similar Big Data tools.
- Experience with cloud and data platforms (AWS, Azure, GCP, Databricks).
- Knowledge of modern frameworks and tools such as Kubernetes, Docker, and Airflow.
- Strong problem-solving skills and ability to deliver end-to-end solutions.
Responsibilities
- Develop and optimize Spark jobs, including troubleshooting and performance tuning.
- Collaborate across teams to deliver robust, scalable solutions in agile sprint cycles.
- Implement best practices for application development and deployment.
- Build and deploy components on cloud and data platforms (AWS, Azure, GCP, Databricks).
- Continuously expand technical expertise in data engineering and cloud technologies.
Other
- Bachelor’s degree in Computer Science or a related field.
- 3+ years of software development experience in distributed or multi-node environments.
- Collaborative mindset with a passion for learning and growth.
- Time to Recharge: flexible time off (FTO) and 15 paid holidays.
- Comprehensive health coverage, 401(k), tuition assistance, commuter benefits, professional development, employee recognition, charitable donation matching, and health coaching and counseling.