JPMorgan Chase is looking to enhance, build, and deliver trusted, market-leading technology products in a secure, stable, and scalable way within the Commercial & Investment Bank Regulatory Reporting Team.
Requirements
- Proven expertise in Kafka, Spark, Structured Streaming, and Spark SQL
- Experience designing and implementing scalable data processing pipelines using Apache Kafka, Apache Spark, and Structured Streaming
- Experience integrating data processing solutions with AWS services such as Amazon MSK (managed Apache Kafka), Amazon S3, AWS Lambda, and Amazon EMR
- Strong experience with AWS services and cloud-based architectures
- Advanced experience with Snowflake and Snowpipe Streaming
- Experience with data enrichment, transformation, and optimization techniques
- Experience with Python/shell scripting and working in a Linux environment
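To illustrate the kind of data enrichment and transformation work listed above, here is a minimal, framework-free Python sketch. The record schema, the `DESK_BY_BOOK` lookup table, and all field names are invented for illustration only; in the actual role this logic would typically be expressed as Spark Structured Streaming transformations over a Kafka source, with reference data coming from Snowflake or S3.

```python
import json
from datetime import datetime, timezone
from typing import Optional

# Hypothetical reference data used to enrich each record; in a real
# pipeline this might be a lookup table loaded from Snowflake or S3.
DESK_BY_BOOK = {"EQ-1": "Equities", "FX-7": "FX"}

def enrich(raw: str) -> Optional[dict]:
    """Parse one JSON message, drop malformed input, and add derived fields."""
    try:
        rec = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed messages are filtered out of the stream
    if "book" not in rec or "notional" not in rec:
        return None  # records missing required fields are also dropped
    # Enrichment: join against reference data.
    rec["desk"] = DESK_BY_BOOK.get(rec["book"], "Unknown")
    # Transformation: derive a reporting-friendly field (notional in $MM).
    rec["notional_musd"] = round(rec["notional"] / 1_000_000, 3)
    rec["processed_at"] = datetime.now(timezone.utc).isoformat()
    return rec

# One well-formed message and one malformed message, as might arrive from Kafka.
msgs = ['{"book": "EQ-1", "notional": 2500000}', "not json"]
out = [r for m in msgs if (r := enrich(m)) is not None]
```

The same parse-validate-enrich-derive shape maps directly onto a `from_json` / join / `withColumn` chain in Spark SQL.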
Responsibilities
- Executes creative software solutions, design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Implement real-time data processing solutions to handle large volumes of data efficiently
- Ensure data processing solutions adhere to security and compliance standards
- Document data processing workflows, architecture, and best practices
- Optimize data processing pipelines for performance and scalability
- Monitor and troubleshoot performance issues in Kafka and Spark applications
- Leads communities of practice across Software Engineering to drive awareness and use of new and leading-edge technologies
Other
- Formal training and certification on software engineering concepts and 3+ years applied experience
- Adds to team culture of diversity, opportunity, inclusion, and respect
- Experience developing, debugging, and maintaining code in a large corporate environment
- Ability to work in an agile team
- Experience building distributed systems at Internet scale