Capital One is seeking to solve complex business problems by building and maintaining sophisticated data pipelines that ingest data from major auto customers.
Requirements
- Experience with big data technologies (Spark, Flink, Kafka, Snowflake, AWS Big Data Services, Redshift)
- Experience with programming languages such as Java, Scala, and Python
- Experience with public cloud (AWS, Microsoft Azure, Google Cloud)
- Experience with distributed data processing tools (Kafka, Spark, Flink); see the pipeline sketch after this list
- Experience with NoSQL databases (DynamoDB, OpenSearch)
- Experience with data warehousing (Redshift or Snowflake)
- Experience with UNIX/Linux including basic commands and shell scripting
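
For illustration only (not part of the posting's requirements), here is a minimal sketch of the kind of distributed streaming pipeline these tools are typically combined to build: a Spark Structured Streaming job that reads auto-customer events from Kafka and writes windowed aggregates to S3. The broker address, topic name, event schema, and S3 paths below are hypothetical placeholders.

```python
# Minimal Spark Structured Streaming sketch: Kafka source -> windowed counts -> S3 sink.
# All connection details and the event schema are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col, window, count
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("auto-events-stream").getOrCreate()

# Hypothetical event schema for illustration only.
schema = StructType([
    StructField("customer_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "auto-customer-events")        # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# 5-minute tumbling-window counts per event type, tolerating 10 minutes of late data.
counts = (
    events
    .withWatermark("event_time", "10 minutes")
    .groupBy(window(col("event_time"), "5 minutes"), col("event_type"))
    .agg(count("*").alias("events"))
)

query = (
    counts.writeStream
    .outputMode("append")
    .format("parquet")
    .option("path", "s3a://example-bucket/auto-events/")                # placeholder sink
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/")  # placeholder checkpoint
    .start()
)
query.awaitTermination()
```

Running a job like this also requires the Spark Kafka connector (spark-sql-kafka) on the cluster, typically supplied via spark-submit --packages.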
Responsibilities
- Design and build an enterprise-level, scalable, low-latency, fault-tolerant streaming data platform
- Build next-generation distributed streaming data pipelines and analytics data stores using streaming frameworks
- Build data pipelines using big data technologies on medium- to large-scale datasets
- Work in a creative and collaborative environment driven by Agile methodologies
- Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions
- Write unit tests and conduct code reviews with other team members to ensure code is rigorously designed, elegantly coded, and effectively tuned for performance (a minimal test sketch follows this list)
- Collaborate with digital product managers and deliver robust cloud-based solutions
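
Purely as an illustration of the unit-testing responsibility above, the sketch below tests a hypothetical pure transformation function of the kind a pipeline might factor out for testing; the function name, fields, and validation rules are assumptions, not Capital One code.

```python
# Hypothetical pipeline helper plus pytest tests, for illustration only.
import pytest


def normalize_event(raw: dict) -> dict:
    """Lower-case the event type and reject records without a customer id."""
    if not raw.get("customer_id"):
        raise ValueError("missing customer_id")
    return {
        "customer_id": raw["customer_id"],
        "event_type": raw.get("event_type", "unknown").lower(),
    }


def test_normalize_event_lowercases_type():
    out = normalize_event({"customer_id": "c-1", "event_type": "LOAN_APPLIED"})
    assert out == {"customer_id": "c-1", "event_type": "loan_applied"}


def test_normalize_event_rejects_missing_customer_id():
    with pytest.raises(ValueError):
        normalize_event({"event_type": "LOAN_APPLIED"})
```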
Other
- Bachelor’s Degree
- At least 3 years of experience in application development
- At least 1 year of experience in big data technologies
- Ability to work in a fast-paced, inclusive, and iterative delivery environment
- Ability to collaborate with and across Agile teams
- Ability to work with digital product managers
- Must not require sponsorship for employment authorization