Intuit is looking to leverage big data technologies to gain new insights into customer experiences by building data frameworks, ingestion pipelines, and tools.
Requirements
- Proficiency in developing software in Java (Spring and Spring Boot), Scala for Spark Streaming and Spark applications, or other JVM-based languages (a minimal streaming sketch follows this list)
- Working knowledge of SQL, XML, JSON, and YAML; very strong Python and Linux skills
- Familiarity with tools and frameworks such as Docker, Spark, Scala, Jupyter notebooks, Databricks notebooks, Kubernetes, feature management platforms, and SageMaker
- Advanced experience with a scripting language (Python or shell) is a must-have
- Strong knowledge of software development methodologies and practices
- Experience with cloud platforms such as AWS, Azure, or GCP; on AWS this includes EC2, S3, and EMR (Elastic MapReduce), or equivalent cloud services
- Strong expertise in data warehousing and analytics architecture
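To give a feel for the Spark Streaming skills called for above, here is a minimal PySpark Structured Streaming sketch. The Kafka broker address, topic name, and event schema are hypothetical placeholders, not part of this role's actual stack.

```python
# Minimal PySpark Structured Streaming sketch: read JSON events from Kafka,
# parse them, and write a running per-event count to the console.
# Broker, topic, and schema below are illustrative placeholders.
# (Run with the spark-sql-kafka-0-10 package on the classpath.)
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StringType, TimestampType

spark = SparkSession.builder.appName("clickstream-demo").getOrCreate()

schema = (StructType()
          .add("user_id", StringType())
          .add("event", StringType())
          .add("ts", TimestampType()))

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
          .option("subscribe", "clickstream")                   # placeholder topic
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

counts = events.groupBy("event").count()

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```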
Responsibilities
- 90% hands-on development in all phases of the software life cycle.
- Rapidly fix bugs and solve problems
- Perform code reviews and defect remediation
- Clean, transform, and validate data for use in analytics and reporting
- Monitor data quality and pipeline performance; troubleshoot and resolve data issues
- Design and develop ETL jobs across multiple big data platforms and tools, including S3, EMR, Hive, Spark SQL, and PySpark (see the ETL sketch after this list)
- Stay abreast of industry best practices, share learnings, and experiment with and apply cutting-edge technologies, proactively identifying opportunities to enhance software applications with AI
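As a concrete illustration of the ETL, cleaning, and validation responsibilities above, here is a small PySpark batch job sketch. The S3 bucket, paths, and column names are invented for the example.

```python
# Illustrative PySpark batch ETL job: read raw CSV from S3, clean and
# validate the records, and write partitioned Parquet back to S3.
# Bucket names, paths, and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .csv("s3://example-bucket/raw/orders/"))  # placeholder path

# Clean and transform: normalize types, drop rows missing required fields.
orders = (raw
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .withColumn("amount", F.col("amount").cast("double"))
          .dropna(subset=["order_id", "order_ts", "amount"]))

# Basic data-quality validation: reject negative amounts and report the count.
bad_count = orders.filter(F.col("amount") < 0).count()
print(f"rejected {bad_count} rows with negative amounts")
clean = orders.filter(F.col("amount") >= 0)

# Write partitioned by date for efficient downstream queries.
(clean
 .withColumn("dt", F.to_date("order_ts"))
 .write.mode("overwrite")
 .partitionBy("dt")
 .parquet("s3://example-bucket/curated/orders/"))
```

Partitioning the output by date keeps downstream Hive and Spark SQL scans cheap, in line with the platforms listed above.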
Other
- Collaborate effectively with senior engineers and architects to solve problems spanning their respective areas and deliver end-to-end quality in our technology and customer experience
- Influence and communicate effectively
- Experience with Agile development, Scrum, and/or Extreme Programming methodologies
- BS or MS in Computer Science, Data Engineering, or a related field
- 1+ years of core development experience