The company seeks to eliminate manual processes and inefficiencies by implementing data solutions that support process automation.
Requirements
- Experience in architecting, designing, and maintaining data environments in a large enterprise
- Experience in Python programming, Spark SQL, and data lake design
- Hands-on experience with AWS services (S3, Glue, Lambda, Step Functions, Athena, CloudWatch, DynamoDB, and more)
- Proficiency in writing complex SQL queries and optimizing them for performance
- Experience with ETL processes using SSIS, Airflow, or relevant job orchestration
- Experience with data visualization tools (Amazon QuickSight, Power BI)
- Strong understanding of automation tools and frameworks for data pipelines
Responsibilities
- Collaborate with stakeholders to design, architect, and support the data needs of the automation team
- Provide technical input for the design, development, testing, deployment, and maintenance of standard and complex data needs
- Interpret data models and extract meaningful insights from data within various source systems
- Develop, manage, and track project deliverables in a timely manner
- Recognize system deficiencies and implement effective solutions
- Stay updated on the latest features, upgrades, and industry trends to ensure continuous improvement of data utilized in automation solutions
- Ensure internal control systems are accurately documented and working effectively
Other
- 4+ years of relevant experience
- Basic understanding of JavaScript and some experience supporting data needs for workflow tools
- Understanding of basic accounting principles
- Experience in project lifecycle, including requirements gathering and development of implementation strategies
- Experience in platforms such as Workday Studio, Appian or similar tools
- Must live within commuting distance of the Dallas, TX metroplex or San Juan Capistrano, CA
- Pre-employment criminal background screening required