Accenture Flex is looking for individuals to help drive business transformation for leading organizations and communities by applying their skills and experience to address today's biggest business challenges using the latest emerging technologies.
Requirements
- Minimum of 3 years of work experience with one or more of the following: Databricks Data Engineering, DLT, Azure Data Factory, SQL, PySpark, Synapse Dedicated SQL Pool, Azure DevOps, Python, Azure Function Apps, Azure Logic Apps, Precisely, and Azure Cosmos DB
- Advanced proficiency in PySpark, Microsoft Azure Databricks, Azure DevOps, Databricks Delta Live Tables, and Azure Data Factory
Responsibilities
- Create new data pipelines leveraging existing data ingestion frameworks and tools
- Orchestrate data pipelines using the Azure Data Factory service.
- Develop and enhance data transformations, based on requirements, to parse, transform, and load data into the Enterprise Data Lake, Delta Lake, and Enterprise DWH (Synapse Analytics) (see the illustrative sketch after this list)
- Perform unit testing; coordinate integration testing and UAT
- Create HLD/DD documents and runbooks for the data pipelines
- Configure compute and DQ rules; perform maintenance, performance tuning, and optimization
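For context, the sketch below illustrates the kind of PySpark transformation and Delta Lake load this role involves. It is a minimal example, not part of the requirements: the storage paths, column names (order_id, order_ts), and application name are hypothetical placeholders, and it assumes a Databricks or Delta-enabled Spark environment.

```python
# Minimal sketch: parse raw landing-zone files, apply light cleansing,
# and load the result into a Delta Lake table.
# All paths, columns, and names below are hypothetical placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-ingest").getOrCreate()

# Read raw files from a placeholder landing-zone path.
raw_df = spark.read.json(
    "abfss://landing@<storage_account>.dfs.core.windows.net/orders/"
)

# Parse and transform: cast the timestamp, derive a load date, drop rows
# with a missing key.
clean_df = (
    raw_df
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("load_date", F.current_date())
    .filter(F.col("order_id").isNotNull())
)

# Load into a Delta Lake table (placeholder path); append-only in this sketch.
(
    clean_df.write
    .format("delta")
    .mode("append")
    .save("abfss://curated@<storage_account>.dfs.core.windows.net/delta/orders")
)
```

In practice, a job like this would typically be parameterized and triggered as an activity in an Azure Data Factory pipeline, in line with the orchestration responsibility above.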
Other
- Active participation and contribution in team discussions is required, as you will help provide solutions to work-related problems.
- Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States.
- Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration.
- Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
- The Company will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant.