NISC is looking to optimize its data and data pipeline architecture in Databricks to support its application experts, software developers, database architects, and data analysts on a Data Roadmap strategy.
Requirements
- Experience building and optimizing data pipelines, architectures, and data sets
- Hands-on experience developing and optimizing data pipelines and workflows using Databricks
- Experience with AWS services: Lambda, S3, SQS, SNS, CloudWatch, etc.
- Experience with Databricks and Delta Lake
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Oracle, Postgres, Cassandra, and DynamoDB
- Experience with data pipeline and workflow management tools: Hevo Data, Airflow, etc.
Responsibilities
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Design and build optimal data pipelines from a wide variety of data sources using AWS and Databricks technologies
- Create data tools that help analytics and data science team members build and optimize a unified data stream
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc
- Create and maintain a culture of engagement that is conducive to NISC’s Statement of Shared Values
Other
- Bachelor’s degree in Computer Science, Statistics, Informatics, Information Systems, or a similar discipline, preferred
- Certification in Database Administration, along with relevant experience, accepted in lieu of a 4-year degree
- Strong verbal and written communication skills
- Ability to demonstrate composure and think analytically in high pressure situations
- Commitment to NISC’s Statement of Shared Values