Nava is looking to modernize an existing legacy enterprise platform to improve public-facing processes and experiences for one of its major government partners.
Requirements
- At least 5 years of data engineering experience
- At least 2 years of experience in cloud data architecture (AWS preferred) and big data technologies
- Experience with AWS services, including Glue, Athena, Redshift, and RDS
- Experience with Databricks or Spark
- Experience with building ETL/ELT pipelines in Python
- Proficiency with relational databases and advanced SQL queries
- Experience with data cleaning and data modeling while protecting sensitive data
Responsibilities
- Collect data access patterns and review current data models to optimize designs for customer use cases
- Develop scalable data ingestion and processing pipelines
- Implement large-scale data ecosystems on cloud-based platforms, including data management and governance of structured and unstructured data
- Design, develop, test, automate, and deploy data engineering solutions on a cloud platform such as AWS
- Participate in software design and code reviews
- Develop automated testing, monitoring and alerting, and CI/CD for production systems
- Maintain security and privacy standards in all aspects of the data pipeline
Other
- Must be legally authorized to work in the United States
- Must meet any other requirements for government contracts for which candidates are hired
- Must have work authorization that doesn’t require visa sponsorship, now or in the future
- May be subject to a government background check or security clearance, depending on the contract
- Must reside in one of the following locations: Alabama, Arizona, California, Colorado, DC, Florida, Georgia, Illinois, Louisiana, Maine, Maryland, Massachusetts, Michigan, Minnesota, Missouri, Nevada, New Jersey, New York, North Carolina, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Carolina, Tennessee, Texas, Utah, Virginia, Washington, Wisconsin