Elastic, the Search AI Company, is seeking a Staff Data Engineer to contribute to building a world-class Data Platform, making it easier for internal customers to leverage data as a strategic asset.
Requirements
- Demonstrated interest in data engineering through personal projects, coursework, or contributions to open-source projects.
- Proficiency in programming languages, particularly Bash, Python, and SQL.
- Knowledge of database systems, including OLAP, OLTP, Document, and Vector databases.
- General understanding of data pipelines, ETL/ELT processes, data modeling, and data warehousing concepts.
- Basic knowledge of Git and DevOps practices.
- Proficiency with data processing frameworks such as Kafka, Flink, and Spark.
- Experience with Apache Iceberg and data lakehouses.
Responsibilities
- Assist in developing, implementing, and optimizing data pipelines using Apache Airflow for workflow orchestration.
- Develop and maintain ETL/ELT processes using Apache Spark, Apache Iceberg, and dbt.
- Implement real-time data streaming solutions using Apache Kafka and Apache Flink.
- Use Terraform for infrastructure-as-code to deploy resources on Google Cloud Platform (GCP).
- Document best practices for using our platform and guide users in applying them.
- Monitor and audit the data platform to ensure policies and procedures are followed.
- Participate in code reviews to improve code quality and verify that standard processes are followed.
Other
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication skills.
- Eagerness to learn and stay current with the latest data engineering trends.
- Competitive pay based on the work you do here, not your previous salary.
- Health coverage for you and your family in many locations.