Applied Intuition is scaling its open-source data infrastructure to support massive volumes of data for its platform, and is seeking an infrastructure engineer to design and develop both external and internal products across the entire data lifecycle.
Requirements
- Experience with large-scale open-source data technologies (Spark, Kafka, Hudi, Flyte, etc.).
- Experience with containerization and other modern software development workflows.
- Knowledge of the open-source landscape, with the judgment to decide when to adopt open source versus build in-house.
- Expertise in modern programming languages (Python, C++, Go, Scala, etc.).
- Experience with other open-source data technologies not listed above.
- Expertise with Kubernetes.
- Experience with enterprise software, including on-prem and/or cloud environments.
Responsibilities
- Scale infrastructure to support all deployment types (cloud, hybrid, on-prem) and across regions.
- Be involved in the end-to-end data lifecycle, from the external-facing product down to the underlying platform and infrastructure.
- Build features to tune the processing pipeline for fast data ingestion and indexing, depending on customers' needs and workloads.
- Enable product workflows that expose performant query interfaces and offer easy-to-use integration hooks.
- Develop and deploy high-quality software using modern tooling and frameworks, especially open-source technologies.
- Handle massive volumes of data for Applied Intuition's platform needs.
- Work directly with all business units across the company to design and develop both external and internal products.
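To give a flavor of the ingestion-tuning work described above, here is a minimal, purely illustrative sketch in Python: a toy pipeline that batches incoming records and builds a simple index, with a tunable batch size (larger batches favor throughput, smaller ones favor freshness). All names here are hypothetical examples, not Applied Intuition APIs.

```python
# Illustrative only: a toy batched-ingestion pipeline with a tunable
# batch size and a simple key-based index. This sketches the kind of
# ingestion/indexing trade-off a platform engineer might tune; it is
# not Applied Intuition's actual pipeline.
from collections import defaultdict
from typing import Iterable, Iterator


def batched(records: Iterable[dict], batch_size: int) -> Iterator[list]:
    """Group records into batches of at most batch_size."""
    batch = []
    for record in records:
        batch.append(record)
        if len(batch) >= batch_size:
            yield batch
            batch = []
    if batch:  # flush any trailing partial batch
        yield batch


def ingest(records: Iterable[dict], batch_size: int = 100) -> dict:
    """Ingest records batch by batch, building an index that maps each
    record's 'key' field to the list of records carrying that key."""
    index = defaultdict(list)
    for batch in batched(records, batch_size):
        for record in batch:
            index[record["key"]].append(record)
    return dict(index)


# Example: ingest six records in batches of two.
records = [{"key": k, "value": v} for k, v in zip("aabbcc", range(6))]
index = ingest(records, batch_size=2)
```

In a real system the batch size would be one of many knobs (alongside parallelism, compaction, and index layout) exposed per customer workload.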
Other
- In-office work 5 days a week, with occasional remote work allowed.
- Bachelor's, Master's, or Ph.D. degree (implied rather than explicitly stated).
- Ability to work in a team and interact with external and internal users to collect feedback.
- Must be willing to take ownership of technical and product decisions.
- Must be able to work in the location listed (varies by job posting).