Radical AI, Inc. uses artificial intelligence and machine learning to accelerate materials R&D, pioneering generative materials science to change how materials are discovered and created and to help address global challenges.
Requirements
- 6+ years in data engineering, with proven experience building and managing enterprise-scale, auditable ETL pipelines and complex datasets.
- Strong background in computer science: algorithms, data structures, system design, and data modeling.
- Strong Python and/or Go skills and experience deploying production-grade code (not just notebooks).
- Experience with relational (SQL) and NoSQL databases and data lakehouse/warehouse frameworks (e.g., BigQuery, Snowflake, Redshift, Databricks).
- Experience with data pipeline frameworks (e.g., Beam, Spark, Kafka, Pulsar), modern data orchestration frameworks (e.g., Dagster, Prefect), and cloud-native storage (e.g., S3, ADLS).
- Experience with large-scale distributed system design, operation, and optimization.
- Experience with lightweight, embedded analytical databases (e.g., DuckDB, SQLite).
Responsibilities
- Design and implement data pipelines connecting our autonomous lab with operational and product systems, including our AI-driven discovery engine.
- Build and support scalable, audit-ready architecture.
- Define and enforce best practices for data modeling, lineage, observability, and reconciliation across data domains.
- Ensure data systems are AI-ready and capable of supporting predictive analytics, autonomous agent workflows, and large-scale automation.
- Own the end-to-end planning, execution, and delivery of large-scale data initiatives.
- Help define the roadmap for agentic AI automation, enabling intelligent workflows, process automation, and AI-driven decision-making within scientific discovery.
- Work side by side with AI researchers and engineers, shaping the next generation of autonomous systems for materials discovery.
Other
- B.S. or M.S. in Computer Science, Information Security, Engineering, or a related field, or equivalent practical experience.
- An experimental mindset: comfortable testing hypotheses, learning from failures, and iterating quickly; doesn’t shy away from problems with no right answer.
- Proven ability to scope ambiguous problems, develop end-to-end solutions, and communicate outcomes effectively.
- As a foundational member of the team, you'll have the opportunity to shape the standards, culture, and values of the data engineering organization.
- Drive a continuous improvement culture through retrospectives, process streamlining, and lean execution experiments.