Shape the next generation of productivity tools by working on pioneering technologies that surprise and delight our users.
Requirements
- Understanding of how modern deep learning models work and how they’re applied to solve user problems.
- Experience with distributed computing frameworks (e.g., Spark, Hadoop) for large-scale data processing.
- Familiarity with cloud platforms (e.g., AWS, GCP, Azure) and ML deployment tools.
- Experience contributing actionable insights that drive performance improvements in production environments.
- Strong grounding in statistical analysis and modeling, including correlation analysis, clustering methods, probability theory, principal component analysis, outlier detection, and data visualization.
- Experience applying ML approaches to statistical analysis.
- Proficiency with Python and numeric/statistical libraries such as pandas, NumPy, and SciPy.
Responsibilities
- Analyzing user feedback and telemetry to identify domain gaps between training and real-world data distributions.
- Designing small-scale experiments to validate your hypotheses and inform further experimentation.
- Developing tooling, presentations, diagrams, and other materials to effectively communicate the actionable insights you generate from your analysis.
- Collaborating with our research and post-training teams to inform additional model iterations and data collection.
Other
- Ability to distill complex data analysis into clear, user-focused recommendations and visualizations that drive product enhancements.
- Strong English communication skills (both written and verbal).