Rakuten International aims to lead the shift to an AI-first future in software testing, building and enhancing AI-powered automation tools and intelligent testing frameworks that accelerate development, improve quality, and minimize manual effort.
Requirements
- Proficiency in at least one programming language (e.g., Python, Java, JavaScript/TypeScript), with the ability to read, write, and debug test scripts.
- Familiarity with test automation frameworks and tools (e.g., Selenium, Playwright, PyTest, JUnit), and interest in learning AI-enhanced testing platforms.
- Exposure to or willingness to learn Continuous Integration/Continuous Deployment (CI/CD) systems and how automated tests fit into build pipelines.
- Basic understanding of containerization and orchestration tools (e.g., Docker, Kubernetes), and how tests run within CI/CD pipelines in cloud-native environments.
- Exposure to cloud platforms (AWS, Azure, or Google Cloud), especially related to test execution environments, observability tooling, or cost-aware testing strategies.
- Interest in or exposure to AI/ML technologies, especially as applied to software quality, such as generating test cases from requirements, optimizing test suites, summarizing test results, or identifying risky changes and defects.
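To illustrate the kind of test script the requirements above refer to, here is a minimal PyTest-style sketch; the `discount_price` function is a hypothetical stand-in for real application code, not part of any Rakuten codebase.

```python
import pytest


def discount_price(price: float, percent: float) -> float:
    """Apply a percentage discount; rejects invalid percentages.

    Hypothetical application code used only to demonstrate the tests below.
    """
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# Parametrized cases cover the typical, boundary, and full-discount paths.
@pytest.mark.parametrize(
    "price, percent, expected",
    [
        (100.0, 0, 100.0),   # no discount
        (100.0, 25, 75.0),   # typical case
        (19.99, 100, 0.0),   # full discount
    ],
)
def test_discount_price(price, percent, expected):
    assert discount_price(price, percent) == expected


def test_invalid_percent_rejected():
    # Error-path check: out-of-range input must raise.
    with pytest.raises(ValueError):
        discount_price(50.0, 150)
```

Candidates comfortable reading, writing, and debugging tests at this level — and curious about AI-assisted generation of such cases — fit the profile described above.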
Responsibilities
- Design, develop, and maintain AI-augmented test automation frameworks and services to accelerate and scale automated testing across the product development organization.
- Champion the adoption of AI-powered tools and intelligent workflows, becoming an agent of change for automation maturity and continuous improvement.
- Participate in code and architectural reviews, with an emphasis on identifying opportunities to embed AI-driven quality checks and predictive insights into the development lifecycle.
- Stay current with emerging AI technologies and trends in software testing, continuously seeking opportunities to apply machine learning, large language models (LLMs), and generative AI to improve quality practices.
- Collaborate with the Platform Services team to integrate AI-enabled testing capabilities into core developer tooling and infrastructure, enhancing usability and developer productivity.
- Contribute to training and enablement materials, helping engineering teams understand how to apply AI tools for test generation, defect triage, impact analysis, and risk-based testing.
Other
- Eagerness to grow, with examples (school, internships, side projects) of trying new tools, adopting automation, or exploring AI in coding or testing contexts.
- Basic understanding of software testing concepts and QA methodologies, including test case creation, defect reporting, and regression testing.
- Strong problem-solving and debugging skills, with curiosity about how AI tools can assist in identifying defects and improving test coverage.
- Collaborative mindset and effective communication skills, with the ability to work on cross-functional teams and learn from senior engineers.
- Demonstrated ability to learn and apply new tools quickly, including AI-enabled platforms or APIs, and enthusiasm for exploring emerging technologies in quality automation.