Lumen is seeking an intern to research, explore, and help secure emerging AI systems and technologies, supporting its Cyber Defense Services team in identifying and mitigating AI/ML security risks.
Requirements
- Coursework or research experience in machine learning, data analytics, or AI fundamentals.
- Exposure to Python, R, or Julia and comfort with scripting for data analysis or ML experimentation.
- Familiarity with basic cybersecurity principles such as threat modeling, secure coding, and vulnerability scanning.
- Demonstrated curiosity in AI/ML security, including adversarial attacks, model poisoning, prompt injection, and data leakage.
- Awareness of responsible AI frameworks (e.g., the NIST AI RMF and Google SAIF).
- Knowledge of large language models (LLMs), generative AI, and agentic AI architectures.
- Experience with or exposure to ML frameworks such as TensorFlow, PyTorch, or scikit-learn.
Responsibilities
- Support research and testing efforts related to AI/ML security threats such as adversarial attacks, model poisoning, prompt injection, and data leakage.
- Assist in developing and documenting methodologies for AI red-teaming and secure AI lifecycle practices.
- Contribute to the evaluation of AI/ML systems for security vulnerabilities and bias, supporting defensive and mitigation activities.
- Analyze datasets and model outputs for potential integrity and confidentiality concerns.
- Help maintain lab environments and datasets for ongoing AI adversarial testing.
- Prepare documentation, findings, and presentations for internal and external audiences.
- Collaborate with senior analysts, engineers, and researchers in the Cyber Defense Services team.
Other
- Interns must be available to work full time (40 hours/week) during the 10-week program.
- Program Dates: May 29 – August 7, 2026.
- This position is fully remote (work from home) within the continental US.
- US work authorization is required for this role.
- Program eligibility is contingent on the candidate’s commitment to the entire 10-week program.