The company is looking to optimize cellular network performance using AI-RAN systems aligned with 3GPP, AI-RAN, and O-RAN architectures.
Requirements
- Python
- PyTorch and TensorFlow
- ML concepts (deep learning, forecasting, anomaly detection, embeddings)
- Data preprocessing and ML pipelines
- Linux, Git, and standard development workflows
- Docker
- Kubernetes
- LLMs, LangChain, and vector databases for RAG workflows (a minimal retrieval sketch follows this list)
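To make the RAG requirement concrete, here is a minimal sketch of the retrieval step behind such workflows: documents are embedded into vectors, the query is embedded the same way, and the closest chunks are stitched into the prompt sent to an LLM. The `embed` function below is a placeholder for a real embedding model (for example one accessed through LangChain or a vector-database client), and the document texts and top-k value are illustrative assumptions, not part of the role description.

```python
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Placeholder embedding model: returns one unit-norm vector per text.
    A real pipeline would call an actual embedding model here."""
    rng = np.random.default_rng(len(" ".join(texts)))
    vecs = rng.normal(size=(len(texts), 384))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

docs = [
    "Cell 17 shows elevated PRB utilization during evening hours.",
    "Handover failure rate spiked after the last parameter change.",
    "Throughput per user is stable across the cluster.",
]
doc_vecs = embed(docs)                        # index the documents once

query = "Why did handovers degrade?"
query_vec = embed([query])[0]

scores = doc_vecs @ query_vec                 # cosine similarity (unit vectors)
top_k = np.argsort(scores)[::-1][:2]          # keep the 2 closest chunks
context = "\n".join(docs[i] for i in top_k)

# The retrieved context is placed into the prompt sent to the LLM.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

In practice, the placeholder `embed` and the in-memory similarity search would be replaced by a hosted embedding model and a vector database, but the retrieve-then-prompt structure stays the same.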
Responsibilities
- Design and prototype next-generation (5G/6G and beyond) AI-RAN systems aligned with 3GPP, AI-RAN, and O-RAN architectures.
- Apply machine learning techniques to optimize cellular network performance, including KPI analytics, anomaly detection, forecasting, and intelligent control (an illustrative sketch follows this list).
- Develop service-aware and network-application co-optimization solutions for advanced connectivity and edge intelligence use cases.
- Implement innovative algorithms and prototypes within cloud-native, GPU-accelerated, and AI-driven environments.
- Perform hands-on testing, validation, and performance benchmarking of AI models, pipelines, and cloud-native applications in lab environments.
- Document technical findings, including results, insights, diagrams, and contributions to demos, internal publications, and innovation disclosures.
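As an illustration of the KPI anomaly-detection work mentioned above, the following is a hedged sketch rather than a production recipe: a small PyTorch autoencoder is trained on windows of a synthetic, known-normal KPI trace and flags windows whose reconstruction error exceeds a threshold. The synthetic data, window size, network width, and threshold rule are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic KPI trace (e.g. cell throughput) with a daily pattern plus noise,
# and a short injected degradation to detect. Entirely illustrative data.
t = torch.arange(0, 2000, dtype=torch.float32)
kpi = 50 + 10 * torch.sin(2 * torch.pi * t / 288) + torch.randn_like(t)
kpi_faulty = kpi.clone()
kpi_faulty[1500:1520] -= 30                      # injected fault

mu, sigma = kpi[:1000].mean(), kpi[:1000].std()  # scale from normal data only

def windows(x, size=24):
    """Normalize a 1-D series and slice it into overlapping windows."""
    x = (x - mu) / sigma
    return torch.stack([x[i:i + size] for i in range(len(x) - size)])

train = windows(kpi[:1000])                      # train on normal behaviour only
test = windows(kpi_faulty)

# Tiny autoencoder: reconstruction error serves as the anomaly score.
model = nn.Sequential(nn.Linear(24, 8), nn.ReLU(), nn.Linear(8, 24))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(300):
    opt.zero_grad()
    loss = loss_fn(model(train), train)
    loss.backward()
    opt.step()

with torch.no_grad():
    err = ((model(test) - test) ** 2).mean(dim=1)

# Illustrative threshold: mean + 4 std of the error on a known-normal segment.
threshold = err[:1000].mean() + 4 * err[:1000].std()
flagged = (err > threshold).nonzero().flatten()
print("windows flagged as anomalous start at indices:", flagged[:10].tolist())
```

Real KPI pipelines would replace the synthetic trace with measured counters and tune the model and threshold, but the pattern of training on normal behaviour and alerting on reconstruction error carries over directly.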
Other
- Currently pursuing a bachelor’s or master’s degree in Computer Science, Software Engineering, Electrical Engineering, Data Science, AI/ML, or a related field.
- Strong analytical and problem-solving mindset
- Curiosity and willingness to learn new technologies
- Ability to work independently in a research-oriented environment
- Clear and concise communication skills