Job Board

Get Jobs Tailored to Your Resume

Filtr uses AI to scan 1,000+ jobs and find postings that closely match your resume


Software Engineer

Nokia

Salary not specified
Nov 4, 2025
Remote, US

Nokia's Cloud and Network Services (CNS) is building out products and capabilities for their Autonomous Networks vision, creating a data fabric to feed intelligent decision-making for autonomous operations. The Data Engineering Architect for Autonomous Networks will be responsible for the data architecture and fabric services for the Autonomous Network Fabric (ANF), a petabyte-scale service.

Requirements

  • Deep expertise in data lakehouse tools (e.g., Iceberg) and cloud-native storage systems.
  • Deep understanding of cloud ecosystems (GCP, Azure, AWS) and CNCF technologies.
  • Proven hands-on experience with Neo4j or similar graph databases and Cypher query language.
  • Solid working knowledge of ClickHouse, PostgreSQL, BigQuery, Redshift, etc. for analytical and operational querying.
  • Strong programming skills in Python, Scala, or Java.
  • Proficiency with Kubernetes, Helm, CI/CD pipelines, and GitOps tooling (e.g., ArgoCD, Flux).
  • Understanding of data governance, schema evolution, version control, and secure data access models.
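To illustrate the schema evolution and governance skills listed above, here is a minimal Python sketch of a backward-compatibility check for additive schema changes. The rule encoded here (no removals, no retypes, new fields must be optional) is one common convention, not something specified by the posting, and the dict-based schema shape is an assumption for illustration:

```python
# Simplified backward-compatibility rule for schema evolution:
# a new schema version must not remove or retype existing fields,
# and any field it adds must be optional. Each schema is a dict
# mapping field name -> {"type": str, "required": bool}.

def is_backward_compatible(old: dict, new: dict) -> bool:
    for name, spec in old.items():
        if name not in new:
            return False          # removing a field is a breaking change
        if new[name]["type"] != spec["type"]:
            return False          # retyping a field is a breaking change
    for name, spec in new.items():
        if name not in old and spec["required"]:
            return False          # new fields must be optional
    return True
```

In practice a lakehouse format such as Iceberg enforces rules like these at the table level; this sketch only shows the shape of the check.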

Responsibilities

  • Design and implement robust, scalable multi-tenant Lakehouse infrastructure supporting structured, unstructured, and graph-based data.
  • Implement data lifecycle strategies for versioning, schema evolution, compaction, retention, and governance across multi-tenant datasets in a combined data mesh and data fabric architecture, supporting deployments across on-prem, SaaS, and hybrid cloud environments.
  • Storage optimization: use S3-compatible storage, Ceph, or HDFS to manage large datasets in Parquet or Avro format with intelligent partitioning and indexing strategies.
  • Implement and manage database replication, clustering, failover strategies, backup scheduling, and disaster recovery plans across selected data Lakehouse technologies such as BigQuery, ClickHouse, PostgreSQL, Neo4j, Redshift, etc.
  • Performance tuning and capacity planning: monitor workload patterns, optimize queries, index strategies, and storage utilization for current and projected data growth.
  • Design and implement reporting services to enable API-based integrations, UI dashboards, and ad hoc analytics.
  • Enforce security practices including data encryption, access control, GDPR compliance and audit logging.
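The storage-optimization bullet above mentions intelligent partitioning for large Parquet datasets. A minimal sketch of one such strategy, Hive-style partition paths in S3-compatible storage, is below; the bucket, table, and column names are hypothetical, and partitioning by tenant then date is just one reasonable ordering for multi-tenant workloads:

```python
from datetime import date

def partition_path(bucket: str, table: str, tenant: str, event_day: date) -> str:
    """Build a Hive-style partition prefix for one day of one tenant's data.

    Partitioning by tenant first, then by date, keeps per-tenant scans
    and time-range pruning cheap for engines that do partition pruning.
    """
    return (
        f"s3://{bucket}/{table}/"
        f"tenant={tenant}/"
        f"year={event_day.year}/month={event_day.month:02d}/day={event_day.day:02d}/"
    )
```

A query engine that prunes on `tenant=` and date partitions can then skip most objects for a single-tenant, single-day query.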

Other

  • 4-6 years of experience in data engineering, distributed systems, or cloud-native data platforms, including experience building SaaS products and products for on-prem delivery.
  • Experience with database replication, clustering, failover, backup/recovery processes, high availability, and multi-tenancy configurations.
  • Solid knowledge of API design, versioning, and backward compatibility principles.
  • Familiarity with observability tooling (Grafana, Prometheus, OpenTelemetry, etc.), GitOps (e.g., ArgoCD), and Kubernetes Operators/Crossplane.
  • Deep understanding of observability frameworks, SRE principles, and SLA/SLO enforcement.
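As a concrete example of the SLA/SLO enforcement mentioned above, an availability SLO is often translated into an error budget: the amount of downtime allowed before the SLO is breached. A minimal sketch of that arithmetic (the 30-day window is an assumption, not from the posting):

```python
def error_budget_minutes(slo: float, days: int = 30) -> float:
    """Minutes of allowed downtime in a window for a given availability SLO.

    For example, a 99.9% SLO over 30 days leaves
    30 * 24 * 60 * 0.001 = 43.2 minutes of error budget.
    """
    total_minutes = days * 24 * 60
    return total_minutes * (1.0 - slo)
```

SRE practice then tracks how fast this budget is being consumed and gates risky changes when it runs low.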