PointClickCare is seeking a Principal Data Engineer to enhance and implement batch and real-time data solutions, mentor team members, and deliver on business and technical objectives in an ambiguous, fast-evolving environment. The role offers the chance to shape the future of the company's data ecosystem by driving innovation that impacts the entire organization.
Requirements
- Principal Data Engineer with at least 10 years of professional experience in software or data engineering, including a minimum of 4 years focused on streaming and real-time data systems
- Proven experience driving technical direction and mentoring engineers while delivering complex, high-scale solutions as a hands-on contributor
- Deep expertise in streaming and real-time data technologies, including frameworks such as Apache Kafka, Flink, and Spark Streaming
- Strong understanding of event-driven architectures and distributed systems, with hands-on experience implementing resilient, low-latency pipelines
- Practical experience with cloud platforms (AWS, Azure, or GCP) and containerized deployments for data workloads
- Fluency in data quality practices and CI/CD integration, including schema management, automated testing, and validation frameworks (e.g., dbt, Great Expectations)
- Operational excellence in observability, with experience implementing metrics, logging, tracing, and alerting for data pipelines using modern tools
Responsibilities
- Lead and guide the design and implementation of scalable streaming data pipelines
- Engineer and optimize real-time data solutions using frameworks such as Apache Kafka, Flink, and Spark Streaming
- Collaborate cross-functionally with product, analytics, and AI teams to ensure data is a strategic asset
- Advance ongoing modernization efforts, deepening adoption of event-driven architectures and cloud-native technologies
- Drive adoption of best practices in data governance, observability, and performance tuning for streaming workloads
- Embed data quality in processing pipelines: define schema contracts, implement transformation tests and data assertions, enforce backward-compatible schema evolution, and automate checks for freshness, completeness, and accuracy across batch and streaming paths before production deployment (see the first sketch after this list)
- Establish robust observability for data pipelines: implement metrics, logging, and distributed tracing for streaming jobs, define SLAs and SLOs for latency and throughput, and integrate alerting and dashboards to enable proactive monitoring and rapid incident response (see the second sketch after this list)
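
To illustrate the kind of in-pipeline checks the data-quality responsibility describes, here is a minimal Python sketch of record-level assertions. It assumes a hypothetical event schema with event_id, created_at, and amount fields; the field names, the five-minute freshness threshold, and the validation rules are illustrative assumptions, not details from this posting.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical schema contract: these field names and thresholds are
# illustrative assumptions, not taken from the job description.
REQUIRED_FIELDS = {"event_id", "created_at", "amount"}
MAX_STALENESS = timedelta(minutes=5)  # freshness threshold for streaming records

def validate_record(record: dict) -> list[str]:
    """Return a list of assertion failures for one incoming record."""
    failures = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:  # completeness check: every contract field must be present
        failures.append(f"missing fields: {sorted(missing)}")
        return failures
    created_at = datetime.fromisoformat(record["created_at"])
    if datetime.now(timezone.utc) - created_at > MAX_STALENESS:
        failures.append("record is stale (freshness check failed)")
    if not isinstance(record["amount"], (int, float)) or record["amount"] < 0:
        failures.append("amount must be a non-negative number (accuracy check)")
    return failures

# Example: a conforming record passes; a stale or malformed one would be
# flagged before it reaches downstream consumers.
record = {
    "event_id": "abc-123",
    "created_at": datetime.now(timezone.utc).isoformat(),
    "amount": 42.0,
}
assert validate_record(record) == []
```

In a real pipeline, checks like these would typically be expressed through a validation framework such as Great Expectations or dbt tests (both named in the Requirements) rather than hand-rolled.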
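
Similarly, a minimal sketch of the observability responsibility: instrumenting a stream-processing step with metrics and an SLO check. It uses the prometheus_client library as one common choice (the posting does not name a specific tool); the metric names, the 500 ms latency SLO, and the port are hypothetical.

```python
import logging
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metric names and SLO; illustrative assumptions only.
RECORDS_PROCESSED = Counter(
    "pipeline_records_processed_total", "Records successfully processed"
)
PROCESSING_LATENCY = Histogram(
    "pipeline_processing_latency_seconds", "Per-record processing latency"
)
LATENCY_SLO_SECONDS = 0.5

log = logging.getLogger("pipeline")

def process_record(record: dict) -> None:
    start = time.monotonic()
    # ... transformation logic would go here ...
    elapsed = time.monotonic() - start
    PROCESSING_LATENCY.observe(elapsed)  # feeds latency dashboards
    RECORDS_PROCESSED.inc()              # feeds throughput dashboards
    if elapsed > LATENCY_SLO_SECONDS:    # SLO breach, surfaced for alerting
        log.warning(
            "latency SLO breached: %.3fs for %s", elapsed, record.get("event_id")
        )

if __name__ == "__main__":
    start_http_server(8000)  # expose /metrics for a scraper to collect
    process_record({"event_id": "abc-123"})
```

In practice the exposed /metrics endpoint would be scraped continuously, and an SLO breach would fire an alerting rule rather than just a log line.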
Other
- For Remote Roles: If this role is remote, there will be in-office events that require travel to and from the Mississauga and/or Salt Lake City office. These include, but are not limited to, onboarding, team events, and semi-annual and annual team meetings.
- For Hybrid Roles: If this role is hybrid, you will be expected to reside within commuting distance of the office/location specified in the job listing. This includes, but is not limited to, weekly, bi-weekly, or monthly events in the office with your specific team. This is a requirement for this role.
- Collaborative, adventurous, and passionate
- Strong collaboration and communication skills, with the ability to influence stakeholders and evangelize modern data practices within your team and across the organization
- Comfortable leveraging AI tools to accelerate development