PracticeTek is looking to transform revenue cycle workflows by developing AI- and LLM-powered capabilities to automate coding, claims management, denials workflows, and patient financial interactions.
Requirements
- Hands-on experience with AWS AI/ML services (e.g., SageMaker, Bedrock, Lambda, Step Functions) and MLOps practices; deep specialization is not required, but you should be comfortable learning and extending existing patterns.
- Strong knowledge of LLMs and LLM-based systems, including RAG architectures, embeddings, vector stores, and semantic search; experience applying these to real business workflows.
- Proficiency in Python and modern ML/DL frameworks (e.g., PyTorch, scikit-learn); familiarity with transformer-based models and common open-source LLM tooling.
- Solid understanding of model deployment, containerization, and CI/CD concepts (Docker; EKS/ECS or similar orchestration is a plus).
- Practical experience with data engineering for ML (feature pipelines, schema versioning, data quality gates, batch/stream processing) in collaboration with data engineering teams.
- Experience working in or around regulated or privacy-sensitive environments (healthcare, fintech, or similar) and an appreciation of security, compliance, and governance constraints.
- Strong problem-solving and system design skills: able to architect solutions that are scalable, maintainable, and robust under real-world production load.
Responsibilities
- Lead the design and implementation of ML/AI services for revenue cycle management (RCM), including use cases such as claim triage, denials prediction, automated document understanding, financial insights, and workflow automation.
- Develop and maintain end-to-end ML pipelines (data preparation, feature engineering, training, evaluation, deployment, and monitoring) with reproducibility, scalability, and cost efficiency in mind.
- Build and optimize LLM-based workflows, including RAG, embeddings, vectorization, and semantic search, to deliver accurate, context-aware answers using practice, payer, and RCM data.
- Design and implement AWS-native AI pipelines leveraging services such as Lambda, Step Functions, SageMaker, Bedrock, and AgentCore, integrated into our broader platform architecture.
- Prototype and deploy ML/DL models for structured transformations, ranking, prediction, and workflow decisioning, with an emphasis on measurable business impact (e.g., days in A/R, denial rate, collection rate).
- Implement observability and monitoring for models in production, including data quality checks, drift detection, guardrails for LLMs, and feedback loops from users.
- Ensure all AI/ML solutions adhere to security, privacy, and compliance standards (including HIPAA and, where relevant, PCI), with appropriate handling of PHI and access controls.
Other
- Several years of professional experience (typically 5+ years) building and shipping ML/AI or data-intensive systems in production; experience in a lead, senior, or staff capacity is preferred, but formal “architect” or PhD-level research experience is not required.
- Effective communication and collaboration skills, with the ability to work closely with non-technical stakeholders such as RCM operations, finance, and clinical leaders.
- Collaborate with Product, RCM Operations, Data Engineering, and other engineering teams to translate business problems into ML/AI solutions and prioritize high-ROI RCM use cases.
- Mentor and lead engineers by providing technical guidance, code reviews, and best practices for ML/AI development, MLOps, and LLM/RAG patterns.
- Contribute to and maintain technical documentation, architectural diagrams, and playbooks for ML and LLM services to enable efficient onboarding and cross-team adoption.