S3. Artificial Intelligence (AI) Use Policy

Policy Purpose

This policy sets out how clinicians at Trust Children and ReadyStepGrow may use artificial intelligence (AI) tools in their work. It ensures AI is used only in ways that are safe, family-centred, confidential, and consistent with professional standards and national regulations.

AI use also supports accessibility of information, ensuring therapy plans and communications are written in clear, strengths-based language that families can understand, while remaining consistent and funder-ready.

Policy Statement

Trust Children and ReadyStepGrow support the use of AI tools to improve clarity, efficiency, and accessibility of therapy documentation and communication. AI may be used to assist with tasks such as consolidating draft notes, synthesising assessment information, mapping goals, generating family-facing summaries, or drafting clinical and family communications.

AI will only be used within the approved team workspace, subject to the following conditions:

  • No identifying information may be entered into AI tools (see Confidentiality and Privacy below).
  • Clinicians remain fully accountable for the final content — AI is a drafting aid, not a decision-maker.
  • Clinicians must also follow the expectations of their own professional registration bodies and associations (e.g., the Ahpra national boards, such as the Occupational Therapy Board of Australia and the Psychology Board of Australia, and Speech Pathology Australia). This policy provides local guidance but does not replace professional obligations.

Policy Details

Approved Uses

AI may be used to support:

  • Consolidating de-identified draft notes into professional, ICF-aligned clinical language.
  • Synthesising structured assessment results and observations into a clear clinical summary.
  • Producing plain-language, strengths-based versions of therapy plans for families.
  • Synthesising content across the therapy plan template (e.g., consolidating assessment information, therapy goals, or linking Section 4a NDIS Plan Goals and Section 4b Therapy Goals into Section 4c Goal Mapping).
  • Drafting family communications such as emails, letters, and plain-English explanations of therapy processes.
  • Drafting or editing written clinical communication such as reports, summaries, and inter-professional correspondence.
  • Supporting administrative efficiency (e.g., generating structured summaries, improving language flow).

Workspace Boundaries

  • Only the approved team AI workspace may be used.
  • AI tools must not be used on personal accounts or unapproved public platforms.
  • AI access is for clinical work only — not for personal or unrelated purposes.

Confidentiality and Privacy

  • Inputs must never include identifying information (e.g., names, home or other addresses, dates of birth, service IDs, phone numbers, email addresses).
  • De-identified drafts should use generic terms such as “the child” or “caregiver”; for example, write “the child (aged 4)” rather than a name and date of birth.
  • The service ensures the approved AI workspace is configured so that no data entered is used to train external models or stored outside secure systems.
  • Clinicians must not use unapproved or personal AI accounts, as data entered into these may be stored, reused, or made public.

Critical Reflection and Accountability

AI outputs must always be critically reviewed for:

  • Clinical accuracy
  • Alignment with ICF and NDIS standards
  • Relevance to family priorities
  • Cultural safety and sensitivity

Clinicians remain accountable for final therapy documents, notes, and communications. AI does not replace clinical reasoning or professional judgment.

Supervisors will support reflective practice around AI use in supervision.

Informed Consent and Transparency

  • Families are informed that AI may be used to support drafting of therapy documentation and communication.
  • All information entered into AI is de-identified; no identifying personal data is ever entered.
  • Clinicians always review and finalise documents to ensure accuracy and clinical integrity.

Ethical and Legal Considerations

  • Clinicians must uphold their obligations under privacy law, health records legislation, and the Codes of Conduct of their registration bodies and professional associations (e.g., Ahpra, Speech Pathology Australia, the Australian Psychological Society).
  • Awareness of algorithmic bias is required — AI outputs must be sense-checked for equity and fairness.
  • AI tools used under this policy are not therapeutic devices and are not regulated by the TGA unless they meet the definition of a medical device. Clinicians remain responsible for ensuring fitness for purpose.
  • Clinicians must hold professional indemnity insurance that covers AI use.

Accountability

  • Clinicians are accountable for correct, confidential, and professional use of AI, and for critically reviewing all outputs.
  • Clinicians are also accountable to their professional registration bodies for compliance with codes of conduct and practice standards when using AI.
  • Supervisors are accountable for monitoring AI use in reflective practice and ensuring this policy is upheld.
  • The service is accountable for providing a safe, approved AI workspace and training clinicians in safe use.

How to Raise a Concern

If staff observe or suspect inappropriate AI use (e.g., personal accounts used, identifiable information entered, AI output not critically reviewed):

  1. Raise the concern directly with the clinician, if appropriate.
  2. If unresolved, escalate to the supervisor or Clinical Director.
  3. Document the concern in service records if relevant, using objective and minimal information.
  4. Use supervision or team meetings to reflect and problem-solve.
  5. Where concerns impact confidentiality or child safety, escalate immediately under the Child Safety Policy.

Related Policies and How They Connect


Document Control: v1.0 · Created: Sep 2025 · Review cycle: Quarterly in first 12 months (Jan, May, Aug 2026), then Annual (Jan 2027 onward) · Owner: △△D Pty Ltd

