Artificial intelligence in general practice: industry guidance for safe and smart use
Artificial intelligence (AI) is rapidly reshaping industries and redefining how people interact with information, make decisions, and navigate daily life. From the apps that anticipate our shopping lists to the tools that help us plan travel or manage finances, AI is becoming an invisible assistant woven into everyday routines.
As these technologies grow more sophisticated, they are also beginning to play a meaningful role in healthcare. In general practice, AI’s potential to streamline workflows, support clinical reasoning, and enhance patient communication is immense. With a growing ecosystem of conversational tools, digital scribes, and diagnostic supports now emerging, Australia’s national health bodies are offering guidance to help GPs and practice teams adopt these innovations safely, confidently and responsibly.
To help make sense of this rapidly evolving space, three key Australian health bodies have released practical resources for general practice teams. This article provides an overview of the following resources:
- The RACGP’s advice on conversational AI
- The TGA’s guidance on digital scribes
- The Australian Commission on Safety and Quality in Health Care’s recommendations for safe AI use in healthcare
RACGP: Conversational AI in general practice
The Royal Australian College of General Practitioners (RACGP) has developed guidance to help general practitioners and practice teams understand the opportunities and considerations associated with using ‘conversational AI’ tools in everyday practice. ‘Conversational AI’ refers to technologies (such as advanced chatbots and virtual assistants) that engage in natural, human-like dialogue. Examples include OpenAI’s ChatGPT, Google’s Gemini and Microsoft’s Copilot.
While conversational AI has great potential to revolutionise certain aspects of healthcare, the RACGP cautions practices to approach use of these tools carefully. As this technology is still evolving, many questions arise regarding patient safety, privacy and data security.
A key concern emphasised by the RACGP is the reliability of AI-generated information. These tools can sometimes produce answers that sound highly confident but are factually incorrect or misleading, including instances of ‘hallucinations’, where fabricated content appears plausible. In a clinical environment, such errors can have significant consequences if used without appropriate review.
AI tools also inherit biases from their training data, meaning they may inadvertently generate discriminatory, inappropriate or unsafe recommendations for certain patient groups. It’s also important for healthcare professionals to remember that these chatbots are not specifically trained on up-to-date medical evidence, and their reasoning processes are often opaque, making it hard for GPs to verify information.
The RACGP also emphasises the importance of protecting patient information. Sensitive patient data should never be entered into conversational AI tools. Practices should consider risks related to privacy, legal obligations, integration with clinical systems, and the use of AI-generated text within medical documents.
To support safe and responsible use of AI tools, the RACGP recommends that practice teams:
- Double-check AI outputs: GPs remain responsible for the quality of care, so AI-generated content must be verified before it is used in clinical settings.
- Utilise AI as a supplement, not a substitute: AI content should never be the sole basis for clinical decisions or advice.
- Involve patients in the decision to use AI tools: If an AI tool uses patient information or interacts with patients (such as a chatbot for an intake procedure), ensure patients are aware and agree.
- Be transparent and update policies: Inform patients about the use of AI and reflect this in the practice’s privacy policy.
- Ensure legal compliance: Follow privacy laws, professional standards, and any regulatory requirements.
To ensure your practice is making informed, confident decisions about AI use, be sure to read the RACGP’s comprehensive resource in full.
TGA: When digital scribes are regulated as medical devices
The Therapeutic Goods Administration (TGA), as part of the Department of Health, Disability and Ageing, has released guidance to help general practices understand when digital scribes fall under medical device regulation. Digital scribes are AI-powered tools that listen to consultations and generate clinical notes or summaries, and their classification depends on how they process information. Under the Therapeutic Goods Act 1989, an AI scribe that simply transcribes spoken dialogue into text (with no interpretation) is not considered a medical device. However, if the software analyses the conversation and provides medical information or recommendations (such as suggesting a diagnosis or treatment), it is classified as a medical device. Such tools must meet all regulatory requirements and be included on the Australian Register of Therapeutic Goods (ARTG) before they can be lawfully used in practice.
GPs are reminded that any tool performing diagnostic functions must be TGA-approved; otherwise, its use is unlawful. Even when an AI scribe is not regulated as a medical device, the practice remains responsible for complying with privacy and consumer laws, cybersecurity standards and the Australian Health Practitioner Regulation Agency’s (AHPRA) practitioner requirements.
As with other AI tools, transparency with patients is essential. Practices must inform patients if a digital scribe is being used, clearly explain its role during the consultation, outline any privacy protections in place and obtain informed consent prior to use. Ensuring patients understand how their information is captured and stored is key to maintaining trust in the consultation environment.
For a complete understanding of these requirements, practice teams are encouraged to read the TGA’s full guidance on digital scribes.
The Australian Commission on Safety and Quality in Health Care: Key principles for safe AI use
The Australian Commission on Safety and Quality in Health Care (the Commission) has published key principles to help clinicians navigate the safe and responsible use of AI in healthcare settings. The Commission recognises that AI can have significant benefits for patients but notes that it also poses new risks, particularly when evidence of safety and effectiveness may not yet match the pace of implementation.
To support practitioners and their teams, the Commission has developed three practical guidance documents that help clinicians better understand, evaluate and safely integrate AI tools into clinical care:
- AI Clinical Use Guide (2025)
- AI Safety Scenario – Interpretation of Medical Images (2025)
- AI Safety Scenario – Ambient Scribe (2025)
To help clinicians meet their professional and ethical obligations when working with AI tools, the Commission highlights several key principles across these documents:
- Evaluate and understand the tool: Review the evidence for the AI’s effectiveness and reliability, and be aware of its limitations before use.
- Be transparent and obtain consent: Inform patients about any use of AI in their care and ensure informed consent is obtained. Ensure patient data is protected and used in compliance with privacy laws.
- Guard against ‘automation bias’: Maintain clinical judgment and do not rely solely on AI tools and their outputs; always verify the AI’s suggestions.
- Monitor and adjust: Continually monitor the AI’s performance in practice and be prepared to adjust or stop using it if problems arise.
For a deeper understanding of these principles and practical examples, practice teams are encouraged to read the Commission’s full suite of AI guidance documents.
Key takeaways for general practice
Across the guidance released by the RACGP, the TGA and the Commission, a clear and consistent message emerges: AI offers valuable opportunities for general practice, but its use must be approached with care, vigilance and strong clinical oversight. GPs should ensure any AI tool is used within appropriate legal and regulatory boundaries, with transparency, privacy and patient consent maintained as core priorities.
The judgment and accountability of GPs and their clinical teams remain fundamental when incorporating these emerging technologies into practice. While AI can assist with information retrieval, documentation and administrative efficiency, it cannot replace clinical expertise, contextual reasoning or the need for ethical, patient-centred care. Clinical oversight is essential to ensure AI-generated content is appropriate, accurate and safe to use.
By engaging with the guidance provided by the RACGP, TGA and the Commission, including verifying AI outputs, involving patients in decisions about AI use, protecting privacy, and adhering to legal and regulatory requirements, general practices can begin integrating these tools in a way that enhances care while maintaining the high standards, safety and trust that patients expect.
