The use of popular AI (artificial intelligence) software has exploded into everyday life in the last few years. From casual use, to creating art, to finding information, to performing tasks at work, and even companionship, for better or worse the rise of AI in our lives shows no sign of stopping. What about in the veterinary industry? Are vets using AI? What might the positives and negatives of AI in veterinary medicine be?
What Do We Mean By ‘AI’?
First, let’s clear up some terminology. AI, or artificial intelligence, technically refers to any computer system designed to complete tasks that normally require human intelligence, such as problem-solving, reasoning and decision-making. This foundation of modern AI has arguably been around since the earliest modern computers in the 1950s and 60s, when computers were used to play chess and draughts, prove mathematical theorems and even appear to talk. Modern AI has been a part of everyday life for decades via technology like online search engines, satellite navigation, facial recognition software, social media algorithms and language translation apps. Vets already use AI software such as practice website chatbots, AI diagnostic imaging and record management tools, and AI is used in laboratories to help with drug development. No doubt AI has already transformed the way we live our lives.
In the last few years, what most people colloquially refer to as AI is generative AI (genAI) such as ChatGPT, DeepSeek, Claude and other software, which exploded in popularity around 2022. GenAI is a form of AI trained on huge quantities of data (such as online text, images, videos and so on) to predict an appropriate response. In particular, many use Large Language Models (LLMs), which are trained to predict the next word in a piece of text, allowing them to respond to a human much as a human would. This is how popular genAIs are able to seem almost human when having a conversation. Other genAIs use similar models to create images, audio or videos. However, if they are fed inaccurate information, the output can be inaccurate too (consider famous examples where chatbots quickly learnt to repeat racist language after human trolls encouraged it). Furthermore, because genAI bases its responses on existing information, there are fears that models will increasingly learn from previously generated AI output, creating a loop of genAI ‘learning’ from itself and degrading the accuracy of its responses.
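To give a feel for what ‘predicting a response’ means, here is a toy sketch of next-word prediction in Python. It simply counts word pairs in a tiny invented corpus – real LLMs use neural networks trained on billions of documents – but the underlying idea of predicting a likely continuation from patterns in the training data is similar.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in a tiny invented "training corpus", then pick the most
# common follower. The corpus below is made up for this example.
corpus = "the cat sat on the mat . the cat ate the food .".split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- it follows 'the' twice, more than any other word
```

The sketch also hints at the weaknesses discussed above: the model can only ever echo patterns in whatever data it was fed, with no understanding of what the words mean.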
Is AI Intelligent?
It is important to know that (as far as we know – more on this in a bit), genAI, and AI in general, is not intelligent, sentient or aware in any human sense. It is simply a very complex algorithm rapidly reviewing vast quantities of data to put forward a response almost instantaneously, with no true understanding of what it is saying. Theoretically, it is no more ‘intelligent’ (though obviously far broader in capabilities) than the computer opponent of an online chess game. GenAIs are heavily limited by the information they have available (for example, some online genAIs do not have up-to-date information regarding current events), and by constraints set by their creators (for example, most genAIs are instructed not to respond with anything illegal, explicit, racist, sexist, etc., though these constraints are sometimes bypassed). This means genAIs can and do make mistakes, can say something is true when it is not, or leave information out. However, given that intelligence in humans is already complicated and hard to define, determining the ‘intelligence’ of an AI starts to fall into the scope of philosophy.
Some scientists believe humans may be able to create artificial general intelligence (AGI), or an AI that has true human-level intelligence. This might mean an AI could operate beyond the limits of the information given, learn and plan tasks, use imagination and even consider itself a sentient being (i.e. alive). Some scientists and experts have even argued that current genAIs or other AI technology are already intelligent and sentient, though the majority of their peers reject this hypothesis. Others argue that human-level intelligence is far too complex to ever be replicated artificially, and AGI is impossible. However, many scientists currently believe AGI will be achieved by the year 2100. The difficulty may lie in proving a machine is sentient and intelligent. Some even argue that a machine of human-level intelligence might hide its intelligence and ‘act dumb’.
GenAI, although popular and powerful, has many critics. The United Nations have even stated that, “Generative AI has enormous potential for good and evil at scale.” Some of its commonly cited drawbacks include: the replacement of human labour with AI; a drop in human attention span, critical thinking or even intelligence; the environmental costs of running AI servers; the spread of misinformation and disinformation; the use of AI by malicious actors to cause harm; and even the extinction of humanity!
What is the British Veterinary Association (BVA)’s Position on AI?
With the use of AI becoming so widespread so rapidly, it is likely that vets are already using it in their work. In fact, the BVA have evidence: a survey taken in 2025 showed that 21% of vets were using AI at work, and we would not be surprised if the number is higher by now. The most commonly reported uses were X-ray diagnostics and reporting (44%), laboratory diagnostics and reporting (27%), communication with clients (11%), and administrative work (7%). Many noted it saved them time, particularly for routine admin tasks and note-taking.
However, these vets were also worried about the risks of AI, including interpretation without context (83%), AI being used without manual checks (82%), the undermining of human skill (68%) and a lack of data protection (25%). Clearly, vets were considering the use of AI but had concerns.
Because of this, the BVA have put forward 8 principles veterinary staff should follow if they use AI:
- AI is a tool to support, not replace a vet
- Vets should understand the limits of AI and always check its responses
- Vets should be involved in the development and validation of AI
- Vets must identify bias in data and ensure it is used ethically
- Vets should ensure they remain up to date in the use of AI technology, and be confident using it
- Vets must ensure that data privacy is maintained, and owner consent is obtained
- Vets must oversee the use of AI and be responsible for any decisions that are made
- AI should be able to explain how its responses were made, and vets must understand these explanations
The BVA have also created a helpful pyramid indicating the risk level of certain tasks when using AI, with the lowest risk at the bottom and the highest at the top:
- Minimal risk: office work and marketing
- Moderate risk: clinical record analysis, research and data gathering, and communications
- High risk: clinical decision making, public health measures, triage and diagnosis, and staff management
- Unacceptable risk: automatic diagnosis, treatment and personal data sharing
Taken together, the BVA clearly accept that the use of AI in veterinary medicine cannot be prevented, and instead actively encourage it for certain tasks. Importantly, they want vets to be aware of the power, limitations and advancements of AI, so the industry does not fall behind. However, they also want vets to be cognisant of the risks and to know that AI should never replace a skilled vet, nor be used as the sole method of diagnosing and treating animals – this could leave an animal’s life in the hands of unreliable software. They also reiterate that, whether AI is used or not, the vet remains responsible for any decisions and outcomes – i.e. reliance on AI would not be a valid excuse should an error be made.
What Are Some Examples Where a Vet Might Use GenAI?
Let’s consider some real-world examples where genAI could be used in veterinary medicine, based on the BVA risk pyramid. For each, we will point out the pros and cons of using genAI. To make things interesting, we have actually used a popular genAI model to generate real responses, to observe the sort of reply a vet might get.
To start with a minimal risk: marketing.
A vet wants to create some posts about their practice to advertise their services on social media. They simply paste the website into the genAI and ask it to create posts. From our observations, the replies all read well, appear accurate at first glance and would be appropriate to post online. The major advantage is the time saved, as in less than a minute marketing could be online and advertised to potential customers. The disadvantage is the need to check the replies – we would want to manually check every post to ensure it is accurate and up-to-date. Overall, this is an appropriate use of AI.
Second, a moderate risk: clinical record analysis.
A vet wants to check a patient’s records for every date they have been prescribed a certain drug. They copy and paste the clinical records directly into the genAI and ask it to create a list. Again, the AI we used appears to function well, with a complete list generated in seconds. Given the vast quantity of data, using AI to convert it to a more manageable form would allow a vet to then process it much quicker (though they may elect to then use AI again for further refinement). Again, double-checking that nothing has been missed would be important and could slow down the vet.
We would also be concerned that pasting raw data into an AI could breach data protection and client confidentiality if an animal or client could be identified. Although most AI services encrypt messages in transit, so they cannot be intercepted and read by malicious actors, once submitted, the companies often retain access to the chat, meaning the data could be used without permission. Overall, if the vet took the time to anonymise the data posted, it would be an appropriate use of AI, but would require manual checking after the fact too.
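As an illustration of the kind of anonymisation we mean, a very simple redaction pass might look like the sketch below. The names, record line and placeholders are all invented for this example, and a real practice would want a more thorough, validated approach (addresses, phone numbers and microchip numbers would also need redacting).

```python
import re

# Minimal sketch of anonymising a clinical note before pasting it into an
# external AI tool. The names and record format here are invented for
# illustration only.
def anonymise(note, client_name, patient_name):
    """Replace identifying names with neutral placeholders."""
    note = re.sub(re.escape(client_name), "[CLIENT]", note, flags=re.IGNORECASE)
    note = re.sub(re.escape(patient_name), "[PATIENT]", note, flags=re.IGNORECASE)
    return note

record = "12/03/2024: Rex seen for lameness. Mrs Smith reports Rex limping since Monday."
print(anonymise(record, "Mrs Smith", "Rex"))
# 12/03/2024: [PATIENT] seen for lameness. [CLIENT] reports [PATIENT] limping since Monday.
```

Even with a pass like this, a manual read-through before submission remains essential, since names can appear in unexpected forms (nicknames, typos, initials).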
Third, a high-risk question.
We asked the AI to triage three similar patients with histories that we might obtain from an owner on the phone: a 4-year-old sneezing cat, a 14-year-old cat with rapid breathing, and a 2-year-old cat with pale gums. The response here was less clear-cut, and was likely constrained by the limited information we gave it. The AI argued the 2-year-old cat with pale gums was the most urgent priority, given the paleness could indicate urgent bleeding or circulatory compromise. We would argue that the 14-year-old cat should receive priority, as it may be in respiratory distress due to heart failure. However, this is somewhat subjective and, of course, a real-life decision would be made based on more information following actual examination of the cats.
The point is that, with such limited information, a clinician relying on this AI response may prioritise the wrong patient. With a high risk of death in these patients, reliance on AI in these cases should be limited – remember, using AI would not be a valid excuse.
Finally, an unacceptable use case for genAI: a treatment plan.
Once more, we plugged a fake animal into a genAI model, asking what it would use to treat a 12-year-old Boxer dog with an anal gland infection. Somewhat unnervingly, the reply was, again, fairly realistic and detailed, specifying that an anal gland flush was sensible, that local antibiotics could be used, and systemic antibiotics only if appropriate, as well as listing appropriate drug choices. We saw no specific concerns with the plan, though we noted that when doses were asked for, some non-standard doses were recommended (though none were overtly inappropriate).
The use of AI here would again save time for a vet, but carries a high risk of the AI not having the full picture or giving answers that may not be appropriate for the specific patient. Although the response in this particular case was good, we agree with the BVA and would have strong reservations about relying on AI in this way.
Thoughts on the Use of AI in Veterinary Medicine
From our, admittedly limited, testing of AI software, what jumped out is… how scarily accurate AI actually is. Aside from minor nitpicks here and there, the responses the genAIs gave were all reasonable and likely wouldn’t have resulted in a problem if followed to the letter for our example patients, with the possible exception of the triaged cats. Furthermore, it seems likely that if more information were fed into the AI (say, the full clinical notes), the answers would be even more accurate and tailored to the patient. One could easily see how a vet relying 100% on AI may not have too many issues with their cases…
Until they do. If a vet relied solely on AI, inevitably mistakes would crop up, and unlike if used for marketing or finances, mistakes could mean significant risks to the health or life of an animal. And in our opinion, that risk is not justified. The BVA clearly agree, and we argue that AI should never be used in this way alone. Nothing can currently replace a vet with years of experience, a good rapport with a client, knowledge of the individual animal, and the problem-solving abilities to tackle a complicated case. Yes, perhaps AI might be used to help guide a clinician, but all answers should be considered in context of the case and double and triple checked before making a decision, something AI cannot (currently) do.
Final Thoughts
In the future, AI will continue to advance and become more powerful. Already the jump between today and a decade ago is staggering. It is not impossible to see a future where AI could become just as reliable as a human clinician in decision-making in veterinary medicine – from our limited study, they don’t seem to be too far off. And if scientists manage to develop AGI, as smart or more so than a human, could their responses realistically be considered inferior to a human’s? Could we even see the development of robo-vets?
We’re venturing slightly into the realm of science fiction here, but the point is that technology is advancing at a rapid pace, and vets must learn to use it in the right way to avoid limiting what they can do for their animals, while also understanding that all technology comes with caveats and must be treated carefully. The BVA are thankfully pre-empting this, but changes to legislation come slowly, so vets should always be cautious about how they use AI.
Sources:
Uptrend of AI technology in veterinary Medicine Today – Vet Times
BVA calls for open minded approach to AI use – BVA
Artificial Intelligence in the Veterinary Profession: Policy – BVA
1 in 5 vets is already using artificial intelligence in daily work – BVA
What is AI, how does it work and why are some people concerned about it? – BBC