
Ovum view


The use of AI assistants in healthcare is making alternative care models that improve patient engagement at home more relevant. However, AI assistants continue to struggle with fundamental issues around speech recognition, user identification, privacy, and data security. Designing a virtual care experience requires precision and a comprehensive understanding of the implications of malpractice for all parties involved, at all stages from software development to the delivery of care. Over time, AI assistants will evolve to become a key component of home-based healthcare, but until then, most will be limited to low-risk tasks that pose no threat to patients.

AI assistants are improving patient engagement at home

The design, implementation, and management of voice experiences for AI assistants such as Alexa and Google Assistant are key for companies interested in using voice as an interface for smart solutions. Amazon and Google are already helping several companies add voice controls to any connected solution that has a microphone and a speaker: Amazon offers the Alexa Voice Service and the Alexa Skills Kit, and Google has a similar program known as Actions on Google.

For the development of tailored voice experiences, a growing number of consultancies in the developer community now specialize in helping businesses design branded "skills" that are more complex and follow domain-specific execution protocols (such as those found in the healthcare sector). These skills are designed either directly by the consultancies or by the businesses themselves, using tools that allow voice-based home assistants to be deployed in more complex settings.
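As an illustrative sketch of how such a skill backend works: the Alexa Skills Kit delivers each user utterance to the skill's endpoint as a JSON request envelope and expects a JSON response envelope in return. The intent name, slot, and wording below are hypothetical and not taken from any real healthcare skill:

```python
def handle_request(event):
    """Route an Alexa Skills Kit request envelope to a response envelope.

    `event` follows the ASK request shape: a request type and, for an
    IntentRequest, the resolved intent name and slot values. The intent
    (BookAppointmentIntent) and its `day` slot are illustrative only.
    """
    request = event["request"]
    if request["type"] == "LaunchRequest":
        speech = "Welcome. How can I help you today?"
    elif (request["type"] == "IntentRequest"
          and request["intent"]["name"] == "BookAppointmentIntent"):
        day = request["intent"]["slots"]["day"]["value"]
        speech = f"Okay, I will request an appointment for {day}."
    else:
        speech = "Sorry, I did not understand that."

    # ASK expects a versioned response envelope carrying the output speech.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

In a real deployment this function would typically run as an AWS Lambda handler behind the skill's configured endpoint; the point here is only that the "skill" is ordinary request routing over a fixed JSON contract.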

As for the healthcare sector, more and more service providers are introducing innovative approaches to home healthcare and assisted living, driven by the need to reduce overall healthcare costs. They are backed by evidence that patient recovery can be faster at home than in hospital. Home AI assistants therefore present exciting new opportunities to deliver a range of sophisticated healthcare solutions in the home. For example, individuals with age-related illnesses or chronic health conditions living independently at home can use AI assistants to manage medication intake and schedule medical appointments. AI assistants can also educate patients and record health indicators for caregivers to analyze and act on.
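The medication-intake scenario reduces to simple scheduling logic on the skill's backend. The sketch below is a hypothetical illustration; the schedule format, drug names, and the `next_due_dose` function are assumptions, not part of any product:

```python
from datetime import time

def next_due_dose(schedule, now):
    """Return the first (time, medication) dose due at or after `now`.

    `schedule` is a list of (datetime.time, medication) pairs sorted by
    time of day. If no dose remains today, wrap around to the first dose
    of the next day.
    """
    for dose_time, medication in schedule:
        if dose_time >= now:
            return dose_time, medication
    # Nothing left today: the next dose is tomorrow's first one.
    return schedule[0]

# Hypothetical daily schedule for one patient.
schedule = [
    (time(8, 0), "metformin 500 mg"),
    (time(13, 0), "lisinopril 10 mg"),
    (time(20, 0), "metformin 500 mg"),
]
```

An assistant skill would read the result aloud as a reminder; the hard problems the article raises (confirming *who* asked, and whether the dose was actually taken) sit outside this simple lookup.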

Today's AI assistants are enabling healthcare providers and other health-related entities to expand the reach of medical treatments, but only in a complementary way. AI development is still at an experimental stage, meaning that AI assistants are not ready to provide comprehensive medical diagnoses; rather, they assist with relatively low-risk, non-life-threatening matters such as booking a doctor's appointment or playing music to relax a person with mental health issues. Most commercial AI assistants cannot yet reliably distinguish between users, let alone verify that a patient is taking the right pill.

The risk of exposing patients to incorrect treatments carries numerous implications. AI assistants can mislead patients and expose them to hazards if the voice experience is not designed and configured correctly; any vulnerability in the design of the virtual experience can cost lives. Software quality assurance and the implementation of safety standards are therefore mandatory steps in the development process. Defective software may be tolerable in many industries, but in healthcare there is no room for error.

From a legal standpoint, the use of AI in healthcare raises several questions. Physicians and other licensed healthcare professionals are usually held accountable for malpractice, but could smart technology also be blamed if something goes wrong? Are all parties involved equally liable? What rules should be established to reduce the risks? Is existing law suitable to cover AI home assistants? Similar questions arise in other AI application areas: for example, who is liable if a driverless car is involved in a road accident? The benefits of AI are evident, but some of the risks remain uncertain. The legal implications of using AI assistants still represent a gray area that needs further development and consensus among all parties involved.



Mariana Zamoszczyk, Senior Analyst, Consumer Services
