Artificial intelligence and cybercrime: implications for individuals and the healthcare sector (The British Journal of Psychiatry, Cambridge Core)
Published online by Cambridge University Press:
23 September 2024

The malicious use of artificial intelligence is growing rapidly, creating major security threats for individuals and the healthcare sector. Individuals with mental illness may be especially vulnerable. Healthcare provider data are a prime target for cybercriminals. There is a need to improve cybersecurity to detect and prevent cyberattacks against individuals and the healthcare sector, including the use of artificial intelligence predictive tools.

The malicious use of artificial intelligence has created new types of security threat for both individuals and the healthcare sector. Although artificial intelligence is a fundamental technology of our age, it has enabled the creation of new types of large-scale cyberthreat, and artificial intelligence-based cybercrime has grown rapidly worldwide. Medical data are a prime target for cybercriminals, given the high value of stolen data. Of further concern to psychiatry is that both patients with mental illness and mental health data may be especially vulnerable to artificial intelligence-based security threats. This editorial will discuss the common types of artificial intelligence-based cyberthreat faced by both individuals and the healthcare sector, and address potential ways to mitigate risks, including the use of artificial intelligence predictive tools.

Artificial intelligence has transformed cybercrime. Cybercriminals are using artificial intelligence to enhance attacks so that they are harder for antivirus software to detect, to create new types of attack based on synthetic data (deepfakes) and to automate the creation of large-scale attacks. Artificial intelligence has changed the scope of fraudulent schemes: today, a single scammer can use generative artificial intelligence, such as ChatGPT, to run hundreds of thousands of scams 24 h a day from anywhere in the world (Schneier and Raghavan).1 Generative artificial intelligence has made it easier for criminals with only limited programming skills or technical knowledge.
The phishing text produced by large language models (LLMs) is more sophisticated and free of the spelling and grammatical errors that marked the spam emails of the past (Schneier and Raghavan).1 Criminals can also purchase data from data brokers to help customise phishing attacks. Additionally, there are malicious LLMs (WormGPT and FraudGPT) developed specifically for criminals and advertised on underground forums. Generative artificial intelligence is also used by hackers to crack passwords, with most being cracked in less than a minute.

Individuals face many types of artificial intelligence-enabled cyberattack. Cybercriminals use artificial intelligence voice-cloning technology to impersonate people and convince family members to send money.2 Artificial intelligence-based facial recognition has magnified the ability to recognise individuals. Artificial intelligence is used to manipulate photos and videos to create explicit content for extortion schemes. Many artificial intelligence-based phishing attacks lead victims to a site that harvests passwords and other personal credentials. Artificial intelligence can be used to replicate writing styles and impersonate users, such that fake messages are difficult to distinguish from genuine communications. Artificial intelligence applications can track user activities to learn habits and preferences. Cybercriminals are enticing people to invest in fraudulent schemes by claiming that artificial intelligence is involved.

The use of artificial intelligence to scam individuals is of particular concern for psychiatry, as mental illness may increase vulnerability to cybercrime. Factors that may increase vulnerability to online deception include severe mental illness, emotional instability, poorer short-term memory, cognitive impairment and a lack of technical skills. Additionally, impulsivity and overconfidence in information technology knowledge and skills may increase susceptibility to cybercrime. Psychiatrists should be aware that artificial intelligence is
increasing the quantity and quality of cybercrime against individuals, and should be able to recommend trusted sources of consumer education on cybersecurity to their patients. Individuals of all ages, backgrounds and levels of technological sophistication need to learn best practices to protect themselves from cyberattacks.

Businesses also face many types of artificial intelligence-enabled cyberattack. Artificial intelligence-based attacks against businesses to steal money and critical information include impersonation and targeted phishing attacks against employees, more effective ransomware, distributed denial-of-service attacks and attacks that hijack cloud infrastructure.3 In healthcare, artificial intelligence is increasingly used for a variety of purposes, including customer service, administrative tasks, diagnosis in radiology and pathology, ongoing patient monitoring, drug discovery and medical research. Although artificial intelligence systems are increasingly used for critical applications, some in management may not realise that these systems are vulnerable to cyberattack. Adversarial attacks, which involve input data intentionally crafted to cause misclassification by an artificial intelligence model, are of major concern across many domains, including medicine (Finlayson, Bowers, Ito, Zittrain, Beam and Kohane).4 Incorrect classification from an adversarial attack may cause false diagnostic predictions, as demonstrated with medical imaging.

An adversarial attack may degrade the performance of a healthcare system collecting data from multiple connected smart medical devices, and may target susceptible locations in electronic medical records (EMRs). In other studies, the targets of adversarial attacks include an electrocardiogram, an implantable cardioverter defibrillator and an electroencephalogram. Adversarial attacks may also occur against speech-based emotion recognition systems. Medical image models may be more vulnerable to adversarial attacks than natural
image models, owing to specific characteristics of medical image data and models.

There is a critical need for robust security to protect personal and business data against artificial intelligence cybercrime. Consumers need increased awareness of cybercrime and ongoing training in how to protect against it. Mental illness may increase vulnerability to online fraud, and victimisation may worsen the symptoms of mental illness. Characteristics associated with increased vulnerability to online fraud include impulsivity, older age, sensation-seeking, cognitive impairment, willingness to trust and a lack of technical knowledge. The effects of cybercrime on victims are long-lasting, with impacts that are psychological as well as financial.

Business also faces new challenges, as security approaches must be able to detect artificial intelligence attacks that an organisation has often never seen before. The increasing use of artificial intelligence for critical applications in businesses, including healthcare, creates greater incentives for cybercriminals to attack these algorithms and increases the negative consequences of a successful attack. At an enterprise level, businesses, including healthcare, need to take a robust, multifaceted approach that includes artificial intelligence-driven cybersecurity solutions to enhance the more traditional human and technology approaches to combating cybercrime. Artificial intelligence can provide continuous monitoring, recognise and diagnose threats in real time, help identify false positives and improve access control management. Artificial intelligence can detect pattern changes in the overall data ecosystem with a level of sensitivity that would not be achieved by humans. Artificial intelligence should be involved in the many technological challenges of providing cybersecurity in increasingly complex and interconnected environments. Artificial intelligence cybersecurity tools may detect attacks on remote patient monitoring devices, which now routinely
involve artificial intelligence (Vijayakumar, Pradeep, Balasundaram and Prusty).5 Artificial intelligence tools are needed to provide a comprehensive approach to defending against artificial intelligence cyberattacks. The use of artificial intelligence cybersecurity tools is of particular importance in healthcare, given the potential for system failures to cause severe harm to individuals.

The limitations of using artificial intelligence for cybersecurity, including details of the methods involved in artificial intelligence-based detection and prediction of threats and malicious activities, are not discussed here. The financial investment required for artificial intelligence-based cybersecurity and the costs of acquiring huge volumes of training data are not estimated. Other types of cybercrime are not discussed, including risks to artificial intelligence algorithms. Policy proposals for the governance of artificial intelligence algorithms, and issues of transparency, auditing and accountability, are not suggested.

Artificial intelligence-based security threats are a serious concern for both individuals and the healthcare sector. Individuals with mental illness may have an increased susceptibility to artificial intelligence-enabled cyberattacks. Healthcare records, including mental health records, are a prime target. Numerous incidents, including adversarial attacks, have occurred against individuals and the healthcare system. As both dependence on technology and the sophistication of cybercriminals have increased, there is a need to augment cybersecurity with artificial intelligence to detect and prevent cyberattacks. Artificial intelligence-based cybersecurity is an important and necessary addition to cybersecurity across domains, including healthcare.

Supplementary material is available online at https://doi.org/10.1192/bjp.2024.77

Data availability is not applicable to this article, as no new data were created or analysed in this study.

S.M. and T.G. wrote the initial draft. All authors reviewed and
approved the final manuscript.

This work received no specific grant from any funding agency, commercial or not-for-profit sectors.

J.R.G. is a member of the BJPsych editorial board and did not take part in the review or decision-making process of this paper.

Additional references are included in the Supplementary material, available at https://doi.org/10.1192/bjp.2024.77