AI offers many benefits in healthcare, such as accelerating drug discovery and improving the analysis of medical images. However, the same AI systems that help healthcare can also be put to malicious use, including malware development. The Health Sector Cybersecurity Coordination Center (HC3) recently released an analyst note outlining how hackers could use AI tools for this purpose, and evidence is mounting that AI tools are already being misused.
AI systems have advanced to the point where they can generate human-like text with impressive fluency and creativity, including working computer code. One AI tool that has attracted widespread attention in recent weeks is ChatGPT, a chatbot developed by OpenAI that produces human-like text in response to prompts. By December, ChatGPT had more than 1 million users. The tool has been put to many uses, including writing poetry, songs, books, web articles, and email messages, and it has reportedly passed medical licensing and bar exams.
One of the major concerns is the use of AI tools to accelerate malware development. IBM researchers demonstrated this possibility by building an AI-based proof of concept for a new class of malware. The tool, called DeepLocker, combines highly targeted and evasive attack techniques that allow the malware to conceal its intent until it reaches a specific victim. The malicious payload is unleashed only when the AI model identifies the target through indicators such as geolocation, facial recognition, or voice recognition.
Hackers are already using OpenAI's tools to create malware. One hacker used OpenAI's tool to write a multi-layer Python encryption/decryption script that could be turned into ransomware, and another developed a data stealer capable of searching for, copying, compressing, and exfiltrating sensitive data. While AI systems offer many benefits, these tools will undoubtedly be used for malicious purposes. At present, the cybersecurity community has not developed mitigations or defenses against the use of these tools for malware development, and preventing their misuse may not be possible.