As businesses increasingly adopt AI for process automation, concerns are rising about protecting sensitive data from cyber attacks. AI can be used to manipulate data, but it can equally be used to defend it.
First, we look at how AI is used to carry out cyber attacks. This is being done in various ways, including data poisoning, Generative Adversarial Networks, and bot manipulation.
Data Poisoning
Data is the fuel on which ML runs. In a data-poisoning attack, unscrupulous actors tamper with the training dataset so that the resulting model behaves erroneously, manipulating the data in whatever way suits their modus operandi. For example, the dataset could be altered so that spam emails are labelled as safe. Alternatively, attackers can corrupt the data even before it enters the AI training pipeline. Stringent guidelines need to be in place to address such security concerns.
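The spam-filter example above can be sketched in code. The snippet below is a minimal, illustrative simulation of a label-flipping poisoning attack on a toy email dataset; the `poison_labels` helper and the sample emails are hypothetical, not taken from any real attack.

```python
import random

def poison_labels(dataset, target_label, new_label, fraction, seed=0):
    """Flip a fraction of `target_label` examples to `new_label`.

    This simulates label-flipping data poisoning: a spam filter
    trained on the poisoned dataset learns to mark spam as safe.
    """
    rng = random.Random(seed)
    poisoned = []
    for text, label in dataset:
        if label == target_label and rng.random() < fraction:
            label = new_label  # attacker rewrites the ground truth
        poisoned.append((text, label))
    return poisoned

emails = [("win a free prize", "spam"), ("meeting at 10", "ham"),
          ("cheap pills now", "spam"), ("lunch tomorrow?", "ham")]

# With fraction=1.0 every spam example is relabelled as safe ("ham").
poisoned = poison_labels(emails, "spam", "ham", fraction=1.0)
```

A model fitted to `poisoned` would see no spam at all, which is exactly why integrity checks on training data matter.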
Generative Adversarial Networks
A Generative Adversarial Network pits two neural networks against each other: one generates content while the other tries to spot the fakes. Together, they learn to create content good enough to pass as original. GANs are vulnerable to misuse: they can produce natural-looking human faces and fake identities to fool facial recognition, help crack passwords, evade malware detection, and divert attention from the actual attacks by mimicking regular traffic patterns. As malicious uses increase, ML algorithms have to be made stronger and smarter at identifying such generated content.
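The two-player game described above can be illustrated with a deliberately tiny toy, not a real GAN: here the "generator" has a single parameter `mu`, the "discriminator" scores a sample by how close it is to the statistics of real data, and the generator adjusts `mu` in whichever direction fools the discriminator more. All names and numbers are illustrative assumptions.

```python
import random

random.seed(0)
REAL_MEAN = 5.0  # the "real" data distribution the generator must imitate

def real_sample():
    return random.gauss(REAL_MEAN, 1.0)

def discriminator(x, estimated_real_mean):
    # Probability-like score that x is real: higher when x sits
    # closer to the statistics of the real data.
    return 1.0 / (1.0 + abs(x - estimated_real_mean))

mu = 0.0   # the generator's single parameter
lr = 0.5
for _ in range(200):
    # The discriminator "trains" by re-estimating the real statistics.
    est_mean = sum(real_sample() for _ in range(16)) / 16
    fake = random.gauss(mu, 0.5)  # the generator's forgery
    # The generator nudges mu toward whichever direction scores higher.
    if discriminator(fake + lr, est_mean) > discriminator(fake - lr, est_mean):
        mu += lr
    else:
        mu -= lr
# After training, mu sits near REAL_MEAN: forgeries resemble real data.
```

Real GANs replace the single parameter with a deep network and the nudging with gradient descent, but the adversarial dynamic is the same, which is why the fakes end up statistically hard to distinguish from genuine data.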
Bot Manipulation
As discussed above, AI algorithms that are already trained to make decisions can also be manipulated into making wrong or bad ones. A recent attack on cryptocurrency trading is a good example: unscrupulous actors worked out how the trading bots behaved and then used bots of their own to manipulate the algorithm. As algorithms are trained to be more intelligent, they also become increasingly capable of making bad decisions when misled.
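The trading-bot manipulation can be sketched with a toy example. The naive momentum rule below, and the price sequences, are hypothetical; they simply show how an attacker who knows a bot's decision rule can feed it inputs that trigger bad decisions.

```python
def momentum_bot(prices):
    """A naive momentum-trading bot: buy whenever the price ticks up.

    Returns the indices at which the bot buys. An attacker who knows
    this rule can paint a few artificial up-ticks to lure the bot into
    buying, then sell into its purchases.
    """
    return [i for i in range(1, len(prices)) if prices[i] > prices[i - 1]]

organic = [100, 99, 98, 97]          # falling market: the bot stays out
spoofed = [100, 101, 102, 103, 90]   # attacker-painted up-ticks, then a dump

buys_organic = momentum_bot(organic)  # no buys
buys_spoofed = momentum_bot(spoofed)  # lured into buying before the dump
```

The defence is the mirror image of the attack: the decision rule must be hardened against inputs crafted specifically to exploit it.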
Likewise, AI can also be used to stop these cyber attacks, through intrusion detection, tracing the dark web, and multi-entity response.
Intrusion Detection
Usually, intrusion detection works by matching against previously detected intruders and known malicious signatures. With the help of machine learning, intrusion detection becomes possible even for hitherto unrecognized patterns. Deep learning is capable of learning from unstructured data drawn from heterogeneous environments.
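Detecting "hitherto unrecognized patterns" can be illustrated with a minimal statistical sketch: learn a baseline of normal behaviour (here, a hypothetical requests-per-minute metric) and flag anything far outside it, even if that exact pattern was never seen before. Real systems use far richer features and models, such as deep autoencoders.

```python
import statistics

def fit_baseline(samples):
    """Learn normal behaviour from benign traffic (e.g. requests/min)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, std, z_threshold=3.0):
    """Flag traffic far outside the learned baseline.

    No signature of a known attacker is needed: any sufficiently
    unusual value is reported, including never-seen patterns.
    """
    return abs(value - mean) > z_threshold * std

baseline = [98, 102, 100, 97, 103, 99, 101, 100]
mean, std = fit_baseline(baseline)

is_anomalous(1000, mean, std)  # sudden traffic burst: flagged
is_anomalous(101, mean, std)   # within normal variation: not flagged
```

This anomaly-based approach complements signature matching: signatures catch known attacks precisely, while the learned baseline catches novel ones.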
Tracing the Dark Web
The dark web refers to Internet content that requires specific configurations, software, and so on to access, and it is a hub for illicit activities, including cyber threats. ML is used in two ways to keep tabs on the dark web: a) identifying threats, to stay up to date with the nature of attacks, and b) identifying organizational information, which can reveal whether company assets such as source code are being misused. Both help in responding to attacks more quickly. ML also gathers insights into the chaotic patterns in which hackers change IP addresses and other details to remain undetected.
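The second use, spotting organizational information in scraped dark-web content, can be sketched as a simple watchlist match. The asset names and the sample page below are entirely hypothetical; commercial monitoring services match against much larger fingerprints (code snippets, credential dumps, domains).

```python
import re

# Hypothetical watchlist of organizational assets to monitor for.
WATCHLIST = {"acme-internal-api-key", "acme.corp", "projectfalcon"}

def find_asset_mentions(scraped_text):
    """Return watchlist assets mentioned in a scraped dark-web page."""
    tokens = set(re.findall(r"[\w.\-]+", scraped_text.lower()))
    return sorted(WATCHLIST & tokens)

page = "selling ACME-internal-api-key dumps, contact on projectfalcon channel"
hits = find_asset_mentions(page)  # both leaked assets are detected
```

Each hit is a signal that a company asset may be circulating, which is what enables the quicker response mentioned above.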
Multi-entity Response
Machine learning has enabled an intelligent threat response that deals with threats quickly and effectively. As threat-detection results come in, responses are driven by ML algorithms, usually based on user recommendations. Depending on the nature of the threat, AI can block its source in an automated manner, or sometimes even send out false signals to gather more information about the threat or the attacker. Multi-entity response makes it possible to handle a far greater volume of threats.
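The block-or-deceive choice described above can be sketched as a small response policy. The confidence thresholds, field names, and actions here are illustrative assumptions, not a real product's playbook.

```python
def choose_response(threat):
    """Pick an automated action from a threat-detection result.

    High-confidence detections are blocked outright; mid-confidence
    ones receive false signals (a decoy) so more can be learned about
    the attacker; the rest are escalated to a human analyst.
    """
    if threat["confidence"] > 0.9:
        return {"action": "block", "target": threat["source_ip"]}
    if threat["confidence"] > 0.6:
        return {"action": "decoy", "target": threat["source_ip"]}
    return {"action": "alert_analyst", "target": threat["source_ip"]}

response = choose_response({"source_ip": "203.0.113.7", "confidence": 0.95})
```

Because each decision is automated, a policy like this can be applied across every detected threat at once, which is what makes responding to a large volume of threats tractable.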