By: Dan Fein, Director of Email Security Products, Darktrace

With remote working here to stay and cyberattacks in Canada continuing to escalate, no organization is immune to cyber threats. Attackers are constantly finding new and innovative avenues into digital businesses, and increasingly, adversaries are targeting the soft underbelly of an organization: the supply chain. With increased supply chain complexity comes reduced visibility and, in many cases, a limited understanding of the impact an attack could cause. Given this complexity, human error inside our software supply chains can result in extremely expensive breaches.
Consider the impact an intentional cyberattack on the supply chain could have. A simple way for attackers to gain a foothold is through malicious phishing emails sent from the genuine accounts of compromised, trusted partners and suppliers. Supply chain account takeover, also known as Vendor Email Compromise, is one of the most pressing issues in email security: these emails regularly bypass traditional, rules-based tools, which cannot distinguish them from legitimate activity, and a compromised account can even be used to send seemingly legitimate invoices to the supplier's partners.
Many organizations are looking for new ways to detect malicious activity and are leaning on AI to distinguish what rules-based tools cannot. The following are real-world email threats that companies often encounter involving suppliers, all of which can be detected in their earliest stages by self-learning AI technology.
Detection of compromise in trusted third parties
Cyberattacks are often initiated by hackers who compromise third-party suppliers in order to send malicious emails to their customers. Even though the emails come from a known source, AI technology is able to recognize the subtle behavioral shifts indicative of threatening activity. My company, Darktrace, uses unsupervised machine learning to detect unusual email activity for any user, trusted or never-before-seen, and autonomously neutralizes the threat in real time.
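To make the general idea concrete, here is a minimal sketch of unsupervised anomaly detection over per-email metadata. It is not a description of Darktrace's implementation; the features (send hour, recipient count, link count, attachment count), the sample data, and the model choice are illustrative assumptions only.

```python
# Illustrative sketch only: unsupervised anomaly detection on email metadata.
# Feature choices, data, and thresholds are assumptions, not any vendor's method.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-email features for one sender's historical traffic:
# [hour_sent, number_of_recipients, number_of_links, attachment_count]
history = np.array([
    [9, 1, 0, 1], [10, 2, 1, 0], [14, 1, 0, 0],
    [11, 1, 1, 1], [15, 3, 0, 0], [9, 1, 0, 2],
])

# Fit an unsupervised model of "normal" behaviour for this sender.
model = IsolationForest(contamination=0.1, random_state=0).fit(history)

# A new email sent at 3 a.m. to 40 recipients with 5 links looks anomalous,
# even though it arrives from a known, trusted account.
new_email = np.array([[3, 40, 5, 0]])
if model.predict(new_email)[0] == -1:
    print("Anomalous email: hold for review")
else:
    print("Email consistent with sender's normal behaviour")
```

The point of the sketch is that no rule or signature is needed: the model learns what is normal for each sender and flags deviations from that baseline.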
Detection of malicious links sent via email
In many cases, hackers will provide a link directing the user to a legitimate file storage site that is being used to host a malicious payload. Cybercriminals commonly use this tactic to bypass legacy security gateways, whose reputation checks fail against file storage links because the domains themselves are legitimate. AI technology, however, can recognize when these domains are unusual in the context of normal activity, and block these malicious links before they reach the recipient.
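As a rough illustration of this contextual check, and not a description of any product's internals, the sketch below flags links whose domains a given sender has never used before, even when the domain itself has a clean reputation. All names and data are hypothetical.

```python
# Illustrative sketch only: flag links whose domains are new for a given sender,
# even when the domain (e.g. a file storage service) passes reputation checks.
from urllib.parse import urlparse

# Hypothetical baseline of link domains previously seen from each sender.
sender_link_history = {
    "accounts@trusted-supplier.example": {"trusted-supplier.example", "docs.example.com"},
}

def unusual_links(sender: str, urls: list[str]) -> list[str]:
    """Return links pointing at domains this sender has never used before."""
    known = sender_link_history.get(sender, set())
    return [u for u in urls if urlparse(u).hostname not in known]

# A reputation check would pass this file-storage link; the contextual check does not.
suspicious = unusual_links(
    "accounts@trusted-supplier.example",
    ["https://filestorage.example.net/invoice_2941.zip"],
)
print(suspicious)  # ['https://filestorage.example.net/invoice_2941.zip']
```

A production system would weigh many more signals, but the design choice is the same: judge the link against the sender's normal behavior rather than against a global blocklist.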
An inability to distinguish between legitimate and illegitimate sources
Cybercriminals look to focus on the weakest link within a system, which is, in most cases, human vulnerability. While the C-suite is often well protected by vigilant security teams, lower-level employees rarely have such aggressive safeguards and become extremely lucrative targets for attackers. When attackers fail to compromise the accounts of senior individuals within an organization outright, they may instead spoof those accounts, or the accounts of trusted third-party suppliers, and prey on lower-level employees.
Extremely subtle shifts in email communication can only be detected with a sophisticated, unsupervised, self-learning approach to security. Email environments urgently need security technology that assesses each individual email in the wider context of the organization, the recipient, and past interactions with the sender, to stop anomalous emails sent with malicious intent, no matter how legitimate they seem. Only unsupervised, self-learning AI technology can do this in real time and prevent a cyberattack from posing a widescale threat to an entire organization.