Ola Williams, Country Manager of Microsoft Nigeria, has said that over the past few years, AI has completely changed the battleground for both cybercriminals and defenders. While nefarious actors have found increasingly inventive ways to put AI to use, new research shows that AI is also transforming the capabilities of security teams, turning them into ‘super defenders’ who are faster and more effective than ever before.
In fact, the latest edition of Microsoft’s Cyber Signals research shows that, regardless of their expertise level, security analysts are around 44 percent more accurate and 26 percent faster when using Copilot for Security. This is good news for IT teams at organisations across the continent who are up against increasingly insidious threats.
Deepfakes alone increased tenfold over the past year, with the Sumsub Identity Fraud Report showing that the highest numbers of attacks were recorded in African countries such as South Africa and Nigeria.
“We’ve seen how these attacks, when successful, can have drastic financial implications for unsuspecting businesses. Just recently an employee at a multinational firm was scammed into paying $25 million to a cybercriminal who used deepfake technology to pose as a coworker during a video conference call,” Williams said.
The Cyber Signals report warns that these kinds of attacks are only going to become more sophisticated as AI evolves social engineering tactics.
This is a particular concern for businesses operating in Africa, which remains a global cybercrime hotspot. While Nigeria and South Africa estimate annual losses to cybercrime of around $500 million and R2.2 billion respectively, Kenya experienced its highest-ever number of cyberattacks last year, recording a total of 860 million. What’s more, understanding of deepfakes and how they operate is limited: a KnowBe4 survey of hundreds of employees across the continent revealed that 74 percent of participants were easily manipulated by a deepfake, believing the communication was authentic.
Fortunately, AI can also be used to help companies disrupt fraud attempts. In fact, Microsoft records around 2.5 billion cloud-based, AI-driven detections every day.
AI-powered defence tactics can take multiple forms, such as AI-enabled threat detection to spot changes in how resources on the network are used or behavioural analytics to detect risky sign-ins and anomalous behaviour.
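To illustrate the behavioural-analytics idea in its simplest form, the sketch below flags a sign-in whose hour of day deviates sharply from a user's historical baseline. This is a minimal, hypothetical example using a basic z-score test; the report does not describe any specific algorithm, and production systems use far richer signals (device, location, velocity) and learned models.

```python
from statistics import mean, stdev

def is_anomalous(sign_in_hours, new_hour, threshold=3.0):
    """Flag a sign-in whose hour deviates strongly from a user's baseline.

    sign_in_hours: list of past sign-in hours (0-23) for one user.
    new_hour: the hour of the sign-in being evaluated.
    threshold: z-score above which the sign-in is considered risky.
    """
    mu = mean(sign_in_hours)
    sigma = stdev(sign_in_hours)
    if sigma == 0:
        # No variation in the baseline: anything different is anomalous.
        return new_hour != mu
    return abs(new_hour - mu) / sigma > threshold

# A user who normally signs in during business hours.
baseline = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]
print(is_anomalous(baseline, 3))   # True: a 03:00 sign-in looks risky
print(is_anomalous(baseline, 9))   # False: a typical sign-in
```

Real behavioural-analytics engines replace this single feature with many, but the core principle is the same: learn what normal looks like per user, then surface deviations for review rather than blocking outright.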
The use of AI assistants which are integrated into internal engineering and operations infrastructure can also play a significant role in helping to prevent incidents that could impact operations.
It’s critical, however, that these tools be used in conjunction with both a Zero Trust model and continued employee education and public awareness campaigns, which are needed to help combat social engineering attacks that prey on human error.
The number of phishing attacks detected across African countries increased significantly last year, and more than half of people surveyed in countries such as South Africa, Nigeria, Kenya and Morocco said they generally trust emails from people they know. With AI in the hands of threat actors, there has also been an influx of flawlessly written emails that eliminate the obvious language and grammatical errors which often reveal phishing attempts, making these attacks harder to detect.
History, however, has taught us that prevention is key to combating all cyberthreats, whether traditional or AI-enabled. Beyond the use of tools like Copilot to enhance security posture, Microsoft’s Cyber Signals report offers four additional recommendations for local businesses looking to better defend themselves against the backdrop of a rapidly evolving cybersecurity landscape.
Adopt a Zero Trust approach
The key is to ensure the organisation’s data remains private and controlled from end to end. Conditional access policies can provide clear, self-deploying guidance to strengthen the organisation’s security posture, and will automatically protect tenants based on risk signals, licensing, and usage. These policies are customisable and will adapt to the changing cyberthreat landscape.
Enabling multifactor authentication for all users, especially for administrator functions, can also reduce the risk of account takeover by more than 99 percent.
Drive awareness among employees
Aside from educating employees to recognise phishing emails and social engineering attacks, IT leaders can proactively share and amplify their organisations’ policies on the use and risks of AI. This includes specifying which designated AI tools are approved for enterprise and providing points of contact for access and information. Proactive communications can help keep employees informed and empowered, while reducing their risk of bringing unmanaged AI into contact with enterprise IT assets.
Apply vendor AI controls and continually evaluate access controls
Through clear and open practices, IT leaders should assess all areas where AI can come into contact with their organisation’s data, including through third-party partners and suppliers. What’s more, anytime an enterprise introduces AI, the security team should assess the relevant vendors’ built-in features to ascertain what access the AI has and to inform the employees and teams using the technology. This will help to foster secure and compliant AI adoption. It’s also a good idea to bring cyber risk stakeholders from across an organisation together to determine whether AI employee use cases and policies are adequate, or whether they must change as objectives and learnings evolve.
Protect against prompt injections
Finally, it’s important to implement strict input validation for user-provided prompts to AI. Context-aware filtering and output encoding can help prevent prompt manipulation. Cyber risk leaders should also regularly update and fine-tune large language models (LLMs) to improve the models’ understanding of malicious inputs and edge cases. This includes monitoring and logging LLM interactions to detect and analyse potential prompt injection attempts.
As we look to secure the future, we must ensure that we balance preparing securely for AI and leveraging its benefits, because AI has the power to elevate human potential and solve some of our most serious challenges. While a more secure future with AI will require fundamental advances in software engineering, it will also require us to better understand the ways in which AI is fundamentally altering the battlefield for everyone. Implementing these practices can help make sure we’re never compromised by ‘bringing a knife to a gun fight’.