One of the most daunting aspects of life as a cybersecurity pro is that the battle against hackers, phishers, and other shady characters is relentless. Just as new security tools and protocols are implemented, bad actors step up their attacks and find entirely new vulnerabilities. Now, with the rapid evolution and adoption of generative AI tools, both sides of the security battle are racing to add AI to their arsenals.
The current (and costly) threat landscape
When most people think about cybersecurity threats, a few categories typically spring to mind, most revolving around private data being made public. News of compromised username/password combinations, credit card data, and other personal information hits mainstream news outlets regularly. But take a peek at sites like DarkReading or TheHackerNews on any given day, and you’ll quickly realize that intrusions happen far more frequently than the headlines suggest.
According to a recent TechRepublic report, NCC Group’s Global Threat Intelligence team found that March 2023 marked the highest number of ransomware attacks ever recorded: a 91% increase over February and a 62% increase over the same period the previous year. Ransom demands are also on the rise, with the largest recorded payment in 2022 exceeding $8 million, roughly 30% more than the largest payment made in 2021. Costs associated with lawsuits, civil action, and settlements can push the ultimate cost of a breach far higher still. Add in the loss of consumer trust and the reputational risk to the brand, and you begin to get a picture of just how high the stakes can get.
In a related story making waves in the CISO community, a former cybersecurity chief was held personally liable for actions taken in the wake of a security breach. Although the behavior that led to prosecution raised questions about the moral and ethical handling of a specific breach, the ruling served as a shot across the bow for CISOs everywhere: failure to report data theft of any magnitude can carry significant consequences.
Increased threats, rising payouts, and new personal liability combine to create a perfect storm for security executives and the teams they lead. To help blunt the impact of these emerging threats, new AI-powered tools are being developed to help CISOs detect vulnerabilities and recognize suspicious patterns at a speed and scale that were unheard of just a year ago.
But, as with every advance adopted by white hats, black hats are embracing these technologies too.
AI to the rescue (maybe?)
As discussed in an earlier Springboard blog post on AI in the workplace, OpenAI’s ChatGPT has hit the mainstream, and it’s here to stay. Until the general release of ChatGPT, AI’s potential had largely outweighed its practical impact. It had undoubtedly proved its worth in academic circles and in the hands of data scientists at organizations large enough to afford the luxury, but ChatGPT changed all that, bringing AI to the masses.
Some of the earliest instances of bad actors using AI to improve and accelerate their attacks involve generative AI tools being used to create highly personalized content for phishing schemes. Just as ChatGPT is used to generate marketing content and personalized search results, hackers are using the tool to more effectively convince employees to divulge compromising information. Poorly worded emails rife with typos and even more poorly formatted GIFs are giving way to highly personalized text messages that could give even the most skeptical recipient pause, all thanks to commercially available AI.
Another, potentially more damaging, AI-related challenge may not be hacker-related at all. Instead, the threat comes from within, in the form of employees unintentionally disclosing private or sensitive information as they use ChatGPT and similar tools in their day-to-day work. One large hardware manufacturer has already reported instances of proprietary source code being entered into the OpenAI platform, prompting the company to ban generative AI tools altogether.
This relatively new source of security concern, dubbed “conversational AI leak,” introduces yet another layer of security and privacy issues to be addressed at a corporate level. Banning OpenAI’s tools or providing alternatives hosted in a more private environment may help, but with AI and large language model (LLM) technology becoming more deeply integrated into other platforms and more widely available, including on mobile BYOD devices, these responses may ultimately prove to be half measures.
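One of those partial measures is putting a policy-aware front end between employees and external AI services. The sketch below is only an illustration of the idea, assuming a Python-based internal proxy; the `screen_prompt` helper and its keyword patterns are hypothetical, and a production deployment would lean on dedicated data loss prevention tooling rather than a handful of regular expressions:

```python
import re

# Minimal, hypothetical sketch of a prompt-screening gate an organization
# might place in front of an external generative AI service. The patterns
# below are illustrative only; real deployments would rely on dedicated
# data loss prevention (DLP) tooling and policy review.
SENSITIVE_PATTERNS = [
    r"(?i)\bconfidential\b",
    r"(?i)\bproprietary\b",
    r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----",
    r"(?i)\b(?:api[_-]?key|secret[_-]?key|password)\s*[:=]",
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns) for a prompt headed to an AI tool."""
    matches = [p for p in SENSITIVE_PATTERNS if re.search(p, prompt)]
    return (not matches, matches)

if __name__ == "__main__":
    allowed, matches = screen_prompt("Summarize this CONFIDENTIAL Q3 roadmap...")
    print("allowed" if allowed else f"blocked, matched: {matches}")
```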
Ultimately, effectively safeguarding the organization comes down to an approach that is equal parts technological and human, particularly when it comes to malware. Three key ways to mitigate human-related security risks include:
- Ensuring key personnel have a deep understanding of data & security-related topics
Most enterprise-sized organizations already have some level of cybersecurity awareness training in place, but it often consists of simplistic (and dated) modules within a Learning Management System (LMS). Such content may be acceptable for general awareness, but certain critical functions in the business, such as marketing, business operations, and other data-heavy practices, should have access to more in-depth learning experiences. Human-led classes, for example, combine courseware and coaching, giving employees access to mentors who are thought leaders in the space and can keep them current on tools, trends, and threats. (Springboard specializes in this kind of content. If you need a hand, click here and one of our consultants will set up some time to chat.)
- Establishing and defining AI-related “rules of the road” with key stakeholders
Similar to the corporate policies created in the early days of social media, working with business-line and operational leaders to develop “common sense” guidelines for the use of AI in the workplace will help ground employees in what is acceptable, what isn’t, and when to come to leadership for guidance. Most employees, for example, may not understand that content fed into ChatGPT and similar tools not only trains the AI to deliver better results but also becomes part of a broader pool of data. Sharing a simple “how ChatGPT works” overview can help individuals understand which use cases are and aren’t appropriate for the tool.
- Emphasizing that AI supplements, not replaces, human security experts
Within a cybersecurity setting, AI can be an incredibly powerful tool for identifying patterns and anomalies across broad swaths of data. That alone empowers CISOs and their teams to identify, address, and circumvent attacks more quickly and at a greater scale than ever before.
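To make that pattern-spotting claim concrete, here is a minimal sketch, assuming scikit-learn is available, of flagging an anomalous login event with an isolation forest. The feature set, event volumes, and contamination rate are illustrative assumptions, not a description of any specific security product:

```python
# Minimal sketch of the pattern-anomaly detection described above, using
# scikit-learn's IsolationForest on synthetic login-event features.
# The features, volumes, and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" events: [hour_of_day, failed_logins, megabytes_transferred]
normal_events = np.column_stack([
    rng.normal(13, 3, 1000),   # activity clustered around business hours
    rng.poisson(1, 1000),      # the occasional failed login
    rng.normal(50, 15, 1000),  # typical data-transfer volume
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_events)

# A 3 a.m. event with a dozen failed logins and a 900 MB transfer
suspicious_event = np.array([[3, 12, 900]])
print(model.predict(suspicious_event))  # -1 flags an anomaly, 1 an inlier
```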
AI can also help close the cybersecurity skills gap prevalent in so many industries. The number of open cybersecurity roles worldwide is estimated at between 1.8 million and 3.5 million. Demand for these skills far outpaces supply, so as CISOs and companies ramp up new ways to blunt the impact of this talent shortfall, they also need innovative ways to accelerate closing the skills gap.
One of the most effective ways to make this happen is reskilling and redeploying existing employees: moving them from parts of the business where demand isn’t as great and providing them with new skills and career opportunities. Microsoft, Google, and other tech giants have initiated such programs, and many other enterprise-sized organizations have created internal “capability academies” to develop and hone cybersecurity skills among existing employees.
Since you’re here…
Springboard for Business grows businesses by empowering leaders and their teams with the critical thinking, data, and technology skills central to the future of work. Companies like Amazon, Walmart, HP, JPMorgan Chase, and Visa have partnered with Springboard for Business to upskill and reskill employees around the world. Click here to learn more.