Whether you love it, hate it, fear it, or embrace it, ChatGPT and other emerging artificial intelligence (AI) tools are set to play an ever-larger role in the day-to-day operations of businesses around the globe. So much so, in fact, that some companies are creating new executive-level roles that aim to place greater emphasis—and greater visibility—on the importance of data, analytics, and AI.
These functions have historically resided within the IT organization and ultimately reported to a Chief Technology Officer (CTO) or Chief Information Officer (CIO), but a recent Wall Street Journal article highlights how advances in technology, combined with an increasingly large appetite for data analysis and reporting, are necessitating a change.
Drawing on a discussion with Ryan Bulkoski, leader of the Data, Analytics & AI Practice at executive recruiting firm Heidrick & Struggles, the article reports that some 70% of Fortune 500 companies have dedicated executives at or just below the C-suite level who are solely focused on data. Citing a 2022 survey, Bulkoski notes that many of those roles have existed for five years or less.
Although having a dedicated data-centric function within the business may be a relatively new concept, using data and metrics for strategic discussions (beyond that of pure financial measures) has been part of the business landscape for decades.
Marketing organizations, in particular, have long held a “metrics first” mindset. As online advertising and email marketing began to rise in prominence in the mid-1990s, the role of the CMO changed dramatically. Impressions, opens, clicks, and conversions rapidly became part of the lexicon, impacting everything from budget allocations to creative execution.
As connection speeds increased and mobile devices hit the mainstream, location- and device-specific data was added to the mix. Social media added another layer of complexity, and with today’s push for increasing levels of privacy and personalization, marketers are forced to once again find new ways to leverage data creatively, responsibly, and, most of all, ethically.
That last concept—the ethical use of data—will become an increasingly important subject as every department forges ahead in the new world of commercially available AI. As businesses become infatuated with the promise and potential of tools like ChatGPT, and as business leaders urge teams to explore ways to use OpenAI's tools as efficiency-enhancing resources, reports of employees unwittingly leaking proprietary information are beginning to emerge.
Given how readily these tools ingest and learn from the data users feed them, some interesting legal questions are being raised. EU countries have been among the most forward-thinking on personal privacy, developing and enacting some of the strictest policies curtailing the commercial use of consumer data. The EU's General Data Protection Regulation (or as it's more commonly known, GDPR) took effect in 2018 and drove many of the privacy protections and advertising personalization controls you see on websites today. Now, some are asking whether the current iteration of OpenAI's tools violates the core tenets of GDPR—and if so, whether compliance measures will have an adverse effect on the tools' capabilities.
Questions like these will surely continue to emerge as tools like ChatGPT go mainstream, but as regulatory bodies wrestle with relatively tactical compliance topics, some of the leading voices in the technology community are raising even bigger concerns.
In late March 2023, more than 1,100 people, including Elon Musk and Apple co-founder Steve Wozniak, signed a somewhat controversial open letter asking “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4,” the latest iteration of OpenAI's core model. The signatories contend that AI is evolving so rapidly, and its promise is so immense, that competing AI labs have focused solely on commercial success, pushing to create increasingly powerful “digital minds” that even their creators cannot “understand, predict, or reliably control.”
It all sounds pretty ominous, doesn't it? Particularly because the concept of an all-powerful AI has been part of the cultural zeitgeist for decades. From the supercomputer HAL 9000 in 2001: A Space Odyssey to the devious Agent Smith in The Matrix, fear of AI has been a popular trope. Public statements like the open letter referenced above aren't exactly helping the general public trust these new tools, now that AI is available to anyone with a web browser and an internet connection.
As evidenced by the team that unintentionally shared proprietary data, a more imminent threat comes not from the AI behind the screen, but from the people behind the keyboard.
One of the hottest topics of debate within the human resources community, for example, is how and when to use AI and machine learning (ML) tools within the business—and, once those tools are in service, how to ensure they aren't used in ways that inadvertently introduce bias or conflict with Diversity, Equity, Inclusion, and Belonging (DEIB) initiatives; Environmental, Social, and Governance (ESG) efforts; or other policies.
It’s tempting to think an AI tool can be the ultimate safeguard against bias, but we must remember that humans are ultimately responsible both for creating the algorithms and for defining the data the AI/ML tool will leverage.
Tools such as Applicant Tracking Systems (ATS), for example, have been using AI/ML technology for years, but they don’t exactly have a sterling reputation. Originally designed to help high-volume recruiters sort through large volumes of resumes in search of the best candidates, ATS platforms—when improperly tuned—can unintentionally weed out great candidates at scale, essentially doing the wrong thing faster and more efficiently.
This phenomenon has given rise to the term “the paper ceiling”: the invisible barrier that keeps qualified candidates from getting a job if their resumes lack the right keywords. It has also created a cottage industry of consultants promising to help weary job seekers gain a competitive edge with the perfectly worded resume.
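To make that failure mode concrete, here's a minimal sketch, in Python, of the kind of naive exact-match screening an improperly tuned system can amount to. The keywords, threshold, and resume text are entirely hypothetical, and real ATS platforms are far more sophisticated; the point is simply that a candidate who describes the right skills in different words never reaches a human reviewer.

```python
# A deliberately naive keyword filter illustrating "doing the wrong
# thing faster": resumes are scored only on verbatim keyword matches,
# so qualified candidates phrased differently are rejected at scale.
# All keywords and text below are hypothetical examples.

REQUIRED_KEYWORDS = {"python", "sql", "machine learning"}

def score_resume(text: str) -> int:
    """Count how many required keywords appear verbatim in the resume."""
    lowered = text.lower()
    return sum(1 for kw in REQUIRED_KEYWORDS if kw in lowered)

def passes_screen(text: str, threshold: int = 2) -> bool:
    """Reject any resume matching fewer than `threshold` keywords."""
    return score_resume(text) >= threshold

# A strong candidate who writes "ML" instead of "machine learning"
# and "Postgres" instead of "SQL" is filtered out automatically.
resume = "Built ML pipelines in Python; ten years with Postgres."
print(passes_screen(resume))  # prints False: only "python" matched
```

The fix isn't abandoning automation but tuning it: synonym handling, human review of borderline scores, and regular audits of who is being screened out.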
Given these market dynamics, creating a new data- and AI-centric discipline within the organization seems like a great move. In addition to allowing the CIO, CTO, and CISO to focus on the myriad other pressing tech issues confronting every business today, there’s likely a public relations benefit to highlighting that a dedicated team is responsible for collecting, synthesizing, and utilizing data throughout the enterprise.
But if employers truly want to prepare their organizations for the future, data awareness and digital mastery can’t be relegated to the C-suite alone; the entire enterprise must be information-enabled. Accomplishing this starts with laying a solid foundation of data analysis, awareness, and education.
Many (if not most) employers already offer some form of employee training on data-related issues, but that training largely focuses on cybersecurity—mitigating the risks of phishing scams and similar attempts to gain access to privileged data. To achieve the knowledge and insight required to make the most of emerging data analysis and AI tools—and to avoid the pitfalls they introduce—human-led, domain-specific training is what will deliver the level of skills mastery it takes to elevate the business.
If you’re ready to explore how you can help your teams stay ahead of the curve on emerging tech and trends, check out our latest e-book, browse our data science courses, or connect with one of our consultants today.