As generative AI becomes more popular, organizations must consider how to deploy it ethically. But what does the ethical deployment of AI look like? Does it involve reining in human-level intelligence? Preventing bias? Or both?
To assess how companies approach this topic, Deloitte recently polled 100 C-level executives at U.S. companies with between $100 million and $10 billion in annual revenue. The results indicated how business leaders incorporate ethics into their generative AI policies.
Top priorities for AI ethics

What ethical issues do these organizations find most important? Respondents prioritized the following in AI development and deployment:
Balancing innovation with regulation (62%).
Ensuring transparency in how data is collected and used (59%).
Addressing user and data privacy concerns (56%).
Ensuring transparency in how enterprise systems operate (55%).
Mitigating bias in algorithms, models, and data (52%).
Ensuring systems operate reliably and as intended (47%).

Organizations with higher revenues ($1 billion or more per year) were more likely than smaller businesses to state that their ethical frameworks and governance structures encourage technological innovation.
Unethical uses of AI include spreading misinformation, a particular concern during election seasons, and reinforcing bias and discrimination. Generative AI can replicate human biases unintentionally by reproducing patterns in its training data, or bad actors can use it to create biased content deliberately and at greater speed.
Threat actors can also exploit generative AI's speed to write convincing phishing emails at scale. Other potentially unethical use cases include AI making major decisions in warfare or law enforcement.
The U.S. government and major tech companies agreed to a voluntary commitment in September 2023 that set standards for disclosing the use of generative AI and the content made using it. The White House Office of Science and Technology Policy issued a blueprint for an AI Bill of Rights, which includes anti-discrimination efforts.
U.S. companies that use AI at certain scales and for high-risk tasks must report information to the Department of Commerce as of January 2024.
SEE: Get started with a template for an AI Ethics Policy.
“For any organization adopting AI, the technology presents both the potential for positive results and the risk of unintended outcomes,” said Beena Ammanath, executive director of the Global Deloitte AI Institute and Trustworthy AI leader at Deloitte, in an email to TechRepublic.
Who is making AI ethics decisions?

In 34% of cases, AI ethics decisions come from directors or higher titles. In 24% of cases, all professionals make AI decisions independently. In rarer cases, business or department leaders (17%), managers (12%), professionals with mandatory training or certifications (7%), or an AI review board (7%) make AI-related ethics decisions.
Large companies (those with $1 billion or more in annual revenue) were more likely than companies under that threshold to allow workers to make independent decisions about the use of AI.
Most executives surveyed (76%) said their organization conducts ethical AI training for its workforce, and 63% said it provides such training for the board of directors. Workers in the building phase (69%) and the pre-development phase (49%) receive ethical AI training less often.
“As organizations continue to explore opportunities with AI, it is encouraging to observe how governance frameworks have emerged in tandem to empower workforces to advance ethical outcomes and drive positive impact,” said Kwasi Mitchell, U.S. chief purpose & DEI officer at Deloitte. “By adopting procedures designed to promote responsibility and safeguard trust, leaders can establish a culture of integrity and innovation that enables them to effectively harness the power of AI, while also advancing equity and driving impact.”
Are organizations hiring and upskilling for AI ethics roles?

The following roles were hired or are part of hiring plans for the organizations surveyed:
AI researcher (59%).
Policy analyst (53%).
AI compliance manager (50%).
Data scientist (47%).
AI governance specialist (40%).
Data ethicist (34%).
AI ethicist (27%).

Many of those professionals (68%) came from internal training and upskilling programs. Fewer have used external sources such as traditional hiring or certification programs, and fewer still look to campus hires and collaboration with academic institutions.
“Ultimately, businesses should be confident that their technology can be trusted to protect the privacy, safety, and equitable treatment of its users, and is aligned with their values and expectations,” said Ammanath. “An effective approach to AI ethics should be based on the specific needs and values of each organization, and businesses that implement strategic ethical frameworks will often find that these systems support and drive innovation, rather than hinder it.”