What are AI ethics?
AI ethics—sometimes called "codes of AI ethics"—are standards and best practices to support the development and deployment of responsible and ethical AI. They're meant not only to deter egregious misdeeds, like using AI to commit crimes, but also to guard against subtler yet pernicious problems like bias and data misuse.
But because AI comes in so many different forms, the practical application of AI ethics doesn't necessarily hinge on a single definition, or one all-encompassing set of industry standards. The fact that AI and ML are, in the grand scheme of things, still relatively new technologies also contributes to the lack of authoritative consensus on what AI ethics are and are not.
Prominent AI ethics frameworks
The United Nations Educational, Scientific and Cultural Organization (UNESCO) provides an effective general explanation of AI ethics in the "Recommendation on the Ethics of Artificial Intelligence" framework, which the agency adopted in November 2021. UNESCO classifies the concept as "a dynamic basis for the normative evaluation and guidance of AI technologies, referring to human dignity, well-being and the prevention of harm … rooted in the ethics of science and technology."
The Asilomar AI Principles, developed in 2017 by the Future of Life Institute, are another well-known early framework for AI ethics. These principles focus prominently on standards for AI research initiatives—safety, privacy, transparency, and alignment with human rights and values, among others.
Why ethics matter in AI
When organizations use AI, they aren't doing so in a vacuum. For example, a machine learning algorithm in a retail enterprise's e-commerce recommendation engine may look perfectly effective on paper or in a proof of concept. But if that algorithm contains an intentional or unintentional bias that causes the engine to recommend products based on racial, ethnic, gendered, or sexuality-based stereotypes, the company behind it cannot ethically write this off as a cost of doing business. The model is unethical in practice and must be either corrected or retired.
The situation described above is hypothetical. But it's based on numerous examples of how enterprises have definitively or allegedly used unethical AI technology, or enacted potentially untoward practices in the course of gathering data for AI tools and projects. All of the companies in question received almost-immediate blowback for such actions. While the fallout didn't cripple these organizations financially, there was often significant reputational damage. Based on what many consider to be established AI ethical principles, the following projects were the antithesis of responsible AI:
- A major healthcare services company's machine learning algorithm used to determine caregiving priorities was revealed—by a 2019 Harvard study—to be biased against Black patients, raising major ethical questions about the organization's overall practices.
- A credit card released as a joint venture between a tech giant and one of the world's biggest banks took criticism for alleged gender bias, because its algorithm apparently assigned higher credit limits to men than women. While a government investigation uncovered no criminal wrongdoing, regulators were highly critical of both organizations' "customer service and transparency."
- AI-based apps were pivotal to the unauthorized data mining campaigns at the center of the Cambridge Analytica scandal. The controversy ultimately caused that U.K.-based analytics company to disband in 2018.
Bottom-line disadvantages of unethical AI
Bias, one of the biggest ethical risks surrounding AI, leads to analysis that is objectively compromised. The underlying data may be perfectly accurate, but if bias is introduced at any point in the pipeline, analytics operations may discriminate against specific customers and the resulting insights will be skewed.
Consider the healthcare services organization mentioned above. Its algorithm used patients' annual health costs as a predictor of hospitalization risk and didn't use race as a variable, so the researchers behind it didn't intend to introduce bias. But they didn't account for factors affecting how much patients could spend on healthcare, namely poverty, which disproportionately affected Black patients. When Harvard's study examined the algorithm's recommendations, it found white patients were more often considered "high-risk" and automatically enrolled in certain supplementary health services, even though Black patients had 26% more chronic health conditions than their white counterparts.
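To see how this can happen, consider the minimal sketch below. It uses synthetic data and assumed numbers (the illness distribution, the size of the access gap) purely for illustration; it is not a reconstruction of the actual healthcare algorithm, only a demonstration of how a spending proxy can encode bias even when race is never a feature.

```python
# A minimal sketch (not the real algorithm) of proxy bias: the sensitive
# attribute is never a feature, yet it still shapes the risk score.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Two groups with identical underlying illness distributions...
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
illness = rng.poisson(3, n)              # true chronic-condition count

# ...but unequal access to care: group B spends less per unit of illness.
access = np.where(group == 1, 0.6, 1.0)  # assumed access gap, illustrative
spend = illness * access * 1_000 + rng.normal(0, 200, n)

# "Risk score" built on spending alone -- group is never a feature.
risk_score = spend / spend.max()
high_risk = risk_score > np.quantile(risk_score, 0.9)

# Among equally sick patients, group B is flagged far less often.
sick = illness >= 5
for g in (0, 1):
    mask = sick & (group == g)
    print(f"group {g}: {high_risk[mask].mean():.1%} of very sick "
          f"patients flagged high-risk")
```

Even though the model never sees group membership, the access gap flows through the spending proxy, so equally sick patients in the disadvantaged group are flagged far less often.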
Simply put, it becomes very difficult to trust any analysis produced by a biased AI algorithm or consider it a reliable basis for actionable insights. Therefore, AI ethics are critical to corporate responsibility and have a role to play in the bottom line.
Key benefits of ethical AI
1. Reliable analysis
AI and ML tools built without bias and according to sound ethical principles will accurately analyze whatever data an enterprise ingests and produce reliable results. Reporting in this context gives data teams and business leaders a clear, comprehensive picture of the organization's reality. Whether that picture tells a good story, a bad one, or something in between, stakeholders can use it to chart the best strategic path forward.
2. Increased transparency and corporate responsibility
A 2020 study found that about 53% of people globally believe AI innovation has been good for society, but that's a narrow majority. The following year, 68% of the respondents to another report said they don't believe most enterprise AI will be ethical by 2030.
By being transparent to their customers about how AI and consumer data are used in products and services, organizations can allay some of the suspicion individuals may have. Increased consumer confidence, in turn, can bolster enterprises' reputation for responsible technology use.
3. Stronger regulatory compliance
Various jurisdictions already have strong data privacy laws in place. It's quite possible that some of these will be amended in the near future to account for data use or misuse in AI and ML projects, or that new, comprehensive AI regulations will be introduced. It's best to get ahead of this issue by adopting strong AI ethics and focusing on responsible innovation before doing so becomes a legal requirement everywhere.
Solutions to common ethical AI challenges
Enterprises looking to establish and maintain ethics in their AI projects may encounter some difficulties.
Overcoming the lack of standardization
Because there's no single uniform industry standard for AI ethics, and legislation is even less common, knowing where to start can be difficult. The businesses that have tried to create trustworthy AI have generally followed a strictly goal-oriented approach, whereas the academic view of AI ethics focuses on big-picture analysis of the technology's societal role. Neither approach is fully right or wrong, so enterprises will need to bridge the gap in whatever approach they take to ethical AI.
Understanding ideal AI for specific industries
Any AI ethics framework must be tailored to fit an organization's industry and specific needs. In verticals that thrive on the heavy use of AI—e.g., retail with its recommendation engines—this isn't necessarily easy. It requires the right technologies, approaches, and expertise to identify and mitigate bias or other ethical issues.
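What a concrete bias check looks like depends on the vertical, but a simple first pass often compares outcome rates across customer segments. The sketch below is illustrative only: the segment labels, the data structure, and the 0.8 threshold (borrowed from the "four-fifths rule" used in employment-discrimination analysis) are assumptions, not an industry standard for recommendation engines.

```python
# A minimal sketch of one common bias check: comparing how often a
# recommendation engine surfaces a product category across segments.
from collections import defaultdict

def selection_rates(recommendations):
    """recommendations: list of (segment, was_recommended) pairs."""
    shown = defaultdict(int)
    total = defaultdict(int)
    for segment, was_recommended in recommendations:
        total[segment] += 1
        shown[segment] += int(was_recommended)
    return {s: shown[s] / total[s] for s in total}

def disparate_impact(rates):
    """Ratio of the lowest segment rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data for a single product category.
audit = [("segment_a", True)] * 80 + [("segment_a", False)] * 20 \
      + [("segment_b", True)] * 55 + [("segment_b", False)] * 45

rates = selection_rates(audit)
ratio = disparate_impact(rates)
print(rates, f"ratio={ratio:.2f}", "review" if ratio < 0.8 else "ok")
```

A ratio well below 1.0 doesn't prove unethical behavior on its own, but it flags where deeper investigation, with the right domain expertise, is warranted.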
Tackling the explainability conundrum
AI isn't difficult to explain at a surface level, but the nuts and bolts quickly become complicated. This complexity means it's not only difficult to explain AI in detail to customers (for instance, if a malfunctioning algorithm somehow contributed to a data breach), it can also be hard for data teams and AI engineers to trace a problem back to its origin. Establishing a roadmap of algorithmic systems well before an AI project is implemented is essential to mitigating this risk.
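What such a roadmap might record is open to interpretation, but at minimum it should tie every deployed model to its inputs and its owner. The sketch below is a deliberately simple illustration: the field names, the JSON-lines log, and the register_model helper are assumptions rather than an established lineage standard, and production teams would typically reach for dedicated model-registry tooling.

```python
# A minimal sketch of a traceability record for a deployed model,
# appended to a JSON-lines log on every (re)training run.
import hashlib
import json
from datetime import datetime, timezone

def register_model(registry_path, name, version, training_data_file,
                   features, owner):
    """Append an audit record for a trained model to a JSON-lines log."""
    with open(training_data_file, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    record = {
        "model": name,
        "version": version,
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "training_data_sha256": data_hash,  # ties outputs to exact inputs
        "features": features,               # what the model can "see"
        "owner": owner,                     # who to ask when it misbehaves
    }
    with open(registry_path, "a") as log:
        log.write(json.dumps(record) + "\n")
    return record

# Usage (hypothetical file names):
# register_model("model_registry.jsonl", "recommender", "2.3.1",
#                "train_q2.csv", ["spend", "tenure", "region"], "data-team")
```

With records like these, tracing a suspect prediction back to the exact training data and the responsible team becomes a lookup rather than an archaeology project.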
Instilling ethics in enterprise AI
The few existing frameworks for AI ethics, like the UNESCO and Asilomar principles, are a good starting point for organizations looking to implement ethical AI. But they aren't ironclad. The wise course is to use these frameworks as general background and build atop them according to specific business requirements.
Get input from all stakeholders
Senior management and the C-suite should take a major role in forming ethical AI principles. They bring a useful big-picture perspective, and the mistakes other enterprises have made can serve as lessons learned: when developing an ethical AI strategy, C-level and technical staff alike should study others' failures to know what not to emulate. It's also critical to get input from all relevant stakeholders. The sales rep who benefits from a convenient lead-generation algorithm will have plenty to say, but so will the customer service agents who field complaints of biased practices, or the perception thereof, in the sales process.
Look at other industries
Enterprises in industries that don't regularly deal with ethical questions would be wise to look at those that do, like healthcare. Out of necessity, industry leaders in that field are well-versed in managing data and privacy concerns ethically. Because these issues are integral to AI ethics, healthcare professionals' approaches to them should be studied and followed where applicable.
Incentivize a focus on ethics
Many ethical violations occur when people think profitable ends will justify unethical means. For something like AI, where unethical actions can have massive consequences, incentivizing team members to follow ethical AI policies can be effective. This can involve direct individual compensation or allocating budgetary resources toward the development of ethical AI strategies.
Work with experts
Enterprises new to AI ethics may not want to treat them as DIY projects. Consider turning to an experienced partner like Teradata for proven expertise in developing ethical AI and data strategies that best suit the priorities of your organization.
To learn more about enterprises leveraging AI in an ethical manner, check out the case study on Teradata customer Chugai Pharmaceutical Ltd. The Japan-based drug developer used cutting-edge AI and a multi-cloud deployment alongside Vantage, Teradata's connected enterprise analytics platform, to accurately and securely manage clinical trial data.
Watch our AI webinar