AI in business
Large-scale business functions can hardly be carried out today without incorporating AI. AI helps automate processes, provides a variety of benefits, and helps reduce the rate of human error. But it does not work without ethics.
Advantages of AI
The advantages of artificial intelligence (AI) in the business environment can no longer be denied. Nonetheless, the technology remains controversial among consumers. Dystopian scenarios in which AI makes life-and-death decisions without human control are not currently to be feared. Nevertheless, the discussion about an ethical framework must be conducted.
You don’t have to use Terminator or other Sci-Fi references
Current AI applications already have a significant impact on consumers and therefore pose important questions for companies in terms of governance and ethics.
Use of AI in Businesses
AI in business is used, among other things, to automate decisions that were previously made by humans, for example by a clerk or an expert. Depending on the type of business, AI also helps speed up order intake and shipment.
Nowadays, most large enterprises have incorporated AI into their business models to help carry out large-scale tasks.
Credit scoring assesses a bank customer’s creditworthiness. An insurance company decides in its automated underwriting process whether it is willing to insure a customer’s personal risks. A dermatologist uses image analysis to assess a patient’s risk of skin cancer. All of this is already a reality today, and it requires companies to be transparent and able to explain their decisions.
Think carefully about the consequences of automated decisions
However, the moral dilemma does not begin with the self-driving car’s decision about whom to avoid in order to prevent an accident. It is about much more trivial, but nevertheless momentous, decisions. For example, the US supermarket chain Target knew about a teenage girl’s pregnancy even before her father did. This is relatively easy to do by combining shopping-cart and consumer data: if a 19-year-old woman buys cocoa butter lotion, a large bag, zinc and magnesium supplements, and a light blue carpet in March, a scoring algorithm can infer a fairly high probability of a pregnancy with a due date at the end of August. Is it a good idea for a retailer to act on this knowledge and send vouchers for baby items or congratulations on the pregnancy? In any case, Target wrote to the young woman as part of a campaign and sent the relevant coupons – much to the surprise of the expectant (and until then unaware) grandfather.
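The kind of scoring described above can be sketched as a simple weighted sum over signal products. To be clear, the product weights, the threshold, and the function names below are invented for illustration; Target's actual model is not public.

```python
# Hypothetical purchase-based propensity score.
# All weights and the 0.7 threshold are invented for illustration only.
PREGNANCY_SIGNALS = {
    "cocoa butter lotion": 0.3,
    "large tote bag": 0.1,
    "zinc supplement": 0.2,
    "magnesium supplement": 0.2,
    "light blue carpet": 0.1,
}

def pregnancy_score(basket):
    """Sum the weights of signal products found in a shopping basket."""
    return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in basket)

basket = ["cocoa butter lotion", "zinc supplement",
          "magnesium supplement", "light blue carpet"]
score = pregnancy_score(basket)
if score >= 0.7:  # invented decision threshold
    print(f"high pregnancy likelihood (score={score:.1f})")
```

The ethical problem is visible even in this toy version: a handful of innocuous purchases crosses a threshold, and a sensitive inference about a person is made without their knowledge.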
AI pioneers tend to act ethically
Companies should therefore carefully think through the consequences of their automated decisions and of the learned rules they use to approach customers. Indeed, many companies are already aware that poor or problematic results can reflect negatively on them. It therefore makes sense that they want to take steps to use AI ethically and stay in control.
This is also shown by a Forbes study supported by SAS. According to it, 70 percent of the companies surveyed worldwide that already use AI conduct ethics training for their IT employees, and 63 percent even have ethics committees to evaluate the use of AI. There is also a connection between thought leadership and ethical awareness: 92 percent of the companies that describe their AI implementation as “successful” train their technology experts on ethical issues, compared with just 48 percent of the companies that describe their AI use as not yet mature.
The customer, consumer, citizen, or patient has a legitimate interest in understanding why they are not getting the requested credit or insurance, or why data-based diagnostics attest to a certain medical risk.
They may ask themselves: What data was used for this? Did I even consent to the use of my data? Were the decisions made without prejudice? This raises the question of human control.
The relevant keywords for companies are therefore governance of the decision-making process, transparency, and explainability of the decision. Those affected must be able to trust decisions made using algorithms and data – without having to understand the underlying algorithms, processes, or decision rules.
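One common way to make an automated decision explainable is to record, alongside the verdict, the human-readable reasons that produced it. The following is a minimal sketch of that idea; the rules, thresholds, and field names are invented examples, not any real bank's criteria.

```python
# Minimal sketch of an explainable automated decision:
# the verdict is returned together with the rules that fired,
# so the outcome can be explained to the affected person
# without exposing the full model. All thresholds are invented.

def credit_decision(applicant):
    reasons = []
    if applicant["income"] < 25000:
        reasons.append("income below minimum threshold")
    if applicant["missed_payments"] > 2:
        reasons.append("more than two missed payments on record")
    approved = not reasons  # approve only if no rule fired
    return {"approved": approved, "reasons": reasons}

result = credit_decision({"income": 21000, "missed_payments": 3})
print(result["approved"])   # False
print(result["reasons"])    # both invented reasons listed
```

The design choice matters more than the code: because every rejection carries explicit reason codes, the affected person can be told why a decision was made without having to understand the underlying algorithm, which is exactly the trust requirement described above.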
But what exactly is “governance”? This is where the “four pillars of trust” come into play, a concept shaped by Scott Shapiro of KPMG.