Last week, Google’s CEO, Sundar Pichai, called for new artificial intelligence (AI) regulations. The next day, IBM called for rules to eliminate AI biases that can discriminate against consumers, citizens, and employees based on their gender, age, and ethnicity, among other characteristics.
Mr. Pichai wrote in an editorial for the Financial Times, “There is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it,” The Verge reports. (The FT’s site was not accessible at the time of writing this note.)
He called for a cautious, nuanced approach tailored to the technologies and sectors in which AI is used. In some areas, such as autonomous vehicles, new rules are needed. Others, including financial services, insurance, and healthcare, are already regulated, and the existing frameworks should be extended to cover AI-powered products and services.
“Companies such as ours cannot just build promising new technology and let market forces decide how it will be used,” wrote Mr. Pichai. “It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.”
The sentiment is echoed by IBM, which issued policy proposals in preparation for the AI panel hosted by its CEO, Ginni Rometty, at the World Economic Forum in Davos last week.
IBM recommends that companies work with governments to develop standards to avoid discrimination by AI systems, that they conduct assessments to determine risks and harms, and that they maintain documentation to be able to explain decisions that adversely impact individuals.
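One of those recommended assessments can be made concrete. The sketch below is a minimal, illustrative example (not from IBM’s proposals) of auditing a model’s decisions for demographic parity across groups; the data, group names, and tolerance are all hypothetical assumptions.

```python
# Illustrative bias audit: demographic parity across groups.
# All data and thresholds below are made up for the example.

def selection_rate(decisions):
    """Fraction of positive decisions (e.g. 1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rate between any two groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical audit sample of past decisions, keyed by group.
audit = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 approved (75%)
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3 of 8 approved (37.5%)
}

gap = demographic_parity_gap(audit)
print(f"demographic parity gap: {gap:.3f}")  # prints 0.375
if gap > 0.2:  # illustrative tolerance, set by policy
    print("flag for review and document the rationale")
```

A gap this large would trigger the kind of documented review IBM describes: recording why the disparity exists and whether the decisions that adversely impact individuals can be explained and justified.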
As consumers become increasingly aware of the degree to which AI shapes our lives and society, and of the harms resulting from biased applications, pressure is intensifying on technology firms and governments alike. Both are being pushed to put guardrails in place and, in some cases, to apply the brakes so that society can catch up with the pace of innovation and work out what needs to be regulated and how.
Of particular concern are facial recognition technologies (FRTs), which are used by law enforcement agencies around the world to identify and track (potential) criminals and by governments for social surveillance and social engineering, including monitoring and persecuting ethnic minorities.
These technologies and the pervasive surveillance they create violate basic human rights, such as the right to privacy. (See Amnesty International’s report “Surveillance Giants: How the Business Model of Google and Facebook Threatens Human Rights,” and our note Amnesty International Calls Google and Facebook a Threat to Human Rights.)
The Economist a while back compared AI to the ancient Roman god Janus, who is depicted with two faces, one looking into the past and the other into the future. (He is the god of “beginnings, gates, transitions, time, duality, doorways, passages, and endings.”) Janus, writes The Economist, “contained both beginnings and endings within him. That duality characterizes AI, too.”
There is “good” AI and “bad” AI, and then there is “good” AI with some bad mixed in – biases, for example. And then there is “good” technology that could lead to unanticipated, undesired, and potentially horrific outcomes, just like numerous other things in life and many technologies before AI. As a society, we need time to anticipate and sort this out, hopefully before we build the AI equivalent of the atomic bomb. And I don’t mean the singularity (i.e., hypothetical uncontrolled and irreversible technological growth).
And it is our responsibility as technology and business leaders to think through the consequences of using new technologies and the impact they may have on individuals, communities, society, and the world at large.
To get educated on AI biases and start eliminating them, see Info-Tech’s blueprint Mitigate Machine Bias.
To learn about AI guardrails and the controls we recommend you start putting in place even if you are just getting your feet wet with AI, look out for our upcoming blueprint on AI governance, or reach out to the analysts to get a kick-start.
Transparency, explainability, and trust are pressing topics in AI/ML today. While much has been written about why these are important and what organizations should do, no tools to help implement these principles have existed – until now.
Recently I attended the inaugural Emotion AI conference, organized by Seth Grimes, a leading analyst and business consultant in the areas of natural language processing, text analytics, sentiment analysis, and their business applications. So, what is emotion AI, why is it relevant, and what do you need to know about it?
SortSpoke’s novel approach to machine learning answers a longstanding problem in financial services – how to efficiently extract critical data from inbound, unstructured documents at 100% data quality.
Amazon is offering its cashierless store technology to other retailers. The technology, known as “Just Walk Out,” eliminates checkout lines, offering an “effortless” shopping experience and shifting store associates to “more valuable activities.”
As the COVID-19 pandemic is shutting down whole countries, a few of you may be wondering whether AI can help create a vaccine for the virus responsible. After all, AI is magic, right?
Alphabet is facing backlash from its shareholders over its approach to digital privacy, reports the Financial Times. And not for the first time. This time, however, things will need to change.
The EU plans to invest €6 billion to build a single European data space, reports EURACTIV. The envisioned space will house personal, business, and “high-quality industrial data” and create the infrastructure for data sharing and use across businesses and nations.
“Facebook quietly acquired another UK AI startup and almost no one noticed,” reported TechCrunch on February 10. We looked into why.
In a landmark ruling, a Dutch court has ordered an immediate halt to the government’s use of an automated system for detecting welfare fraud.