We recently covered Google’s lackadaisical approach to data privacy in the context of its partnership with Ascension, a US healthcare giant. (See our tech briefs Google Builds AI-Powered Tools for Patient Care: Project Nightingale and Google Has Personal Medical Data of Up to 50 Million Americans: Project Nightingale.) Last month, Google came under fire again, along with Facebook, when Amnesty International published a report, “Surveillance Giants: How the Business Model of Google and Facebook Threatens Human Rights.”
In this report, Amnesty International criticizes Google and Facebook for their business practices. The report says that their pervasive surveillance machinery violates core human rights: the rights to dignity, autonomy, and privacy; the right to control information about ourselves; and the right to a space where we can freely express our identities.
The report concludes with a ten-point list of recommendations urging governments to take action.
The report stops short of calling for a breakup of Google and Facebook, but it recommends that governments take a stronger stance on universal access to digital services and the protection of human rights, including “taking measures to disrupt the market” – essentially an indirect way of saying the same thing.
The report contains recommendations for businesses, too.
While these recommendations are aimed at Google and Facebook, as well as the others in the Big Six – Apple, Amazon, IBM, and Microsoft – this is a warning call for everyone collecting and using customer, consumer, patient, partner, and employee data, which is effectively every organization these days.
The general public is increasingly aware of how their personal, private, and in many cases intimate data is being used and abused – through manipulation, exploitation, discrimination, restricted access to information and economic opportunities, and the weakening of civil society and democratic institutions. As a result, pressure is mounting on governments to become more active and proactive in controlling, and in some cases restricting, the application of artificial intelligence (AI) and machine learning technologies and harmful business practices.
The race to control data privacy, ownership, and usage in AI applications is just starting, and it is only going to intensify. Is your organization getting ready?
The UN Human Rights Council states that “companies have a responsibility to respect all human rights.” And the UN Guiding Principles on Business and Human Rights require companies to “take ongoing, proactive, and reactive steps to ensure they do not cause or contribute to human rights abuses – a process called human rights due diligence.”
To get educated on data-related risks with AI and machine learning and get started with this due diligence, download our blueprint Mitigate Machine Bias.
Transparency, explainability, and trust are pressing topics in AI/ML today. While much has been written about why these are important and what organizations should do, no tools to help implement these principles have existed – until now.
Recently I attended the inaugural Emotion AI conference, organized by Seth Grimes, a leading analyst and business consultant in the areas of natural language processing, text analytics, sentiment analysis, and their business applications. So, what is emotion AI, why is it relevant, and what do you need to know about it?
SortSpoke’s novel approach to machine learning addresses a longstanding problem in financial services: how to efficiently extract critical data from inbound, unstructured documents at 100% data quality.
Amazon is offering its cashierless store technology to other retailers. The technology, known as “Just Walk Out,” eliminates checkout lines, offering an “effortless” shopping experience and shifting store associates to “more valuable activities.”
As the COVID-19 pandemic is shutting down whole countries, a few of you may be wondering whether AI can help create a vaccine for the virus responsible. After all, AI is magic, right?
Alphabet is facing backlash from its shareholders over its approach to digital privacy, reports the Financial Times. And not for the first time. This time, however, things will need to change.
The EU plans to invest €6 billion to build a single European data space, reports EURACTIV. The envisioned space will house personal, business, and “high-quality industrial data” and create the infrastructure for data sharing and use across businesses and nations.
“Facebook quietly acquired another UK AI startup and almost no one noticed,” reported TechCrunch on February 10. We looked into why.
In a landmark ruling, a Dutch court has ordered an immediate halt to the government’s use of an automated system for detection of welfare fraud.