We’ve been writing for some time about digital privacy and how Google, the rest of the Big Six, and some startups threaten privacy and other fundamental human rights (see “Want to know more?” below). As consumers and governments wake up to the risks of surveillance-based business models, Google/Alphabet has been at the center of the debate. Now Alphabet is facing a backlash from its own shareholders over the issue, reports the Financial Times (FT).
Ten large Alphabet shareholders have co-filed a resolution that calls on the company to strengthen its oversight of the human rights risks created by its business model. They are also concerned about bias and about how Google reinforces, amplifies, and systematizes discrimination, disinformation, hate speech, and violence. The investors include AXA Investment Managers, Aviva Investors, the Church of England, and several others who preferred not to be named; together they hold more than US$2.4 trillion in assets.
The resolution calls on Alphabet to set up an independent board-level committee to monitor human rights risks in its products and value chain, reports the FT. It will be put to a vote at Alphabet’s annual meeting in June.
This is the culmination of an ongoing struggle by a wider group of Alphabet investors who voiced similar concerns last year. At that time, Alphabet dismissed them, prompting the activists to join forces.
Alphabet can no longer afford to ignore this issue, not when its own shareholders – typically a more moderate contingent than, say, technology users or civil rights activists – are demanding that it pay attention. The overall climate has changed, and different winds are blowing. Regulators are actively working to rewrite privacy protection laws: from the Canadian Office of the Privacy Commissioner’s modernization of PIPEDA (the country’s law governing how businesses collect, use, and disclose personal information) to the proposed Algorithmic Accountability Act of 2019 in the US to European initiatives and many more. Will Alphabet listen this time? It remains to be seen, but we are hopeful.
Transparency, explainability, and trust are pressing topics in AI/ML today. While much has been written about why these are important and what organizations should do, no tools to help implement these principles have existed – until now.
Recently I attended the inaugural Emotion AI conference, organized by Seth Grimes, a leading analyst and business consultant in the areas of natural language processing, text analytics, sentiment analysis, and their business applications. So, what is emotion AI, why is it relevant, and what do you need to know about it?
SortSpoke’s novel approach to machine learning addresses a longstanding problem in financial services: how to efficiently extract critical data from inbound, unstructured documents at 100% data quality.
Amazon is offering its cashierless store technology to other retailers. The technology, known as “Just Walk Out”, eliminates checkout lines, offering an “effortless” shopping experience and shifting store associates to “more valuable activities”.
As the COVID-19 pandemic is shutting down whole countries, a few of you may be wondering whether AI can help create a vaccine for the virus responsible. After all, AI is magic, right?
The EU plans to invest €6 billion to build a single European data space, reports EURACTIV. The envisioned space will house personal, business, and “high-quality industrial data” and create the infrastructure for data sharing and use across businesses and nations.
“Facebook quietly acquired another UK AI startup and almost no one noticed,” reported TechCrunch on February 10. We looked into why.
In a landmark ruling, a Dutch court has ordered an immediate halt to the government’s use of an automated system for detecting welfare fraud.
Databricks, a data processing and analytics platform with a strong focus on AI and ML, has partnered with Immuta to deliver automated end-to-end data governance for AI, data science, and ML projects.