Tweets relating to Ferguson after Michael Brown was shot. Map based on mentions of the city and other related key words. Via The Huffington Post.
Algorithms are ruling an ever-growing portion of our lives.
They are adopted by health insurers to assess our chances of getting sick, by airlines to make our flights safer, by social media companies to steer our attention towards ads, and by governments to predict criminal activity.
They can guess with great accuracy a lot of things about us, such as gender, sexual orientation, race, personality type – and can also be applied to influence our political preferences, control what we do, target what we say and, in extreme cases, limit our freedom.
This is not to say that algorithms deserve an evil reputation. Both algorithms and human judgement can be beneficial, malicious, biased – and even wrong. The main difference between them is that over the centuries we have developed a pretty good understanding of how human judgement works, while, when it comes to algorithms, we’re only just getting to know each other.
The two-day event “The Ethics of Algorithms”, hosted by the Centre for Internet and Human Rights and attended by a cross-disciplinary group of professionals from civil society, industry, technology, policy making and academia, looked into the role of algorithms in two sensitive domains: freedom of expression – and its troubles with social media platforms and radical content – and society, with the ethical challenges it faces.
This post collects some of the questions and reflections that emerged during the conference, along with some additional resources, in the hope of supporting the next steps in these conversations.
"The way algorithms are right or wrong is very different from the way humans are right or wrong. We're so not ready for this." #EOA2015
— Frederike Kaltheuner (@fre8de8rike) March 10, 2015
Notes and emerging questions
- Clarifying terms
We realised that discussions about the relations between algorithms, policy and freedom of expression often end in confusion, because key concepts are assumed rather than defined. For example:
- what do we mean by freedom of expression? In different countries, this has different meanings.
- what do we mean by threat? And how do we make a distinction between threat to citizens and to the state?
- what do we mean by radical content? Both terrorist and extremist content end up being labelled as radical, but while terrorist content is illegal, extremist content isn’t (even though it can still have dangerous consequences).
- how do we define terrorism? Can violence serve as the defining criterion if we don’t yet have a clear definition of violence itself? Should we shift the focus from how closely a group matches our definition of terrorism to the actual harmful effects the group could cause?
"[Terrorist and extremist] content providers are using the same techniques honed by spam & scam people over the last decade." #EoA2015
— Zeynep Tufekci (@zeynep) March 9, 2015
- State, social media companies and freedom of expression: a look at policy, legal and technical challenges.
Policy-wise, if social media companies adopt algorithms to flag radical content, this can interfere with freedom of expression, state policies and human rights standards. (Such interference can be reduced if companies collaborate with experts in the field to create less intrusive policies – though how to identify the most suitable experts is a delicate issue of its own.) A recent example of the confusion between the roles of state and companies is the Lee Rigby report, which stated that UK intelligence agencies could not have prevented the crime, but that Facebook should have alerted the authorities to the publication of extremist messages online.
A number of questions emerge then:
- what does it mean that so much of our life is taking place publicly, but with so much private intermediation from companies?
- how does intermediary liability pose challenges for company responses to violent extremism?
- what’s the difference between state policy and terms of service agreements of companies?
- what’s the level of public acceptance of state intervention?
Furthermore, since labelling online content as terrorist is inherently political, what does it mean when a social media company does so?
Consider the case of the anti-Islamic video Innocence of Muslims, posted on YouTube in 2012. News reports said that the White House asked Google to take it down, and that “Google refused, citing its own guidelines regarding hate speech (though it later took down the video in Egypt and Libya, due to what it called the “very difficult situation” in those countries)”. Does this mean that a company based in North America took a political decision about what was right for people living in two North African countries?
From a legal point of view, debates and interpretations of terminology make the waters even murkier.
First of all: social media companies and material support. In 2011, Glenn Greenwald speculated that the US Department of Justice “could consider Twitter’s providing of a forum to a designated Terrorist organization to constitute the crime of ‘material support of Terrorism.’”
Material support is defined as “any property, tangible or intangible, or service […]”.
Is social media a service? Short answer: yes – and as a service, it falls within that definition of material support. But can social media companies be held liable for terrorist content under material support law? The law would only apply if there were some form of coordination. Is that the case here?
In addition, as Emily Goldberg Knox notes in her article on social media companies and material support: “It is also unclear whether satisfying the coordination requirement is sufficient to satisfy the concerted activity requirement.” And: “Additionally, despite the potential threat that results from terrorist groups using social media, other factors, such as counter-terrorism value and the First Amendment, warrant consideration.”
As she concludes: “How courts, legislators, and the executive branch will weigh these factors remains to be seen.”
From a technical point of view, the automated algorithms adopted by social media companies raise a wide range of controversies.
On one side, automation, when managed correctly, can help social media companies provide customised engagement and recommendations that enhance their popularity among users. On the other, when the very same companies use automated algorithms to police content that touches on fundamental rights – such as freedom of expression, the right to privacy and freedom of belief – issues arise across a number of domains, from human rights to science and ethics. Is it time to develop an epistemic foundation for the ethics of algorithms and of the people who develop them?
We need a framework for policing data and algorithm. "Framework" came up a lot in the two days discussions #EOA2015
— Mohamad محمد (@monajem) March 10, 2015
- Citizen action
We’re about to enter a world where machines will make a lot of decisions for us – partly as a result of mass surveillance. What are we going to do? Deciding not to be comfortable with this might be our best bet. The safest nations are those with active populations, rallying and fighting for democracy – which, like freedom of expression, is not a given in many countries.
As citizens we have a responsibility to look at what our legal structures say and to advocate for what we want to change: to hold our governments accountable and demand transparency about the policies that handle radical content and our freedoms.
In order to do this, being objective (as in: removing the fear of terrorism from our reflection) would help us understand what we want freedom of expression to mean at a global level, not on a case-by-case basis.
- Media pluralism
Media pluralism is a prerequisite for freedom of expression. Independent and pluralistic media are essential to any society to ensure freedom of opinion and expression and the exercise of other human rights.
It’s our responsibility to defend pluralism – but this doesn’t come without challenges. Lack of universal access to media, content restrictions on the Internet, and states’ inconsistent approaches to Internet freedom, online pluralism and the relevance of international legal standards on freedom of expression to Internet-based media all endanger plurality – and therefore democracy.
Governments should recognise the relevance of international human rights principles to media pluralism and adopt a rights-based approach to policies regarding freedom of expression.
- From ‘at risk’ to ‘a risk’: the stigmatic potential of predictive policing
Using data drawn from years’ worth of crime reports, algorithms can identify areas with a high probability of certain types of crime, and groups considered likely to commit them.
This practice can assist the work of law enforcement agencies, but it also raises concerns about privacy, surveillance and how much power should be handed over to algorithms. Predictive policing can create categorical, biased suspicion of people in predicted crime areas and lead to unnecessary questioning or excessive searches.
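To make the mechanics concrete, here is a minimal sketch of the simplest kind of hotspot model: counting past reports per map grid cell and flagging the busiest cells. The coordinates, cell size and report data are invented for illustration; real predictive policing systems are far more elaborate, but they inherit the same basic limitation, in that they reproduce whatever bias exists in how the historical reports were collected.

```python
# Minimal, illustrative hotspot sketch: count historical incidents per grid
# cell and flag the busiest cells. All data below is invented.
from collections import Counter

# Each record: (latitude, longitude) of a past crime report (hypothetical values).
reports = [
    (51.5072, -0.1276), (51.5075, -0.1279), (51.5101, -0.1340),
    (51.5074, -0.1277), (51.5200, -0.1000), (51.5073, -0.1275),
]

CELL = 0.005  # grid cell size in degrees: coarser cells smooth out noise

def cell_of(lat: float, lon: float) -> tuple:
    """Snap a coordinate to its grid cell."""
    return (round(lat / CELL), round(lon / CELL))

counts = Counter(cell_of(lat, lon) for lat, lon in reports)

# The "predicted hotspots" are simply the historically busiest cells:
# whatever bias shaped the reports is passed straight through to the prediction.
for cell, n in counts.most_common(2):
    print(f"cell {cell}: {n} past reports -> flagged for extra patrols")
```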
Considering this:
- what are the police expected to do with the output? This is not entirely clear.
- the original data are collected by people, which means they can be skewed by discriminatory practices at the source. So if datasets can’t be considered reliable, and decisions about their use are so subjective, shouldn’t not only the algorithms, but also the datasets and the decisions taken about them, be transparent?
- by following an algorithm’s results too closely, we risk limiting our analysis to details while losing the big picture.
- there’s a fine line between using predictive policing to target someone who is an activist and deciding that that person represents a threat. This quickly translates into stigmatisation, exclusion, discrimination and indiscriminate surveillance of a community.
- predictive policing is a political decision, and it’s ultimately a matter of power. For example, we have a lot of data about the poor because power is exercised to force them to provide far more information than is asked of wealthier citizens (see: concerns around the adoption of biometric analysis in development).
Does surveillance become 'privacy protecting' by virtue of greater accuracy, more precise targeting? #EOA2015
— becky kazansky˙ ͜ʟ˙ (@pondswimmer) March 10, 2015
- A quantitative approach to the analysis of war crimes
Algorithms and statistics can help us analyse human rights violations, establish responsibility for war crimes, and highlight patterns of violence that can serve as evidence in genocide trials.
The example we focused on was the quantitative study of state violence in Guatemala between 1960 and 1996 by Patrick Ball, Paul Kobrak and Herbert F. Spirer of the Human Rights Data Analysis Group. Genocide follows patterns: to kill a large group of people, knowledge about the group’s behaviour is needed. So:
- can algorithms identify violence patterns?
- and if so, how can we decrypt them?
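As a rough illustration of what “identifying violence patterns” can mean in practice, here is a minimal sketch that groups documented killings by region and month and flags months that spike far above a region’s baseline. The region names, dates and victim counts are invented for illustration; the actual quantitative work behind studies like the Guatemala report involves far more careful statistics.

```python
# Minimal, illustrative sketch: flag unusual concentrations of documented
# killings by region and month. All values below are invented.
from collections import defaultdict

# (region, "YYYY-MM", documented victims) -- hypothetical records
records = [
    ("Quiché", "1982-03", 30), ("Quiché", "1982-04", 150),
    ("Quiché", "1982-05", 30), ("Petén", "1982-04", 10),
]

by_region = defaultdict(list)
for region, month, victims in records:
    by_region[region].append((month, victims))

for region, series in by_region.items():
    baseline = sum(v for _, v in series) / len(series)  # average per month
    for month, victims in series:
        if victims > 2 * baseline:  # crude threshold for a "spike"
            print(f"{region} {month}: {victims} documented victims "
                  f"(regional baseline ~{baseline:.0f}) -> possible pattern")
```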
Information Control and Strategic Violence – Anita Gohdes, 31st Chaos Communication Congress [31c3] (December 28, 2014)
- Reputation, search, and finance
In The Black Box Society, Frank Pasquale identifies three aspects of our lives that are heavily monitored and influenced by algorithms:
- reputation: the portrait painted by everything we click, browse, watch and listen to, which can be used to evaluate us during a hiring process or to target us as potential customers for anything from a new car to a pregnancy test;
- search: we look for information online, and what we find is what the search engine we’re using wants us to find. Search engines use ranking algorithms to order results, and what we get back is a combination of the company’s and the algorithm’s biases applied to our query (a toy sketch of such weighting follows this list);
- finance: algorithms are known to hide financiers’ moves very well. To mention an example from recent years, it was algorithms that made it possible for banks to combine sub-prime mortgages into respectable looking investments, contributing to the financial crisis of 2007-2008.
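To illustrate the point about search, here is a toy sketch of a ranking function. The documents, scores and weights are all invented; it only shows how the choice of weights, an editorial decision made by the company, determines which result comes first.

```python
# Minimal, illustrative ranking sketch: the weights encode the ranker's bias.
# Documents and their scores are invented.
docs = {
    "independent blog post": {"relevance": 0.9, "advertiser_value": 0.1},
    "sponsored product page": {"relevance": 0.6, "advertiser_value": 0.9},
}

def rank(w_relevance: float, w_advertiser: float) -> list:
    """Order documents by a weighted score; changing the weights reorders results."""
    score = lambda d: w_relevance * d["relevance"] + w_advertiser * d["advertiser_value"]
    return sorted(docs, key=lambda name: score(docs[name]), reverse=True)

print(rank(w_relevance=1.0, w_advertiser=0.0))  # blog post ranks first
print(rank(w_relevance=0.5, w_advertiser=1.0))  # sponsored page ranks first
```

The same query returns a different ordering depending on weights that the user never sees, which is exactly the kind of opacity Pasquale describes.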
What’s next
As we speak, key international events are convening representatives from civil society, industry, policy making and academia to keep working on the hardest challenges of the digital age we’re just entering. To name a few: the Circumvention Tech Festival (Spain) and the upcoming Responsible Data Forum (multiple locations), RightsCon (Philippines) and Global Conference on CyberSpace (The Netherlands). It’s essential that we make the most of this momentum and join forces to think about how we want the world’s rights to look, now and for the generations to come.
It’s clear that we need a new, multi-disciplinary understanding of how the Internet and the algorithms keeping it in motion work, and of what this means from a global, intersectional perspective. It’s a matter of human rights and of the exercise of power, and it’s crucial for our societies to keep underlining that freedom of expression, plurality and privacy are fundamental rights we all need to fight for and defend.
Additional resources:
- The Slippery Slope of Material Support Prosecutions: Social Media Support to Terrorists – Emily Goldberg Knox, Hastings Law Journal (August 22, 2014)
- The (ab)uses of social media for understanding international conflict – Thomas Zeitzoff, John W. Kelly and Gilad Lotan, The Washington Post (February 25, 2015)
- More Surveillance Won’t Protect Free Speech – Jillian York, Gizmodo (January 13, 2015)
- What Happens to #Ferguson Affects Ferguson: Net Neutrality, Algorithmic Filtering and Ferguson – Zeynep Tufekci, The Message (August 14, 2014)
- Why hasn’t #OccupyWallStreet trended in New York? – Megan Garber, Nieman Lab (October 17, 2011)
- We Can’t Trust Uber – Zeynep Tufekci, Brayden King, The New York Times (December 7, 2014)
- Beware the Smart Campaign – Zeynep Tufekci, The New York Times (November 16, 2012)
- Facebook Could Decide an Election Without Anyone Ever Finding Out – Jonathan Zittrain, New Republic (June 1, 2014)
- Facebook Wants You to Vote on Tuesday. Here’s How It Messed With Your Feed in 2012 – Micah L. Sifry, Mother Jones (October 31, 2014)
- Technology and Social Control: The Search for the Illusive Silver Bullet Continues – Gary T. Marx, Encyclopedia of the Social & Behavioral Sciences (forthcoming)
- State Violence in Guatemala, 1960-1996: A Quantitative Reflection – Patrick Ball, Paul Kobrak, Herbert F. Spirer, American Association for the Advancement of Science (1999) [pdf – english] [pdf – español]
- The Algorithmic Self – Frank Pasquale, The Hedgehog Review (Spring 2015)
- Racial Bias, Even When We Have Good Intentions – Sendhil Mullainathan, The New York Times (January 3, 2015)