An intersectional take on technology, rights and justice

There’s a word – an entire multi-faceted concept in itself – that comes to my mind very often, whether I’m reading the news, working, talking with loved ones or following someone’s train of thought online.

Intersectionality.

The concept it expresses has always been at the core of my perspective on the world and of my work exploring how technology can most effectively serve justice and rights.

So I decided to write about it, as it might turn out to be useful for others as well – next time you’re scraping data to investigate the patterns behind an issue, supporting a group in building their advocacy strategy, or making up your own mind before going to the polls.

Continue reading An intersectional take on technology, rights and justice

Newsletter #9: List and Listen

Newsletter #9: sent!
(and archived if you missed it)

Work-wise: publishing a curated list of podcasts that help widen representation and democracy in the media space; following a hashtag frenzy about all things journalism, web, movement building and resistance; putting the final touches on articles about to be published.

Links-wise: gun violence, slow violence, the FBI admitting flaws in hair analysis over decades, tools to avoid snoopers online, wireless routers spying on our breathing, Rihanna breathing it in and out, and Cher.

If you are not yet a subscriber: subscribe now!

The Ethics of Algorithms: notes, emerging questions and resources

Tweets relating to Ferguson after Michael Brown was shot. Map based on mentions of the city and other related keywords. Via The Huffington Post.

Algorithms are ruling an ever-growing portion of our lives.
They are adopted by health insurers to assess our chances of getting sick, by airlines to make our flights safer, by social media companies to draw our attention to ads, by governments to predict criminal activity.
They can guess a lot of things about us with great accuracy – gender, sexual orientation, race, personality type – and can also be used to influence our political preferences, control what we do, target what we say and, in extreme cases, limit our freedom.

This is not to say that algorithms deserve an evil reputation. Both algorithms and human judgement can be beneficial, malicious, biased – and even wrong. The main difference between them is that over the years (centuries, even) we have developed a pretty good understanding of how human judgement works, while, when it comes to algorithms, we’re just starting to get to know each other.

Continue reading The Ethics of Algorithms: notes, emerging questions and resources