The first generations of tech-for-good work took a solutionist approach, applying new technology to existing social problems. Scholars and activists are now driving growing awareness of the problems with technology itself. By exposing the negative consequences of tech, intended or otherwise, these communities draw attention to the risks of tech-centric approaches. Not all of the projects listed here frame their work in terms of ethics, but we use that lens for simplicity's sake.
Technologists and others applying tech to critical societal areas, from hospital data to policing to public engagement, risk introducing novel negative externalities in their attempts to 'change the world'. More and more people are paying attention to the costs that tech itself imposes on our lives.
The cooperative organizational model has been around for centuries. Platform coops offer an alternative to extractive gig-economy labor platforms by providing workers with equity in the platform and/or decision-making power.
The decentralization movement aims to counter the growing concentration of the internet and digital resources in a small number of powerful platform companies. In doing so, its proponents hope to promote free expression and resilience.
These projects open “black box” algorithms up to expert and/or public audit by sharing inputs, rules, and other components of how the system makes its decisions.
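To make the idea concrete, here is a minimal sketch of what sharing a system's inputs and rules for audit can look like. All names and weights below are hypothetical, invented for illustration; no project listed here is implied to work this way.

```python
# Illustrative sketch of a "transparent" decision system: the rules are
# published, and every decision is logged with its inputs and per-rule
# contributions so outside auditors can reproduce and inspect outcomes.
# RULES, THRESHOLD, and the applicant fields are hypothetical.

RULES = {"income": 0.5, "tenure_years": 0.3, "late_payments": -0.8}
THRESHOLD = 1.0

audit_log = []  # shared with auditors alongside the RULES themselves

def decide(applicant):
    """Score an applicant, logging inputs, contributions, and the outcome."""
    contributions = {k: RULES[k] * applicant[k] for k in RULES}
    score = sum(contributions.values())
    decision = score >= THRESHOLD
    audit_log.append({
        "inputs": applicant,
        "contributions": contributions,
        "score": score,
        "decision": decision,
    })
    return decision

decide({"income": 2.0, "tenure_years": 1.0, "late_payments": 0})
```

Because both the weights and the decision log are exposed, an expert or public auditor can recompute any score and check the system's claimed behavior against its actual behavior.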
Ethical Tech Resources
1. Caitlin E McDonald, PhD, provides an update on the burgeoning but disorganized field of Digital Ethics:
- “Conceptual frameworks for senior business leaders and board-level executives to consider possible future impacts of digitization or automation. The concerns of these tools are broadly in line with the types of other concerns that senior leaders have: how will proposed changes affect their relationships with customers and other stakeholders? What might a change do to core business models, and how does it compare to the current or potential future competitive landscape?
- Conceptual frameworks or governance toolkits for teams which are closer to the development of these tools: these are often more specifically about what might happen to or because of particular user groups (eg. what if a malicious actor were to use this new tool/feature? What safeguards need to be built in?)
- Emergent fairness, transparency, and anti-bias tooling that is built directly into the AI/ML developer workflow, increasingly becoming part of a standard library of components with similar benefits and risks to other standard software components.”
2. The next generation of data ethics tools, a short, qualitative research project by Open Data Institute and Consequential (2021):