I spend a lot of time talking to people about how to use technology and data strategically to evolve their teams, supercharge their ideas, and experiment with new opportunities.
In the course of these conversations, I ask questions to stretch out the space between idea and design decision, so we can explore the grey areas. These grey areas are where a choice of tool, process, technology, community, or language can alter the outcomes of a project or the work of an organisation.
And while exploring grey areas is a critical phase, at some point technology and design get hard-coded. A project’s design choices manifest in concrete ways for a community, a user, an audience.
So how do I think about weighing the pros, cons, and impossible-to-knows?
I’ve developed a responsible data design and decision-making aid that helps me work carefully through these considerations.
At RightsCon in February and at Data & Society in May, I walked through the framework using, as an example, a design choice that Amnesty International confronted in its Decoders initiative.
Should the Decoders microtasking site require log-in information from volunteers processing imagery?
- Risk: Does the technical choice minimise risk for the user?
  - Log-in information can be used against users who are volunteering from devices in countries where affiliation with Amnesty or a particular project might be politically risky.
- Utility: Does the technical choice accomplish an important component of the project’s strategy?
  - Log-in information reinforces the virtuous cycle of gamification, since it allows users to compare their performance with others.
  - Log-in information makes it easier for Amnesty to follow up with users and alert them to other opportunities to support Amnesty’s work.
- Usability: Is the technical choice intuitive for the target user?
  - Log-ins are a familiar pattern in interfaces.
  - Log-ins can put off some users who might otherwise have been willing to process a few images.
- Agency: Does the technical choice allow for agency of the user?
  - Log-ins allow for attribution; a volunteer without log-in information isn’t credited for the energy they invest.
  - Not offering a log-in option makes the decision about the risk model on the volunteer’s behalf, rather than leaving it to them.
There are obviously many more aspects to each of these consideration categories, but this should give you an idea of what they mean. There is also no formula for deciding based on this framework; rather, it is a structured way of exploring a design choice.
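To make that structure concrete, here is a minimal sketch of the framework as a reusable checklist. The four category questions come from the list above; everything else (the function and variable names, the sample notes) is my own invention for illustration, not Amnesty’s code.

```python
# A hypothetical sketch of the framework as a reusable checklist.
# The category questions come from the framework above; all other
# names are invented for illustration.
FRAMEWORK = {
    "Risk": "Does the technical choice minimise risk for the user?",
    "Utility": "Does the technical choice accomplish an important "
               "component of the project's strategy?",
    "Usability": "Is the technical choice intuitive for the target user?",
    "Agency": "Does the technical choice allow for agency of the user?",
}

def review(design_choice, considerations):
    """Print the notes gathered under each category for one design choice."""
    print(f"Design choice: {design_choice}")
    for category, question in FRAMEWORK.items():
        print(f"\n{category}: {question}")
        for note in considerations.get(category, ["(not yet explored)"]):
            print(f"  - {note}")

review(
    "Require log-in for Decoders volunteers?",
    {
        "Risk": ["Log-in data could expose volunteers in risky contexts."],
        "Agency": ["Log-ins let volunteers be credited for their work."],
    },
)
```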
And if you were wondering, Amnesty International chose to include an optional log-in feature. This maximised agency by letting users determine the risk for themselves; maximised usability by letting users proceed with microtasks without logging in if they chose; and balanced utility against the other categories by leaving the decision with users. Amnesty made supplementary design choices to minimise risk, but chose to let users decide about log-ins in particular.
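As a rough illustration of what an optional log-in can look like on the backend (again, my own sketch under assumed names, not Amnesty’s implementation), a submission handler accepts anonymous contributions and records attribution only when a volunteer has chosen to identify themselves:

```python
# A hypothetical sketch of optional log-in in a microtasking backend.
# All names here are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Submission:
    image_id: str
    label: str
    volunteer_id: Optional[str] = None  # None: the volunteer stayed anonymous

def record_submission(image_id: str, label: str,
                      volunteer_id: Optional[str] = None) -> Submission:
    """Store a microtask result; attribution is strictly opt-in."""
    # The task result counts either way; only volunteers who logged in
    # are credited and can compare their performance with others.
    return Submission(image_id, label, volunteer_id)

# An anonymous volunteer and a logged-in volunteer both contribute:
record_submission("img-0042", "structure-destroyed")
record_submission("img-0043", "structure-intact", volunteer_id="vol-17")
```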
Does this framework make sense to you?
We’d love to hear your thoughts on frameworks that you use to think through responsible data decisions.