We live in a world where technology evolves and changes us at incredible speed; information overload is everywhere, while our capacities stay the same. This clash creates a problem, and two panels from SXSW 2022 highlighted some of its aspects. So, should we put the HUMAN back into our technologies? Should we design technologies that take our humanity into consideration?
Dr. Renée Richardson Gosline talked about AI, the power frictionless systems hold over us, and ways to resist it. Tristan Harris (Co-founder, Center for Humane Technology) argued that the way we design technology has to change and incorporate more wisdom. Below we highlight some of the ideas presented by the speakers.
As an IBM training document from the 1970s put it, "a computer can never be held accountable, therefore a computer must never make a management decision". Today, however, AI is used more and more across industries: hiring, healthcare decisions, credit lines, investments, and so on.
Moreover, 50 billion dollars was spent on AI in 2020, according to McKinsey, and that figure is expected to double this year. In healthcare, AI generates excitement, yet 70% of it is purchased from vendors: the organizations deploying it did not build it, so it is pre-fab AI. And 65% of executives admitted, "I have no idea how this works".
In short, AI is used ever more pervasively in management decisions, while many managers have no idea how it works and millions of people's lives are affected.
Dr. Renée Richardson Gosline [Lead Research Scientist in the Human-AI Interaction group at MIT's Initiative on the Digital Economy] examined these issues, with a spotlight on the frictionless mantra embedded in many AI systems.
THE FRICTIONLESS FEVER
Frictionless has become a keyword adopted across industries; we have frictionless shopping, frictionless finance, travel, investments, experiences; basically, frictionless everything. But why?
Information keeps growing, and humans make on average 34,000 decisions per day, some trivial (what to wear), some weighty. This creates cognitive load.
Unable to process everything at once, humans come to like frictionless.
Moreover, when you are vulnerable (tired, exhausted, with a strong desire not to mess things up), you begin to outsource your cognitive tasks, your mind, more easily.
For example, up to 70% of Americans feel the healthcare system is difficult to navigate; and taxes are so confusing for most people that many ask someone else to do them.
And there are executives everywhere who have caught this frictionless fever, ready to deploy systems that put frictionlessness at their core.
It is commonly assumed that two systems are involved in decision-making: one automatic, low-friction, processing immediate decisions (System 1), and one deliberative, high-friction (System 2).
Behavioral economics spread the frictionless idea: if we want people to make decisions fast in complex systems, then we should design those systems so that System 1 can make the choice quickly. But frictionless fever assumes System 1 is always the one to leverage, and that, Dr. Renée Richardson Gosline argues, is not an appropriate strategy for AI.
100% of AI is biased because it is trained on historical data. What happened in the past will not necessarily happen in the future, especially if the group you belong to was historically barred from certain access.
Second, AI systems are programmed by biased individuals. Even without malicious intent, a machine can go down a biased road.
Seen this way, frictionless AI is problematic, because AI is often assumed to be neutral. It is not. An AI trained on skewed historical data, for example on patients with predominantly lighter skin, cannot predict performance or disease well enough for everyone.
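The point about skewed historical training data can be made concrete with a toy simulation. Everything below (the "marker", the group proportions, the numbers) is invented purely for illustration: a group-blind model fitted to data dominated by one group ends up noticeably less accurate on the under-represented group.

```python
import random

random.seed(0)

def marker(group, sick):
    # Illustrative assumption: the diagnostic marker presents
    # differently in the two groups, and group B's signal is weaker.
    if group == "A":
        mean = 8.0 if sick else 4.0
    else:
        mean = 5.0 if sick else 2.0
    return random.gauss(mean, 0.5)

# Historical training data: 95% group A, only 5% group B.
train = []
for _ in range(2000):
    g = "A" if random.random() < 0.95 else "B"
    s = random.random() < 0.5
    train.append((s, marker(g, s)))

# "Train" a group-blind model: pick the single threshold with the
# best accuracy on the (skewed) historical data.
threshold = max((t / 10 for t in range(0, 120)),
                key=lambda t: sum((x > t) == s for s, x in train))

def accuracy(group, n=1000):
    cases = [(random.random() < 0.5) for _ in range(n)]
    return sum((marker(group, s) > threshold) == s for s in cases) / n

acc_a = accuracy("A")   # the well-represented group scores high
acc_b = accuracy("B")   # the under-represented group does much worse
```

The model looks excellent on aggregate metrics because group A dominates the data, which is exactly why per-group evaluation (rather than frictionless trust in one headline number) matters.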
Meanwhile, executives are buying AI systems with no idea how these work. The black box of AI may be new and shiny, but its workings are opaque; an executive at a company should know how it works. "We should be thinking more about friction, because we might be creating the perfect storm: AI bias meeting human bias; frictionless applied can create catastrophe," considered Dr. Renée Richardson Gosline.
If decisions go wrong, who is responsible? Dr. Renée Richardson Gosline and her team ran an experiment involving three scenarios for decision-making:
A human person (Jerry) makes a decision by himself…
A human consults another human being…
A human receives inputs from an algorithm…
Jerry always makes the same decision. The question is: will Jerry be held more or less accountable for his decision by the subjects participating in the experiment?
[The finding: when you incorporate AI into the decision, Jerry is held less accountable!]
This relates to a phenomenon called "diffusion of responsibility".
If you make a decision in a group, you are not held fully responsible; in fact, perceived responsibility divides by the number of people in the room: in a group of four, each person carries 25%. It turns out that AI systems can diffuse responsibility the same way. You can say: the algorithm made me do it!
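The 1/n arithmetic behind diffusion of responsibility is easy to sketch. This is a deliberately naive model of the effect, not the actual statistics of the experiment; the idea is just that counting the algorithm as one more "decision-maker" in the room shrinks Jerry's perceived share:

```python
def perceived_share(n_humans: int, n_algorithms: int = 0) -> float:
    """Naive 1/n model of diffusion of responsibility: every
    decision-maker (human or algorithm) absorbs an equal share."""
    n = n_humans + n_algorithms
    if n == 0:
        raise ValueError("need at least one decision-maker")
    return 1.0 / n

# Four people in the room: each is perceived as 25% responsible.
solo = perceived_share(1)                            # 1.0
group_of_four = perceived_share(4)                   # 0.25
# Jerry alone vs. Jerry plus an algorithm: his share halves.
jerry_with_ai = perceived_share(1, n_algorithms=1)   # 0.5
```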
IS AN ALGORITHM LOWERING YOUR VALUE?
“I question the algorithm because it would have questioned me!”, says Dr. Renée Richardson Gosline boldly.
She admits that, judging by her profile and background, you would never expect her to be on that stage. She is the daughter of first-generation immigrants from a small, poor town in the Caribbean, the first person in her family to go to college, raised in Brooklyn.
Algorithms would never have predicted that she would be on that stage, that she would be at MIT, or graduate from Harvard. "So, I take this very personally. But you should too!"
Dr. Renée Richardson Gosline encouraged everybody to ask themselves: "What data point in your profile would make a rational algorithm lower your value? Gender, race, sexuality, pre-existing condition, age? Should all these be frictionless? Should all of these be cooking in the background, without you knowing it?"
PHYSICS APPROACH TO FRICTION
In physics, friction is neither good nor bad. Can we take a physics approach and ask where we can remove bad friction and where we can add good friction?, reflects Dr. Renée Richardson Gosline.
What if we looked at friction not in terms of volume (ever more frictionless) but as a journey with multiple touchpoints? At each touchpoint humans make decisions; we should identify the moments when those decisions should be frictionless and the moments when we should stop and think again (as with predictive policing).
What if we thought of automaticity in a new way?
What if we did not take the human out of it, but actually centered the human in these processes?
These are some of the important questions that everybody working in AI should reflect on.
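The journey-and-touchpoints framing can be pictured as a tiny audit over a decision journey. The journey, the stakes scale, and the threshold below are illustrative assumptions, not Dr. Gosline's methodology: high-stakes or irreversible touchpoints get deliberate (System 2) friction, the rest can stay frictionless (System 1).

```python
from dataclasses import dataclass

@dataclass
class Touchpoint:
    name: str
    stakes: int        # illustrative scale: 1 (trivial) .. 5 (life-changing)
    reversible: bool   # can the decision easily be undone?

def needs_friction(tp: Touchpoint, stakes_threshold: int = 3) -> bool:
    """Flag touchpoints that deserve added (good) friction."""
    return tp.stakes >= stakes_threshold or not tp.reversible

# A hypothetical journey mixing trivial and weighty decisions.
journey = [
    Touchpoint("choose playlist", stakes=1, reversible=True),
    Touchpoint("one-click purchase", stakes=2, reversible=True),
    Touchpoint("credit-line decision", stakes=4, reversible=False),
    Touchpoint("predictive-policing flag", stakes=5, reversible=False),
]

flagged = [tp.name for tp in journey if needs_friction(tp)]
```

The design choice mirrors the talk: instead of asking "how do we remove all friction?", the audit asks at each step "should this decision be slowed down?"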
At the end, Dr. Renée Richardson Gosline gave five steps to eliminate bad friction and add good friction:
Always question: Should AI be doing this? And even: can it?
Can AI predict accurately who is going to commit a crime? Spoiler: NO!
Should AI make life or death decisions? Answer: NO!
Look at the journeys, deconstruct and audit them, and ask which touchpoints would benefit from added friction. Analyze: when we can deploy AI, what AI can we deploy? And cut down on the bias.
Use model cards, a system developed at Google.
Read more about it here: https://ai.googleblog.com/2020/07/introducing-model-card-toolkit-for.html
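As a rough illustration of what a model card records: Google's actual Model Card Toolkit (linked above) generates richer, schema-backed cards; the plain data structure and field names below are simplified assumptions for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A stripped-down, illustrative model card: a structured record
    of what a model is for, what it was trained on, and where it fails."""
    name: str
    intended_use: str
    training_data: str
    known_limitations: list = field(default_factory=list)
    evaluated_groups: list = field(default_factory=list)

    def summary(self) -> str:
        return "\n".join([
            f"Model: {self.name}",
            f"Intended use: {self.intended_use}",
            f"Training data: {self.training_data}",
            "Known limitations: " + "; ".join(self.known_limitations),
            "Evaluated on: " + ", ".join(self.evaluated_groups),
        ])

# Hypothetical card echoing the skin-tone example from the talk.
card = ModelCard(
    name="skin-lesion-classifier",
    intended_use="Triage support only; not a diagnostic decision-maker.",
    training_data="Historical clinical images, skewed toward lighter skin tones.",
    known_limitations=["Lower accuracy on darker skin tones"],
    evaluated_groups=["lighter skin", "darker skin"],
)
```

Even this minimal version adds good friction: an executive buying the system can no longer claim "I have no idea how this works."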
Embrace friction in the form of inclusivity: only 6% care about diversifying their development teams to combat bias, yet research shows that heterogeneous groups are far better at these tasks.
Build a culture of experimentation and test the model before releasing it. Too often the model is released first, and then it is "oops, the algorithm did it!".
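A minimal example of such a pre-release test, assuming a hypothetical audit set and an invented tolerance (neither is from the talk): compare the model's approval rates across groups and block the release if the gap is too wide.

```python
def approval_rate(decisions):
    """Fraction of positive outcomes (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    """Largest difference in approval rate between any two groups."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical audit outcomes of the model on two groups.
audit = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],  # 80% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # 30% approved
}

gap = parity_gap(audit)          # 0.5: a large disparity
release_ok = gap <= 0.10         # block release beyond the tolerance
```

This particular check is demographic parity, the simplest of many fairness metrics; the point is that some such test runs before release, not after the "oops".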
Eliminate dark patterns, the manipulative tricks of interface design.
What is a digital lobster trap? A lobster trap works like this: you put the bait in, the lobster takes the bait and tries to leave, but it is extremely difficult to get out!
Digital lobster traps use AI to get people in easily, have them give up their data and privacy, and then make it hard to get out.
Companies use similar traps, for example with cookie preferences: accepting all cookies is easy, while modifying your preferences is difficult and time-consuming; once you want to opt out, they make it hard for you.
As general advice, when it comes to Human AI interaction, Dr. Renée Richardson Gosline advises to practice acts of inconvenience: “get up instead of having the surveillance tool do it for you”. In addition, demand better, more responsible AI.
Anything we develop should be human first. “Let's not program out the humans, but make AI human-centered.”
Dr. Renée Richardson Gosline's research examines how technology affects human judgment and performance (as featured in her TEDx talk, "The Outsourced Mind").
She conducts experiments on how the presence of AI affects the judgment of human decisions, and the interplay of human and AI bias.
Her projects have examined: how cognitive style predicts preference for AI versus human input; the interaction of digital brand status and placebo effects in performance; how consumers determine “real” from “fake” products; the circumstances under which customers perceive value in platforms; and the effects of storytelling in social media on trust and persuasion.
“The real problem of humanity is the following: We have paleolithic emotions, medieval institutions, and God-like technology” - E.O. Wilson
Tristan Harris, Co-Founder & President of the Center for Humane Technology, considers that, more than ever, we need wisdom to match the power of our God-like technology.
The complexity of the world is increasing, while our ability to respond is not keeping pace. At the same time, technology is both eroding our ability to make sense of the world and increasing the complexity of the issues we face.
The gap between our sense-making ability and issue complexity is what we call the “wisdom gap.”
FIRSTLY, technology is increasing the complexity of the world: principles we once agreed on acquire new nuances. Take "protect privacy": technology can now infer personality traits just from how people move their mouse. So what does privacy mean when the tech keeps moving?
SECONDLY, technology has been exploiting human weaknesses, undermining our ability to make sense of complexity. Technology can hack, and when it hacks into:
Social validation => we got influencer culture,
Outrage => we got polarization in society,
The dopamine system => we got addiction,
Confirmation bias => we got fake news,
Basic heuristics of what to trust => we got deep fakes.
Moreover, these are not separate dots; they are interlinked into a system whose parts mutually reinforce each other. Taken as a whole, that system decreases our capacity to make sense of the world.
Humans have limited cognitive ability. Our systems are also "cognitively impenetrable": even if you know how the magic trick works, you still can't escape your biases. If someone tells you "don't think of an elephant", you start thinking of an elephant.
Knowing the limits of how we work takes self-awareness, and humility about inquiring into how we work, so we can design technology in a more deeply ergonomic way:
Seeing in terms of systems and root causes.
Understanding the interconnectedness of these problems.
One powerful strategy, or leverage point, worth investigating is the power to transcend paradigms: examine the paradigm that brought us to this point. In the case of the tech industry, we could look at its model of what a person's mind is (and, as a former Google design ethicist, Tristan Harris has some experience here).
So, what do tech industry people think?
1 We give users what they want – sugar, doughnuts, etc.
2 Every tech has good and bad sides; that's just how things are. There have always been moral panics: about TV, about games making people violent, etc.
3 We want to personalise content; we want to give people the most relevant content.
4 Technology is neutral; it is not up to us, people choose how to use it. Who are we to choose what is good for people? We don't want to be the "Ministers of good".
5 We should grow at all costs, get those growth-hacking metrics working.
6 We should obsess over metrics and maximise what is working.
7 We should capture attention.
These are the operating beliefs of people working in the tech industry, highlighted Harris. But working this way, you get fake news, over-polarisation, and so on, precisely because you are giving people what they want. This is the direct digital fallout from those core beliefs; and if that was the paradigm that got us here, what will it take to upgrade it?
He recommends continuing to point out the failures of the old paradigm, pushing a new paradigm, and supporting the people who carry it.
Recommended Book: Thinking in Systems, by Donella H. Meadows.
What would a human technologist do?
Instead of just giving people what they want, we would respect human vulnerabilities.
We have to actively minimize harmful externalities.
Instead of personalising, we would need to create shared understanding; yet today, algorithms reward you with more followers when you are aggressive, not when you try to build bridges between people who disagree.
Instead of saying technology is neutral, we need to actively support fairness and justice.
Instead of saying "who are we to choose", recognise that when you are not choosing, you are effectively choosing a set of unconscious values that might take us where we don't want to go.
Instead of obsessing over metrics, we need to help people thrive.
These are just some core points, and each deserves more detail, but Tristan Harris and his team have developed a whole free course on humane technology.
In conclusion, Tristan Harris added a few things to the problem highlighted by E.O. Wilson:
“We need to embrace our paleolithic emotions, upgrade our medieval institutions, and have the wisdom to wield that God-like technology”