Designing for Trust: What Everyday Users Taught Us About the Future of Social Media
At the Internet User Behavior Lab (IUBL), our research typically spans large-scale datasets, literature reviews, and empirical studies. This time, we turned to a qualitative, human-centred design approach. In November 2024, we hosted a workshop with ten everyday internet users aged 18 to 44, balanced across gender and drawn from diverse ancestral backgrounds (Europe, Africa, North America, and Asia).
Our aim? To build on our call for algorithmic literacy by bringing users into a ThinkShop, so we could understand the kinds of online tools and environments everyday internet users actually want, and ensure our work has real-world, meaningful impact.
The discussions centred on trust, transparency, misinformation, and user control: key ingredients in building better online experiences. The insights we gained were both grounded and eye-opening, reminding us that while social media holds promise, it also demands responsibility, balance, and deeper accountability.
In a world where scrolling and sharing have become second nature, we rarely pause to ask: What kind of online spaces do we actually want? Or more poignantly: What kind of digital experiences do we want for the next generation?
At this workshop, participants (everyday users, not tech insiders) reflected critically and constructively on what social media could and should be. Their responses painted a complex, sometimes contradictory, yet deeply human portrait of the digital world we all inhabit.
What Did We Learn?
The workshop had two central goals:
To listen to participants’ hopes for the internet of tomorrow.
To gather feedback on six proposed features for a tool designed to make online content more trustworthy and inclusive.
During open conversation, participants raised strong concerns about the opacity of algorithms. Many wanted more agency and some imagined an AI “community node” that could follow them across the internet, helping distinguish fact from opinion. One feature, a “facticity report” ranking content by how evidence-based or biased it is, sparked interest.
A common theme emerged: a desire for digital spaces that support—not replace—real life. Participants called for more respectful dialogue, not just algorithmic amplification of what they already believe. One shared, “I no longer use social media for news,” while another emphasized that platforms still provide important visibility for marginalised voices.
Testing New Ideas: The MoSCoW Analysis
In the second part of the workshop, participants developed user personas to evaluate six potential Online Social Network (OSN) tool features conceptualised by IUBL, including transparency dashboards, behavioural insights, and fact-checking tools.
Using the MoSCoW method (ranking features as Must-have, Should-have, Could-have, or Won't-have), participants prioritised the proposals. Initial favourites included User-Controlled Filters and a Pop-Up Bot to flag misinformation. But as feedback evolved, the Facticity Rated Report emerged as the most valued feature. This shift may suggest that, with time and reflection, users increasingly favour tools that help build trust in the information they encounter.
One potential feature proved more divisive: real-time behavioural insights. Some participants found the idea unsettling ("It's scary to be that tracked"), while others welcomed the introspection, likening it to an end-of-year review. This range of reactions reflects how deeply personal our relationships with digital tools have become.
What’s Next?
The workshop didn’t give us a one-size-fits-all solution. But it did raise critical questions: How do we build tools that give users greater agency without overwhelming them? That inform without manipulating?
What became clear is the demand for transparent, user-driven, and trustworthy features. Our next steps are to refine these tools, test them with more diverse user groups, and keep building systems that reflect the complexity—and the humanity—of online life.
Final Reflection
One participant captured the workshop’s mood with striking clarity:
“We’re in a fishbowl. The fishbowl is dirty and polluted.”
That image stayed with us. Today’s digital spaces aren’t just chaotic; they’re murky. But perhaps by inviting users into the design process and truly listening, we can begin to clean the water.
This workshop gave us more than ideas—it gave us direction. It validated that we’re on a path that resonates with the very people these tools are meant to serve.
We appreciate that the methodology had limitations. Not all participants engaged with the exercises the same way, and some feedback required careful interpretation. But the richness of the qualitative insight surpassed what a standard survey could offer.
Looking ahead, our goal is to create tools that don't just inform, but that include, heal, and restore agency to everyday internet users. To digital citizens.
A Philosophical Afterthought
The 17th-century Czech philosopher Jan Amos Komenský (Comenius) wrote:
“All the world is a school—and all humans merely pupils.”
He wasn’t just speaking pedagogically; he was imagining transformation. Comenius aimed to show how the pupil becomes a “speaking mirror of totality.”
Nearly two centuries later, Kierkegaard warned in 1846:
“A revolutionary age is an age of action; ours is the age of advertisement and publicity. Nothing ever happens, but there is immediate publicity everywhere.”
That date, 1846, gives one pause for thought. Nearly two centuries on, we are still grappling with the same tension: what happens when the media shapes perception more than reality does?
Returning to today, we might borrow from psychotherapeutic language. The idea of “cleansing one’s algorithm” feels less like a digital detox, and more like a healing act.
Personally, I often want to blame something when my algorithm goes awry. But what if it’s not something out there that’s to blame?
What if my digital trace, my likes, habits, and interactions, are a reflection of the totality I’ve helped construct?
Let us leave you with a paradox: Is social media disliking what one likes?
By Theo Richardson-Gool
On behalf of the Internet User Behavior Lab (IUBL)
Read IUBL’s pre-publication on SSRN: A Call for Promoting Algorithmic Literacy, by Bryan C. Boots, Alex Krause Matlack, and Theo S. Richardson-Gool.