Phase 1: Co-creation

A group of six people in a well-lit meeting room, gathered around a large rectangular table. Laptops, microphones, and other devices are scattered on the table, suggesting a collaborative work session. A guide dog is resting on the floor next to one participant. The group appears to be engaged in a discussion, with one person standing while others are seated, listening attentively.

In the work leading up to our first co-creation session with Chris, Bruno and Raphael, our team was busy pulling together the first prototype of a spatialized audio translation of a website and imagining how exactly to try “co-creating.” Up until that meeting on the 9th of August, Alyssa Gersony’s role was to reach out to artists, disability organizations and others in the field of assistive technology and to explain our curiosities and research questions while staying open to what might emerge in the process. Through Constant vzw in Brussels, an institutional partner on the Screen-to-Soundscape (STS) project, an incredible network of arts workers began taking shape around the project’s mission: Staging Access (an accessibility consultancy based in Belgium), Josefien Cornette (artist, researcher, writer), the European Disability Forum (advocacy organization in Brussels), Eqla (a blind and low vision training center and organization in Wallonia), the Flemish Radio and Television Broadcasting Organisation (VRT), among many others. The overwhelming response to the project ignited excitement and enthusiasm about working together across the fields of art, technology, disability, advocacy and design.

A close-up shot of a participant seated at the table, holding a smartphone and interacting with the device. Other attendees can be seen in the background, focused on their work. The table is filled with various items such as laptops, headphones, water bottles, and snacks.

From this excitement we managed to speak with three screen reader users who were interested in joining us for more experimental prototyping sessions: Bruno, Chris and Raphael. We brought in a French interpreter and prepared the co-creators for an unconventional approach to “testing,” which is typically formulated around focus groups and big technology companies. Transparency has been a priority for our core team since the start of the project, and staying honest about “not knowing” the direction of the project luckily didn’t turn them away. We decided to meet in a town called Louvain-la-Neuve for our first session - a slightly obscure choice, and one with a unique Belgian history - but it allowed a short commute for Bruno and Chris, and gave us a space that provided the wheelchair access we needed.

After the hustle of finding a space and recruiting co-creators, we were ready to try out our first co-creation session. Dan and Colette, two designers/engineers on the STS team, guided our group through a series of experiences: a choreography of listening, demo-ing, critiquing and learning more about the nuances of screen reader usability. We followed a deep listening practice of “making wind” guided by Colette that dropped us into our bodies and connected our voices together in an improvised soundscape. We learned how Bruno uses single-hand keyboard commands to navigate iOS’ built-in screen reader “VoiceOver,” how Raphael relies on blended (visual and auditory) access to his smartphone, and how Chris uses an iPad and keyboard setup as another possible device combination.

A participant in a blue shirt types on a laptop, while two others stand beside him, observing the screen and discussing the content. The atmosphere is collaborative, and there are several laptops, notebooks, and water bottles scattered around the table.

As a vision rehabilitation therapist who has often worked on increasing access to assistive technology for people who are blind and visually impaired, Alyssa was struck by the range of approaches taken by Raphael, Bruno and Chris in their individualized use of screen readers. She took this as a reminder of the need to address subjectivity in the context of designing an “open source” tool, which is part of the vision and mission for this project. Looking toward the next co-creation session, her interest would be to keep leaning into our intersubjectivity and to continue cultivating curiosity toward everyone on the team: What are our desires with this technology and why? How do we relate to the content of the information being centralized in this project? Where do our daily jobs and specific interests intersect with questions of access and disability advocacy?

The ah-ha! moment arrived toward the end of the session when Chris began speculating about how an immersive audio experience could be particularly useful for imagining spatial or visual information online - a branch of content that doesn’t fare well with screen readers. In my own experience of working with different researchers in orientation and mobility, I knew that spatial information is some of the hardest to come by (so, by extension, it made sense that screen readers would run into the same gap). Chris spoke about the need for understanding the layouts of maps and the textures of images, which elicited excitement from others on the team: Could we create this navigational system for map descriptions? Could we program soundscapes into images as alternatives to alt-text descriptions, or in addition to them? Raphael and Bruno both spoke of “feeling lost” in the prototype: it had no way of announcing boundaries, so as a user you never knew when you’d hit the end of the page. They advocated for more description and more auditory cues, explained to the user so that they could make use of the prototype’s deeper functionalities. These were clear openings that we took forward into the next draft of development with the prototype and our approach to co-creating.
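To give a concrete sense of the kind of auditory cue Raphael and Bruno were asking for, here is a minimal sketch of a spatialized “edge of page” earcon using the browser’s Web Audio API. The function name, the tones, and the pan convention are our own illustrative assumptions, not features of the STS prototype.

```typescript
// A minimal sketch (not the STS prototype's code) of a spatialized
// "edge of page" cue: a short tone whose pitch and pan position tell
// the listener which boundary the reading cursor has reached.
const audioCtx = new AudioContext();

function playBoundaryCue(edge: "top" | "bottom"): void {
  const osc = audioCtx.createOscillator();
  const gain = audioCtx.createGain();
  const panner = audioCtx.createStereoPanner();

  // Illustrative convention only: higher tone panned left for the top
  // of the page, lower tone panned right for the bottom.
  osc.frequency.value = edge === "top" ? 660 : 220;
  panner.pan.value = edge === "top" ? -1 : 1;

  // A quick fade-out keeps the cue short and non-intrusive.
  const now = audioCtx.currentTime;
  gain.gain.setValueAtTime(0.4, now);
  gain.gain.linearRampToValueAtTime(0.0001, now + 0.25);

  osc.connect(gain).connect(panner).connect(audioCtx.destination);
  osc.start(now);
  osc.stop(now + 0.25);
}

// e.g. call playBoundaryCue("bottom") when keyboard navigation moves
// past the last readable element on the page.
```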

A participant in a pink striped shirt sits at the table with a laptop, speaking or presenting something to the group. Two other participants stand nearby, listening intently. The table has multiple laptops, a water bottle, and some snacks, indicating an ongoing work session or presentation.

The same group of people continues their discussion, but from a different angle. One participant is holding a smartphone and showing the screen to others, who are gathered closely around. Laptops, notebooks, and microphones are visible on the table, indicating an interactive and hands-on meeting.

On September 20th, Colette, Dan and Alyssa co-facilitated our next co-creation session, which we decided to begin with a question: Can you share a story of a time in your life when you experienced disorientation, or when you were lost? With this prompt, our intention was to open up a round-table discussion where each member of the team could explore the role navigation plays in our lives. We were interested in this intersubjective exchange as a way to draw connections and parallels between virtual and physical space - and to notice themes around how, why or when disorientation occurs for each of us. We spoke about the lack or presence of light, the languages we speak and understand, familiar or unfamiliar spaces, and how the specifics of each informed when or why we got lost.

Pairing this with our second prototype draft, which included more audio landmarks and keyboard commands, Raphael, Chris, Bruno (and a new team member, Joris) tested and provided more feedback. Although they were impressed and validated the new updates, a lingering critique surfaced: hearing multiple texts read at once, no matter how “far” away they were spatially, created too much auditory complexity and made it difficult to focus on the primary text being read. This critical observation took us into deeper reflection with our core group around questions that had been surfacing since the start of the Processing Fellowship: To what extent are we prioritizing aesthetics over function - and at which point do they overlap? How can working with spatial information (rather than linear, list-based texts) enable us to make a practical contribution to technological access for people who are blind or visually impaired? Rather than reinventing screen reader navigation, how can we reorient it to include add-ons or additional reading features?
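One direction this critique could point toward - sketched here only as an assumption, not a decision the team made - is “ducking”: keeping every page region in its spatial position while sharply attenuating everything except the text currently being read. A minimal Web Audio sketch of that idea, with illustrative names, might look like this:

```typescript
// Illustrative sketch of "ducking" secondary audio sources so the
// focused text stays legible. Each page region routes through its own
// GainNode; regions that are not in focus are ramped down rather than
// muted, so the spatial layout remains audible. All names are hypothetical.
const ctx = new AudioContext();

interface Region {
  id: string;
  gain: GainNode; // per-region volume control, connected to ctx.destination
}

function focusRegion(regions: Region[], focusedId: string): void {
  const now = ctx.currentTime;
  for (const region of regions) {
    const target = region.id === focusedId ? 1.0 : 0.15; // duck the rest
    region.gain.gain.cancelScheduledValues(now);
    region.gain.gain.setValueAtTime(region.gain.gain.value, now);
    region.gain.gain.linearRampToValueAtTime(target, now + 0.2);
  }
}
```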

A group of four people are seated around a meeting table in a well-lit room, working collaboratively. One person in a light blue and yellow shirt is leaning forward, focused on a laptop in front of them. Another participant is pointing at their own laptop, while others are watching attentively. The table is filled with various items such as glasses, a microphone, a water pitcher, and a bowl of snacks, indicating an active and engaging work session.

Five people are gathered around a table in a meeting room, with one participant in a striped shirt actively working on a laptop. Another person stands close by, assisting or observing, while others are seated and engaged in discussion. A smartphone is placed on the table, and several glasses, a bowl of snacks, and a water pitcher are visible, suggesting a collaborative working environment. One participant in the foreground appears to be taking notes or capturing something with their phone.

A view from behind, showing a participant in a pink shirt working on a laptop, with another person standing nearby and looking at the screen. The table is filled with other laptops, papers, a water bottle, and a snack plate, capturing an ongoing team discussion or review.