
Accessiversity Blog

How the Sausage Gets Made

On July 20, as part of the second day of this year’s Sakai Con virtual conference, I was asked to lead one of the general sessions focused on accessibility in Sakai.

Billed as a “hands-on session to demonstrate how to perform common tasks on the Sakai LMS using keyboard navigation to simulate the experience of screen reader users, while describing how regular keyboard and screen reader testing continues to serve as an integral part of Sakai’s accessibility strategy,” my goal was to facilitate a 30-minute session that would be both interactive and informative. For the most part, I believe that’s what I was able to deliver, even if it wasn’t as easy as I would have liked, and things didn’t exactly go as planned.

You see, as a person with a disability, pulling off a presentation like this requires a ton of planning and prep work, much of which goes unknown and unseen, lost on the oblivious conference attendee who cares not how the sausage gets made.

I, however, as the maker of the sausage, care very much about its quality control, as well as not looking stupid in front of an international audience, so I go to great lengths to prepare myself, know my material inside and out, and try to think through every possible contingency, recognizing how unpredictable live, virtual presentations can be, and how quickly things can fly off the rails.

While the information I presented on keyboard navigation provides some great insight into what it’s like for blind/low-vision users to interact with web sites like Sakai, and I of course encourage people to check out the recording of the session, for this blog, I actually want to focus on everything that happened in the weeks and days and hours leading up to Sakai Con, to give you an idea of all of the planning and preparation that goes into pulling off a presentation like this.

If you were not able to attend Sakai Con and/or missed my general session on accessibility in Sakai, recordings from the SakaiCon general sessions and showcases can be found on the Sakai YouTube channel! Check out this playlist of all the SakaiCon 2022 recordings.

2 Months Out

I first learned about the plans for the accessibility session at Sakai Con way back on May 18, the day I registered for the conference.

As I was scrolling through the Eventbrite page, I zipped through the conference schedule and made a mental note about the general session focused on accessibility in Sakai, suspecting that I might be asked to be part of the session.

A few hours later, my colleague Shawn and I received an email from the chair of our Sakai Marketing Committee, Josh, asking if we would jointly lead a Sakai Con session focused on accessibility, to which I jokingly replied that I had come across the information about the session on the conference schedule, and had wondered what suckers they could have possibly lined up to facilitate that session.

The purpose of the Sakai Con session, as it was explained to us, was to provide a fun learning opportunity that focuses on best practices, with the goal of being understandable to Sakai newbies while offering some cool tips for Sakai experts.

Now, being that Sakai Con was going to be a virtual conference, one of the first things I asked the conference organizers was what web conferencing solution we would be using to present. Of course, I was secretly hoping it would be Zoom, since I find it to be one of the more accessible, user-friendly solutions out there, especially because it has a bunch of super handy keyboard commands, for example, to turn screenshare on and off, which is something I figured I would need to be able to do as a co-presenter. The point here is that as a screen reader user, you can’t ever afford to take technology for granted; it’s another unknown, this other layer of complexity that you have to plan for, right from the start.

With the technology question seemingly addressed for the time being, I shifted my focus to coming up with a topic for our session.

My initial thought was to have Shawn, Gonzalo, and me (as representatives of the Sakai “WAM-A11Y” team) co-present. We would do an interactive A11Y quiz: basically, we would come up with several scenario-based user experiences, each with maybe three options for how to approach the accessibility/usability of a particular tool/feature. We would present the topic along with the three different options and ask people to “vote” on which option they thought was best from an accessibility/usability perspective. Then we would play short, pre-recorded videos to demonstrate how each of the proposed fixes works (or doesn’t work) for screen reader users. But after discussing the idea with Shawn and Gonzalo, we came to the conclusion that this might be too technical for the typical Sakai Con attendee, so we decided to go back to the drawing board.

Next, we came up with a concept that would highlight how the community-sourced nature of Sakai and its accessibility efforts have improved teaching and learning for the Sakai LMS. While the involvement of the accessibility group in the development and design of new tools and features is a compelling story, we ultimately decided to go in another direction, because this just seemed like your standard-issue presentation, lacking any sort of interactive component or creative flair.

Around this same time, work really started to pick up for Shawn and Gonzalo at their respective institutions, so I told them that I would take one for the (“WAM-A11Y”) team, that I would let them off the hook and handle the Sakai Con presentation by myself.

That being said, heading into the Sakai Con brainstorming session that had been scheduled for June 16, I still only had a half-baked idea for an “Accessibility 101” session that I would facilitate by utilizing the Sakai Tests & Quizzes tool.

So, just to recap, we were less than a month into our planning, and I was already on my third or fourth different concept, with no definitive decision yet about what direction we should take for our session.

1 Month Out

My latest concept received a lukewarm reception from participants of the Sakai Con brainstorming session, which is just about what I expected for an idea that was still only half-baked. But before our Zoom meeting had even ended, my colleague Chuck was texting me, saying that he had an idea that he wanted to float by me.

Chuck’s idea, which is the concept that we ultimately decided to go with, was to facilitate a hands-on session focused on basic keyboard navigation. While we had apparently solved our biggest problem, coming up with the topic of our session, we (and by “we”, I mean “I”) still needed to develop an overall approach for presenting the material, to properly frame the topic and provide context for how keyboard navigation fits into our overall accessibility strategy for Sakai.

So I got to work creating content and developing a basic framework for my presentation.

I knew that the main part of the session would be an interactive demo focused on keyboard navigation, but I felt that I needed to start by properly setting things up for the audience.

Somehow, I had to connect all of the dots and explain how blind/low-vision users rely on screen readers, which means they almost exclusively have to use keyboard navigation to interact with sites like Sakai. Once I drove this point home, then hopefully attendees would be able to use the basic keyboard commands I was showing them to simulate the experience of a blind/low-vision user for themselves. At least, that was my working theory.

First, I needed to cover some screen reader basics.

I was operating under the assumption that many of the conference attendees would have little to no knowledge about assistive technology, so I started to write as if I was explaining screen readers to someone who knew absolutely nothing about them.

Several paragraphs later, I had constructed a rather detailed summary of screen reader technology, but in the process, I also created these lengthy passages that I would have to somehow memorize.

But then I got an ingenious idea.

I would start by adding the text to the Try Sakai test site that I would be using for my demo. Then with the computer audio enabled in screenshare, I would use Zoom to pre-record a video of JAWS (who else?) presenting the screen reader basics portion of the session. Then, on the day of the conference, I could simply open File Explorer on my desktop and click on the video file that I would have all cued up and ready to play.

I was especially proud of myself for coming up with such a clever misdirection. Not only would this get me off the hook so I wouldn’t have to figure out how to regurgitate a sizeable chunk of material, but the JAWS-narrated video would also give conference attendees at least a little taste of what the average screen reader user goes through.

The final touch was to write up JAWS’ part to make it seem like I was handing things over to an esteemed colleague, as if my screen reader program was just another human helping to co-present. I wanted his A.P. (Artificial Personality) to shine through, making sure that I had him regularly refer to himself in the first person, and even going as far as to have him crack a joke at my expense.

After recording several versions of the JAWS screen reader basics video, I decided to trim down the content, worrying that people would start to zone out after listening to JAWS’ hypnotic voice for more than a few minutes.

It’s worth noting here that over my nearly three decades of using the JAWS screen reader, I have never adjusted the speaking rate, or verbosity level, or any other default setting that comes standard with the out-of-the-box software. I use the same synthesized JAWS voice today as I did way back in 1995 when I was first introduced to him, despite the product evolving over the years to offer multiple synthesized voice options, similar to how consumers can now choose a different voice for their in-home virtual assistant or the GPS navigation in their car. So why do I do this? Well, for one, in nearly three decades of knowing him, I have grown accustomed to his measured, monotone way of speaking, so when I listen, I’m not hearing some third-rate robotic speech, I am hearing the voice of a friend.

The other reason for doing this is more strategic. Ever since I got into doing accessibility consulting and testing, I have felt it was important to focus on the lowest common denominator, whether testing with JAWS to conduct an assessment, or creating a video to demonstrate how to use JAWS to perform some common task on a web site, the thought being that most average users make very few changes, if any, to the out-of-the-box settings for their assistive technology. So, as it relates to the Sakai Con presentation, I wanted conference attendees to get to know the JAWS that I know, with all of his quirks and imperfections.

I ended up choosing to remove the last part of the screen reader basics segment from the JAWS video, which cut it down to about 3 ½ minutes, opting to instead cover the additional material myself before I moved on to the interactive portion of the session.

Coming up with a series of activities for the interactive portion of the session proved to be the hardest part to plan, mostly due to the logistics of having multiple Instructors trying to manipulate the same Course site in Sakai at the exact same time, which was along the lines of what I was originally proposing to do. But once I concluded that it would be easiest to have users logged in as Students on the demo site, it just became a matter of thinking through some of the common tasks performed in Sakai by Students and deciding which ones to try and highlight.

3 Weeks Out

With only three weeks until Sakai Con, I was still communicating back-and-forth with the conference organizers to work through some technical logistics, which is par for the course when planning a virtual presentation like this.

The last remaining technical hurdles were to have the site admin create a test Student account for me to use during the interactive portion of my session (my other user account was already set up with Instructor privileges so that I could create content) and to ensure the user IDs for all conference attendees would be able to access the sample A11Y site that we would be using on “Try Sakai” for our demo.

Next, I got to work building out the content for the sample A11Y site.

As far as Sakai sites go, I chose to keep mine pretty basic, deciding to focus on the bare minimum I would need for my demo.

I created a Lessons page to house the text that JAWS would need to read for his screen reader basics video, and then included a bunch of additional accessibility-related information and resources that I thought conference attendees might find useful.

I also used the Tests & Quizzes tool to create a simple, four-question assessment that I planned to have conference attendees follow along with during the demo.

My intention was to keep things super simple while trying my best to not overwhelm anyone. For the demo, I thought it made sense to start by highlighting several of Sakai’s built-in keyboard shortcuts, and then showcase a few other tools if possible.

While this all seems pretty straightforward, it was more complicated than you would think, as I had to carefully script things out and ensure that everything worked as described.

2 Weeks Out

By early July, I had a general idea of the things I wanted to focus on during the demo.

But just when I was starting to feel confident, maybe even a little complacent, doubt started to creep in, forcing me to re-examine my entire approach.

The cause for my mounting consternation was how to account for varying user experiences, knowing that different Operating Systems may require different keystrokes for performing the same keyboard commands.

For instance, I recognized that the keyboard commands I perform as a Windows user on my PC differ from those used by people on Macs. The bigger problem was that I hadn’t been able to spend enough time on my MacBook Air laptop to assess exactly how much these keyboard commands differ, and I just wouldn’t have the time to do a deep dive on the Mac to figure it out.

Luckily, I had an ace in the hole, a Mac user versed in screen readers and keyboard navigation that I could turn to for help.

I reached out to Gonzalo and set up a time to meet with him on Zoom. I walked him through everything I planned to cover during my interactive session, asking him to verify that he was able to complete the tasks using keyboard navigation on his Mac, while having him point out any differences in the specific keystrokes used to perform each of the keyboard commands.

Gonzalo astutely predicted that some Mac users, particularly those on older Operating Systems that maybe haven’t been updated in a while, might find that they need to go into their settings to “enable full keyboard access” to replicate portions of the simulation. Still, we came to the conclusion that it would be futile to try and plan for every possible variable, and decided to instead just come out and say that the demo was primarily geared toward Windows users, which is ultimately what we did.

1 Week Out

With just a handful of days left before the conference, and sensing that the finish line was near, I was finally ready to pull everything together.

There’s no secret as to how someone like me, who is statutorily blind, has to prepare for presenting 30 minutes’ worth of content. To me, it all boils down to repetition.

Of course, it’s not difficult to ramble on for half an hour; those of you who know how chatty and verbose I can be would agree that I am more than capable of filling 30 minutes of air time. The trick is talking non-stop for half an hour while staying on point and touching on all of the things that you need to cover. For a blind person, pulling off a feat like this is easier said than done.

Once I had my main talking points scripted, and the other parts of my presentation in place, I started using Zoom to record myself doing a dry-run of the presentation.

This turned out to be a long, time-consuming process. Each time I created another video, I would have to wait 10-15 minutes for Zoom to convert the recording, before I could watch the playback to verify that screenshare was working properly, identify any parts of the presentation that would need to be tightened up, and make sure I was keeping the total presentation to approximately 25 minutes to leave time at the end for Q&A.

Between recordings, I also had to go back into the sample A11Y site and reset the questions in the assessment I was demoing so that it would be ready for the next take.

Take after take, I got progressively better at presenting the material, but still I struggled to touch on all of my key points without inadvertently leaving something out.

That’s when I came up with my next Wile E. Coyote-worthy idea…

In the past, I have tried leaving an MS Word document open in the background that I would toggle over to for referencing my talking points, but this was always an awkward solution, and never seemed to work as well as I would have liked.

Because I would be using my headset for the presentation, I figured out that I could conceal one of my Apple AirPods beneath the bulky, foam earpad of my headset. In this way, I could have a version of my talking points pulled up on my iPhone, so that I could use my left hand to discreetly swipe line-by-line through my notes to prompt me on the next point I would need to cover.

This relatively simple hack is one of those things that, in retrospect, makes me wonder why I hadn’t thought of it sooner. For whatever reason, compartmentalizing the different presentation functions across different technologies worked perfectly. Doing it this way allowed me to listen to JAWS through the left earpiece of my headset as I used my computer keyboard to drive the main presentation, while I simultaneously swiped away at the notes on my iPhone to make VoiceOver speak through the small wireless AirPod neatly concealed in my right ear.

3 Days Out

After ten complete takes of running through the full presentation, I felt I had sufficiently practiced my spiel.

Before heading out to Dr. Chuck’s lake house to partake in some of the pre-conference festivities, I took care of some last-minute housekeeping items.

I created a Google Drive folder and uploaded one of the better versions of the dry-run video, along with the JAWS screen reader basics video and a copy of my talking points. Then I sent the conference organizers an email invite to grant them access to the folder and materials, just as a backup in case something happened that would prevent me from being able to present the material live on the day of the conference.

I also took advantage of this last opportunity to ask whether there was a way to disable the chat feature in Zoom, as I was worried that listening to a bunch of messages coming through the chat might be too distracting as I was trying to concentrate on my presentation.

3 Hours Out

On the morning of my session, I methodically worked through a series of mental checklists.

First, I had to make sure all of my devices were fully powered up.

Next, I verified that my laptop was still able to connect to the University of Michigan’s guest network.

I started to get things situated in the adjacent classroom where I would be presenting, while making several trips to the bathroom down the hall to empty my bladder of that morning’s large Starbucks coffee.

Finally, I made arrangements to have one of my colleagues hang out in the room with me, just in case I ran into any technical problems that I would need their help with.

10 Minutes Out

With just ten minutes before the session was scheduled to begin, I clicked on the link that had been emailed to the panelists and entered the Zoom meeting room for the very first time.

To my relief, I was immediately joined by Derek from Longsight, who was providing the back-end support for the session.

I quickly went through a series of checks with Derek to make sure that I was able to work screenshare, and that he could see my computer screen, and hear my JAWS screen reader when the computer audio in screenshare was enabled.

I double-checked that I had File Explorer opened in the background with the JAWS screen reader basics video cued up, and another window opened with the Try Sakai site.

Lastly, I made sure that I had closed out of Outlook and any other programs that could be disruptive if they were running in the background.

And then, after all of the planning and preparation, it was finally time to present.

Lights, Camera, Action

I would like to say that my presentation went off without a hitch, that everything worked perfectly, but that wasn’t the case.

Despite all of the planning and preparation that I had put in, there were still problems, and that’s kind of the point.

You can’t plan for every possible contingency; there are just too many things that are out of your control.

What you can do is focus on those things that you CAN control, so that when things start to fly off the rails, you can keep your cool, adjust and improvise, and above all else, avoid looking stupid.

Admittedly, it’s not perfect, but it’s an approach that I have found works for me.

It’s not about subscribing to some foolproof formula for success; it’s a much messier process than that.

It’s just how the sausage gets made.

Andrea Kerbuski