Since our target users for SylViA are the professors and students at SIMS, it was impossible for us not to recruit friends, or at least acquaintances, as testers. But we did make an effort to choose testers we know less well. We chose two professors, one more technical and one less technical, as well as one student, none of whom had been subjects for earlier parts of the project. The student has experience both with content management systems like SylViA and with usability testing writ large. Neither professor had much experience with similar applications or with prototype/usability testing.

One caveat: we were so eager to get started with testing that we neglected to review the assignment carefully beforehand. As a result, we failed to get written consent forms from our subjects. We felt it was better to admit this than to get forms signed ex post facto. We promise to use consent forms in all future tests.
Our first task was to have the users log in; we felt a simple opening task would get everything off on the right track. Then, once they were looking at the My SylViA home page, we asked them to add a syllabus. This involved clicking the add syllabus button, choosing "new syllabus" on the next page, and then entering the required course information such as title, number, and department listings. Once that was completed, they were taken to the schedule/syllabus screen. From there, we wanted them to add information about the professor, which first required that they locate, in the left nav bar, the link to view course info rather than the schedule. Once on that page, they would find the add teaching team member button and go through that editing process.
Next, we asked them to return to the schedule view and enter titles for their two topic areas. For model and architecture reasons, SylViA enters default topics when a syllabus is created, so the user had to figure out that these could be edited. We then had them enter a class title and lecturer information for one class in each topic area, and add readings to each class, which required that they first select a reading type from the main schedule page (since different reading types include different fields on the form). To let them experience our conflict handling for class rearrangement, we posited that they had to reschedule the second class to the day of the first. Next, we had them move a reading from one class to another. Finally, we asked them to save their information without making it live, testing the "save a draft" versus "save and publish" buttons.
When our testers arrived, Lisa greeted them, thanked them for their time, and explained the prototype testing process. Carolyn, the computer, had her screens spread out on a side table in some semblance of order. Sarai, our tragically ill note taker, sat a bit further back with her laptop in an attempt not to spread germs. Lisa introduced the testers to George Wilson, the persona on whom the prototype was based. This was necessary because our interface is very data-input intensive, and we had predetermined the course, professor, reading, and other information so that our prototype screens could flow predictably. Once we had introduced the persona and scenario, Lisa began to walk the testers through the script; we did not give them a copy of the script to read themselves.
Each session evolved in a distinct way. Our first tester, the student with usability testing experience, reliably stuck to the script, thought aloud, and elaborated on ideas. This user also tended to scan the whole page whenever a new one was loaded, giving us feedback on general layout and not just the task at hand. Our second tester, the less technical professor, was very agreeable and very fast, moving through tasks quickly and focusing only on the areas that supported each task. From this tester we got fewer suggestions for changes, but the overall impression of the system was positive, and we were pleased that a user focused on efficiency could navigate the system with few major hurdles. Our final tester, the more technical professor, was the least committed to the script but gave us the most ideas for changes, not just to layout and navigation but to the interaction flow writ large. This user often triggered our 404 page by clicking on buttons we had not built out in the prototype. When this user asked to deviate from the script, we decided to see what the user would do. This proved very productive: we were ultimately able to support most of the actions with our prototype, though in a different order than we had envisioned.