Everybody makes mistakes. That’s just a fact of life. And as we all know, even small mistakes can have big consequences. That’s why we believe testing is an essential part of our e-learning development process here at The Learning Hub: it helps us avoid mistakes as much as possible.

Different testing moments

Agile testing is our way of ensuring that every e-learning module we deliver is of the same high quality. From the very beginning of a project, every part gets tested or reviewed by colleagues before it’s sent out to the client: from storyboard through style proposal to the actual developed e-learning module. Ever since we started using this agile strategy to develop e-learning, there have been a lot more testing moments in our projects.

In our development process, we distinguish five different testing moments:

  1. Developers test their own work
  2. Colleagues test each other’s work and provide feedback
  3. Developers present their results in a general review meeting
  4. Clients review the delivered content
  5. Clients perform a pilot test in their organization

Steps 1 through 3 are part of our internal quality assurance system. The agile development team decides in a weekly review meeting whether the quality is on par with our standards. If so, we send the content to the clients for further testing as an external quality check.

Test scenarios

Think of an e-learning module as a large container with various components, such as:

  • a style
  • a story
  • buttons and navigation
  • different layers (a subpage that isn’t visible at first, but pops up after clicking on a hotspot)
  • feedback texts and visualizations
  • scoring
  • progress tracking (in the module but also externally, for example on an LMS)

It is impossible to test each page entirely while taking all of the components mentioned above into account. As a solution, we work with different test scenarios, which allows us to test in several rounds. The scenarios might differ per project type, but the main idea remains the same.

For example, there is a test scenario where we only focus on navigation. In this scenario, the testers don’t look at the global style, the texts, the questions or the progress tracking. No, they single out one component and go completely wild while testing it. They click the buttons back and forth – similar to the procession of Echternach: two steps forward, one step back.

Another tester focuses only on the textual aspect: is the story logical, is it easy to read and comprehend, and is it free of spelling and typing errors?

When the e-learning contains a test, we typically include three extra scenarios:

  1. The good flow: we only select the correct answers – the focus is on the scoring and whether the correct feedback text appears.
  2. The bad flow: as you might have guessed, we only select the wrong answers in this scenario – the focus is on the scoring and whether the matching feedback text appears.
  3. The random and slightly crazy flow: some bugs only appear when you perform illogical actions. For example, clicking outside a text box twice and then clicking another text box might give you a strange result. In this flow, the tester goes all out and clicks absolutely wherever he or she wants on the page. Answers are selected randomly, but we keep track of the number of correct and incorrect answers in order to verify that the scoring works properly (a small sketch of that bookkeeping follows below).
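
To make that bookkeeping concrete, here is a minimal, purely illustrative sketch in TypeScript of what a tester tracks during the random flow: an answer is picked at random for every question, the expected score is tallied along the way, and at the end it is compared with the score the module reports. The types and names (Question, reportedScorePercent and so on) are hypothetical – this is not the Storyline API, just the logic of the check, which in practice can be done with a simple tally sheet.

  // Illustrative sketch only: hypothetical types, not the Storyline or LMS API.
  interface Question {
    id: string;
    options: string[];
    correctOption: string; // taken from the storyboard's answer key
  }

  // Pick a random answer for every question and tally the expected outcome.
  function runRandomFlow(questions: Question[]) {
    let expectedCorrect = 0;
    for (const q of questions) {
      const chosen = q.options[Math.floor(Math.random() * q.options.length)];
      if (chosen === q.correctOption) {
        expectedCorrect++;
      }
    }
    const expectedIncorrect = questions.length - expectedCorrect;
    const expectedScorePercent = (expectedCorrect / questions.length) * 100;
    return { expectedCorrect, expectedIncorrect, expectedScorePercent };
  }

  // After the run, compare the tally with the score the module (or the LMS) reports.
  function scoringLooksCorrect(expectedScorePercent: number, reportedScorePercent: number): boolean {
    // Allow a small tolerance for rounding in the module's score display.
    return Math.abs(expectedScorePercent - reportedScorePercent) < 0.5;
  }

The same comparison works for the good and bad flows: there the expected score is simply 100% or 0%, which makes those rounds the easiest ones to verify.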

Prototypes & feedback

We strive to create a functioning prototype as early as possible in the project.

In the beginning, the prototype contains only a small part of the e-learning. Each prototype is tested separately and in combination with the previously developed prototypes. In most cases, we create the content in Articulate Storyline and use the Review Tool to gather the testers’ feedback. This review tool allows colleagues and clients to (re)view the e-learning and simultaneously leave comments. The tool provides us with a good overview of all comments per slide and makes implementing the feedback a lot more time-efficient.

Involve your end user

When developing an e-learning, it’s important to keep your target audience in mind. Their opinion of the e-learning is what will decide whether it is a success or not. That’s why we believe it’s important to involve the end user in the process. When you have the opportunity to run a pilot test with a small group of end users that is representative of the complete population, it’s a must! They will give you valuable feedback on whether the e-learning matches their needs, which is crucial for a positive end-user experience.

Conclusion

We are aware that testing is sometimes perceived as time-consuming, redundant or even unnecessary. However, based on our experience, we can guarantee that it is absolutely worth the effort. If you break the testing process up into smaller scenarios, you can be sure that every component of the e-learning gets tested, and each test round will take a lot less time than testing the entire module in one go.

Another nice benefit of testing someone else’s work is the enhanced involvement of the entire team. You also learn a lot by looking at someone else’s product and receiving constructive feedback.

When you take testing seriously, common mistakes become a rarity – we promise.

Have fun testing!
