In general, aiming for compatibility with multiple browsers;
Firefox, Chrome and PhantomJS to be more specific.
* Removing hardcoded waits
* Adding @_alert, @_switch_window and @_switch_frame tags to label
actions that different drivers have problems with.
* Adding missing @_files_upload and @_only_local tags to features that
upload files.
* Fixing a few wait-for-page-ready calls that specified milliseconds.
* New methods to ensure elements (the usual selectors), sections and
editors are ready to interact with (see the sketch after this list).
* Changing the select-an-option implementation to deal with the
different drivers' implementations when listening for JS events.
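
A minimal sketch of the kind of readiness helper meant here, assuming
Behat/Mink's Session::wait() and NodeElement::isVisible(); the method
name ensure_node_is_ready, the timeout and the find() wrapper are
illustrative, not the actual Moodle API:

    // Hypothetical readiness helper inside a behat context class; the
    // real Moodle method names and timeouts differ.
    protected function ensure_node_is_ready($selector, $locator) {
        // Wait for the page first: document.readyState covers the basic
        // 'page ready' condition on Firefox, Chrome and PhantomJS alike.
        $this->getSession()->wait(10000,
                "document.readyState === 'complete'");

        // Then confirm the node is present and visible before handing
        // it back for interaction.
        $node = $this->find($selector, $locator);
        if (!$node->isVisible()) {
            throw new \Behat\Mink\Exception\ExpectationException(
                    'Node found but not yet visible', $this->getSession());
        }
        return $node;
    }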
When you followed the link to the question bank, the checkbox for the
"Show question text in the question list" option correctly reflected the
user's preference, but what was actually shown in the question bank did
not.
Teachers were typing patterns like
****************************************************************
which translates into a pattern like .*.*.*.*, which is very inefficient
to try to match, although it is equivalent to a single .*. At a certain
point preg was just giving up.
Since people actually do this, we should simplify the regex by treating
runs of * as a single *.
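
A minimal sketch of that simplification in PHP, applied to the raw
wildcard pattern before it is converted to a regex (the function name
is illustrative):

    <?php
    // Collapse runs of consecutive wildcards into one, so '****' is
    // treated exactly like '*' before the pattern becomes a regex and
    // the pathological .*.*.*.* case never arises.
    function simplify_wildcard_pattern(string $pattern): string {
        return preg_replace('~\*+~', '*', $pattern);
    }

    // '.*.*.*.*' and '.*' match the same strings, but the former can
    // make the backtracking matcher give up on long subjects.
    assert(simplify_wildcard_pattern('a****b**c') === 'a*b*c');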
Plain responses, without files, were getting messed up by the
fix for MDL-39980. Something that looked like an HTML comment was being
appended.
This fix works by avoiding appending anything if there are no files. The
new unit test (which was failing before I fixed the code) confirms that
this works. The other tests should be enough to verify that there are no
regressions.
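
A minimal sketch of the shape of that guard, with hypothetical names;
the real code lives in the question engine and the appended text
differs:

    <?php
    // Append the attachment summary only when the response actually
    // has files; plain responses are returned untouched, so nothing
    // that looks like an HTML comment leaks into them.
    function summarise_response(string $response, array $files): string {
        if (empty($files)) {
            return $response;
        }
        return $response . ' <!-- Attachments: ' .
                implode(', ', $files) . ' -->';
    }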
The problem was with the alignment of:
* Tables inside the choices.
* Lists inside the choices.
* The specific feedback, when it spanned multiple lines.
The problem was introduced by MDL-39420.
Certainty -1 has never been used in standard Moodle, but is
used in Tony Gardiner-Medwin's patches to mean 'No idea', which
we intend to implement: MDL-42077. In the meantime, these changes
avoid errors for people who have used TGM's patches.
We now compute the average CBM score, accuracy, CBM bonus and enhanced
accuracy, both for the entire quiz, and for just the questions answered.
Note that these calculations must work correctly in the presence of
descriptions, ungraded questions, and manually graded questions. For
example, imagine an essay added at the end of the quiz: "Summarise what
you learned attempting this exercise." This might have a max mark of
zero or non-zero. The CBM statistics just ignore questions like that.
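
A minimal sketch of that skipping logic, with hypothetical field names
and without the real CBM formulas:

    <?php
    // Average the CBM scores over gradable, auto-marked questions
    // only; descriptions, ungraded questions and manually graded
    // questions (such as the essay above) are ignored, whatever their
    // max mark. Each $question is assumed to have ->type, ->maxmark,
    // ->needsgrading and ->cbmscore fields.
    function average_cbm_score(array $questions): ?float {
        $total = 0.0;
        $count = 0;
        foreach ($questions as $question) {
            if ($question->type === 'description'
                    || $question->maxmark == 0      // Ungraded.
                    || $question->needsgrading) {   // Manually graded.
                continue;
            }
            $total += $question->cbmscore;
            $count++;
        }
        return $count ? $total / $count : null; // Null if none gradable.
    }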