This class would belong more appropriately within the 'user' API
(core_user) instead of within the 'core' API, since it is
directly related to user data.
Since the class has only just been added to Moodle, now is a good
time to move it.
In all cases changes have been kept to a minimum while not making
the code completely horrible. For example, there are many instances
where it would probably be better to rewrite a query entirely, but
I have not done that (in order to reduce the risk of changes).
To support transitions from one search engine to a different one, or
to a different installation of the same kind, this feature allows
queries to use a different search engine from the one used for
indexing. So you can reindex (and perform all other search
operations) on one server, while user queries continue unaffected
against a different server.
This feature supports changing between search engine types, and also
between two Solr installations.
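A minimal conceptual sketch of the idea (not the real \core_search\manager
code; the class and method names below are illustrative only):

```php
<?php
// Illustration only: one engine instance answers user queries while a second
// receives indexing writes, so indexing can be pointed at a new server without
// affecting live searches. Names are assumptions, not the real Moodle API.
interface search_engine {
    public function execute_query(string $q): array;
    public function add_document(string $docid, array $doc): void;
}

class dual_engine_manager {
    public function __construct(
        private search_engine $queryengine,  // Engine users currently search against.
        private search_engine $indexengine   // Engine being (re)built in the background.
    ) {
    }

    public function search(string $q): array {
        return $this->queryengine->execute_query($q);
    }

    public function index(string $docid, array $doc): void {
        $this->indexengine->add_document($docid, $doc);
    }
}
```

Once the new index is complete, queries can be switched over to it.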
The optimize feature in Solr is usually considered harmful, especially
prior to Solr 7.5.
This change simply removes the optimize implementation from the Solr
engine.
Adding documents in batches instead of one at a time can make
indexing using Solr significantly faster.
This adds new API functions for search engines, including
add_document_batch() to add a batch of documents,
supports_add_document_batch(), get_batch_max_documents() and
get_batch_max_content().
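A rough sketch of how a search engine plugin might advertise and implement
batching; the exact signatures and return values are defined by
\core_search\engine, so treat the details below as assumptions:

```php
<?php
// Sketch only: parameter lists and return formats must be checked against the
// real \core_search\engine base class.
class engine extends \core_search\engine {

    // Declare that this engine can accept documents in batches.
    public function supports_add_document_batch(): bool {
        return true;
    }

    // Upper limit on the number of documents per batch (example value).
    public function get_batch_max_documents(): int {
        return 100;
    }

    // Upper limit on the total content size per batch, in bytes (example value).
    public function get_batch_max_content(): int {
        return 1024 * 1024;
    }

    // Send a whole batch of documents to the server in a single request,
    // instead of one request per document.
    public function add_document_batch(array $documents, $fileindexing = false) {
        // Build one request containing every document in $documents (respecting
        // the limits above) and post it to the search server.
    }
}
```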
Adds new API support within search engines for optional methods to
delete data for courses and contexts, and implements this for the
two core search plugins (simpledb and solr).
The new API is automatically called when courses or contexts are
deleted. When a whole course is deleted, it sends only the course
deletion rather than 1,000 separate context deletions as each
activity/block is deleted.
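A sketch of the optional deletion hooks an engine can implement; the method
names below are assumptions based on this description and should be checked
against the real \core_search\engine base class:

```php
<?php
// Illustrative only: an engine that can remove indexed data for a whole course
// or a single context in one operation (e.g. via delete-by-query).
class engine extends \core_search\engine {

    // Remove every document belonging to a deleted course.
    public function delete_index_for_course(int $oldcourseid) {
        // e.g. delete-by-query on the document's courseid field.
        return true;
    }

    // Remove every document belonging to a deleted context (activity, block, ...).
    public function delete_index_for_context(int $oldcontextid) {
        // e.g. delete-by-query on the document's contextid field.
        return true;
    }
}
```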
When searching using mock results (the 'global search expects the
query' step), the result count is not correctly set. As a result, the
page incorrectly reports that there are no results and doesn't
correctly show the first page of multi-page results.
Additionally, some of the core Behat tests can now be moved to use
real searching with the simpledb engine, rather than using mock
results at all. This gives better tests.
Unfortunately it was not possible to move all of the core Behat tests
and deprecate the mock step, because some of the tests relate to the
UI for 'special' features (searching by user or group), neither of
which is supported by the simpledb engine.
In MDL-59039 we changed add_documents() so that it should return an
extra $partial boolean. We have still supported implementations
returning only 4 elements since then; this issue removes that
4-element compatibility.
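A sketch of the expected return shape; the variable names are illustrative, so
check \core_search\engine::add_documents() for the authoritative signature:

```php
<?php
// Sketch: add_documents() now always returns five elements, the last being the
// new $partial boolean. Returning only the first four is no longer supported.
class engine extends \core_search\engine {
    public function add_documents($iterator, $searcharea, $options) {
        $numrecords = $numdocs = $numdocsignored = 0;
        $lastindexeddoc = 0;
        $partial = false; // True if indexing stopped before the iterator was exhausted.

        // ... iterate over $iterator, add documents, update the counters, and set
        // $partial if a time or document limit is hit part-way through ...

        return [$numrecords, $numdocs, $numdocsignored, $lastindexeddoc, $partial];
    }
}
```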
Added static caching of classes to reduce load times and reduce calls
to `get_component_classes`, by altering it to accept a null component
value so that the class map is only searched once.
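A generic illustration of the static-caching pattern described (not the actual
core code; the names below are made up for the example):

```php
<?php
// Illustration of request-level static caching: the expensive class-map lookup
// runs at most once per request, and a null component means "search everything".
class area_class_cache {
    /** @var string[]|null Cached list of search area classes for this request. */
    protected static $classes = null;

    public static function get_area_classes(): array {
        if (self::$classes === null) {
            self::$classes = self::find_classes(null);
        }
        return self::$classes;
    }

    protected static function find_classes(?string $component): array {
        // ... expensive scan of the class map, filtered by $component if given ...
        return [];
    }
}
```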
Creates a new 'Users' field in the search filters form. This field
requires new JavaScript and, to implement this, a new AJAX-callable
web service to search for users by name, with detailed restrictions
based on the current user's access to view profiles.
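A hedged sketch of the kind of AJAX-callable external function involved; the
real class name, function name and access checks live in core, so everything
below is illustrative only:

```php
<?php
// Hypothetical external function: search for users by name, restricted to users
// whose profiles the current user is allowed to view.
defined('MOODLE_INTERNAL') || die();

require_once($CFG->libdir . '/externallib.php');

class search_users_external extends external_api {

    public static function search_parameters(): external_function_parameters {
        return new external_function_parameters([
            'query' => new external_value(PARAM_RAW, 'Partial name to search for'),
        ]);
    }

    public static function search(string $query): array {
        $params = self::validate_parameters(self::search_parameters(), ['query' => $query]);

        // ... look up users matching $params['query'], keeping only those whose
        // profile the current user has access to view ...
        return [];
    }

    public static function search_returns(): external_multiple_structure {
        return new external_multiple_structure(new external_single_structure([
            'id' => new external_value(PARAM_INT, 'User id'),
            'fullname' => new external_value(PARAM_RAW, 'User full name'),
        ]));
    }
}
```

The JavaScript in the filter form then calls the service via the standard
core/ajax module as the user types.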
When content is restored, it is added to a queue for indexing. If the
restored content was then deleted before the indexing took place,
this caused an exception in the scheduled task.
This change makes the task continue safely past missing contexts.
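A sketch of the kind of defensive handling described (illustrative, not the
actual task code; $requests and its fields are assumed):

```php
<?php
// Skip queue entries whose context no longer exists instead of throwing.
foreach ($requests as $request) {
    $context = \context::instance_by_id($request->contextid, IGNORE_MISSING);
    if (!$context) {
        // The restored content was deleted before indexing ran; drop this
        // request rather than letting the scheduled task fail with an exception.
        continue;
    }
    // ... index the content in this context as normal ...
}
```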
Implements a mechanism by which search engines can provide different
result orderings, and implements a 'by location' ordering within the
Solr search engine (available whenever the user starts their search
from within a course or activity).
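A hedged sketch of how an engine might expose an extra ordering; the hook name
and return format below are assumptions and should be checked against the real
\core_search\engine base class:

```php
<?php
// Illustrative only: offer relevance ordering everywhere, plus a 'by location'
// ordering when the search starts from inside a course or activity.
class engine extends \core_search\engine {

    public function get_supported_orders(\context $context) {
        $orders = ['relevance' => 'Most relevant first'];

        if ($context->contextlevel !== CONTEXT_SYSTEM) {
            $orders['location'] = 'Closest to ' . $context->get_context_name() . ' first';
        }
        return $orders;
    }
}
```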
Adds group support to the core search API and the Solr search engine.
This allows for:
* User searching by group (in the API only, no interface yet; see the
sketch below)
* Automatically restricting search results by group (in some cases,
such as separate-groups forums)
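A sketch of what group-restricted searching through the API might look like;
the groupids field below is an assumption for illustration, not a confirmed
part of the query data:

```php
<?php
// Illustrative only: restrict a search to results from particular groups.
$data = new \stdClass();
$data->q = 'assignment feedback';
$data->groupids = [12, 57]; // Hypothetical field: only match these group ids.

$search = \core_search\manager::instance();
$results = $search->paged_search($data, 0);
```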
Adds a new 'Gradual reindex' link to the search areas page for each
area. Clicking it takes you to a confirmation prompt and then adds
each context from that search area to the indexing queue.
The search areas page now displays the 'Additional indexing queue'
(if it is non-empty). The table shows the first 10 items in the
queue, and it also indicates the total number in case there are
more. (I don't think people really need to see the entire
contents of it, so I didn't implement paging.)
Adds an indexpriority field to the database table that holds the
queue of indexing requests. This allows potentially large area
reindexes to be given a lower priority, so that they do not hold up
the special indexing that runs after a course restore.
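A hedged sketch of how the priority might be used when requesting indexing; the
constant and parameter names below are assumptions based on this description,
not confirmed API:

```php
<?php
// Illustrative only: a post-restore request keeps normal priority, while a
// whole-area gradual reindex is queued at a lower priority so it cannot hold
// up the restore-triggered indexing.
\core_search\manager::request_index($restoredcoursecontext);

\core_search\manager::request_index(\context_system::instance(), 'mod_forum-post',
        \core_search\manager::INDEX_PRIORITY_REINDEXING);
```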
This new API returns a list of contexts for each search area. This
allows the areas to be reindexed in a sensible order (roughly
speaking, newest first) and also allows this to be controlled by
each area.
An implementation in the forum module means that forums are ordered
by the date of the most recent discussion, so that active forums
will be reindexed early even if they were created a long time ago.
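A sketch of what such an implementation might look like in the forum search
area; the hook name (get_contexts_to_reindex) and the exact SQL are
illustrative and should be checked against \core_search\base and the real
mod_forum code:

```php
<?php
// Illustrative only: return forum contexts ordered by most recent discussion,
// newest first, so active forums are reindexed early.
class post extends \core_search\base_mod {

    public function get_contexts_to_reindex() {
        global $DB;

        $recordset = $DB->get_recordset_sql("
                SELECT cm.id AS cmid
                  FROM {forum} f
                  JOIN {modules} m ON m.name = 'forum'
                  JOIN {course_modules} cm ON cm.module = m.id AND cm.instance = f.id
             LEFT JOIN {forum_discussions} fd ON fd.forum = f.id
              GROUP BY cm.id
              ORDER BY MAX(fd.timemodified) DESC");

        foreach ($recordset as $record) {
            yield \context_module::instance($record->cmid);
        }
        $recordset->close();
    }
}
```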
Without this change it is possible for the unit tests to fail at any
time. Before this change, indexing time was measured in real time,
not the test's fake time, making all index timings 0.
The failure can happen because PHP offers no guarantee about the sort
order of an array for any two members that compare as equal. It just
happens to pass for the current array of search areas in vanilla
Moodle.
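A tiny illustration of the stability point (before PHP 8.0, sort functions made
no guarantee about the relative order of elements that compare as equal):

```php
<?php
// With every timing equal to 0, the comparison always returns 0, so the sorted
// order of the areas is effectively arbitrary on pre-8.0 PHP; a test asserting
// a fixed order can then fail at any time.
$areas = ['forum-post', 'glossary-entry', 'book-chapter'];
$timings = ['forum-post' => 0, 'glossary-entry' => 0, 'book-chapter' => 0];

usort($areas, function($a, $b) use ($timings) {
    return $timings[$a] <=> $timings[$b];
});
```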
The recordsets used for search indexing sometimes return results
which are invalid (e.g. cannot be found in the database). When this
happens, the result in the iterator for the recordset will be
false. Due to a bug, the iterator used to stop when it encountered
a false value, which prevented indexing from getting past the
problematic record.
In addition, the iterator that skips future data caused the
current() function of its parent iterator to be called twice per
entry, which meant that search indexing called get_document()
twice as many times as necessary.
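A conceptual sketch of the two fixes combined (not the real core_search
iterator classes; the wrapper name is made up):

```php
<?php
// Wraps another iterator so that (a) a false entry is skipped rather than
// treated as the end of the data, and (b) current() on the wrapped iterator is
// only called once per entry, so get_document() is not called twice.
class skip_false_iterator implements \Iterator {
    protected bool $cached = false;
    protected mixed $current = null;

    public function __construct(protected \Iterator $inner) {
    }

    public function rewind(): void {
        $this->inner->rewind();
        $this->cached = false;
        $this->skipfalse();
    }

    public function next(): void {
        $this->inner->next();
        $this->cached = false;
        $this->skipfalse();
    }

    public function valid(): bool {
        return $this->inner->valid();
    }

    public function key(): mixed {
        return $this->inner->key();
    }

    public function current(): mixed {
        if (!$this->cached) {
            // Ask the wrapped iterator only once per entry.
            $this->current = $this->inner->current();
            $this->cached = true;
        }
        return $this->current;
    }

    protected function skipfalse(): void {
        // A false value means an invalid record (e.g. deleted from the database);
        // move past it instead of stopping iteration.
        while ($this->inner->valid() && $this->current() === false) {
            $this->inner->next();
            $this->cached = false;
        }
    }
}
```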