Most searching these days is incremental (e.g. one letter is typed and added to the search term at a time).
If Search.query() were enhanced to maintain state from the prior queryStr and queryResult (really just the row/col indexes), then each subsequent search would not need to scan all cells, only the ones matched by the prior search.
Example: searching for “fo” and then typing another ‘o’ to get “foo” would, by definition, mean that any “foo” results are a subset of the “fo” results.
As it stands now, the complexity is always rows * columns, so as more letters are typed, the search gets slower instead of faster.
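The subset idea above can be sketched as follows. This is a minimal standalone sketch, not the actual Handsontable Search plugin API: `makeIncrementalSearch` and its `query` closure are hypothetical names, and the match test is a simple case-insensitive substring check over a 2-D data array.

```javascript
// Hypothetical incremental search over a 2-D array of cell values.
// If the new term extends the previous one, only the previous matches
// are re-checked; otherwise every cell is scanned.
function makeIncrementalSearch(data) {
  let prevQuery = null;
  let prevResult = null;

  const matches = (row, col, queryStr) =>
    String(data[row][col]).toLowerCase().includes(queryStr.toLowerCase());

  return function query(queryStr) {
    let candidates;
    if (prevQuery !== null && queryStr.startsWith(prevQuery)) {
      // New term extends the old one, so matches are a subset of the
      // prior result set: re-check only those cells.
      candidates = prevResult;
    } else {
      // Combo breaker: fall back to scanning every cell.
      candidates = [];
      for (let row = 0; row < data.length; row++) {
        for (let col = 0; col < data[row].length; col++) {
          candidates.push({ row, col });
        }
      }
    }
    const result = candidates.filter(({ row, col }) =>
      matches(row, col, queryStr)
    );
    prevQuery = queryStr;
    prevResult = result;
    return result;
  };
}
```

So going from “fo” to “foo” only filters the cells that already matched “fo”, instead of re-walking the whole grid.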
Clearing a search should be faster too, and maybe even a separate call. With cached incremental searches, the only cells that need to be cleared are the ones matched last. Clear() would be called when an incremental search hits a combo breaker.
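One way that clear-only-what-matched idea could look, again as a hedged sketch rather than the real plugin API: `makeClearableSearch`, `searchFn`, and `setCellHighlight` are all hypothetical names standing in for whatever highlights a cell in the grid.

```javascript
// Hypothetical wrapper that remembers the last match set so that
// clearing (or re-querying) only touches those cells, not the whole grid.
function makeClearableSearch(searchFn, setCellHighlight) {
  let lastMatches = [];

  function query(queryStr) {
    // Un-highlight only the cells from the previous search.
    lastMatches.forEach(({ row, col }) => setCellHighlight(row, col, false));
    lastMatches = searchFn(queryStr);
    lastMatches.forEach(({ row, col }) => setCellHighlight(row, col, true));
    return lastMatches;
  }

  function clear() {
    lastMatches.forEach(({ row, col }) => setCellHighlight(row, col, false));
    lastMatches = [];
  }

  return { query, clear };
}
```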
Bonus speedup: because of typing behavior, many users will mistype one letter (e.g. when looking for “foobar”, they type “food” instead of “foob”). They then delete a character after noticing the typo, so “food” becomes “foo”. If the search results of “foo” are cached in a history, that becomes an instant result-set lookup.
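The backspace case is just memoization keyed by the query string. A minimal sketch, assuming some `searchFn(queryStr)` that returns a result array; `makeCachedSearch` is a hypothetical name, not anything in Handsontable:

```javascript
// Hypothetical history cache: results for every term seen so far are kept
// in a Map, so backspacing from "food" back to "foo" is a lookup, not a rescan.
function makeCachedSearch(searchFn) {
  const history = new Map(); // queryStr -> cached result array

  return function query(queryStr) {
    if (history.has(queryStr)) {
      return history.get(queryStr); // instant lookup after a backspace
    }
    const result = searchFn(queryStr);
    history.set(queryStr, result);
    return result;
  };
}
```

In a real grid the history would need some bound (or invalidation when cell data changes), since results for stale data must not be served from the cache.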
OK, I actually coded this up and the speed improvement wasn’t that noticeable. It turns out that building the results into arrays is so much more expensive than read-only searching of all cells that maintaining the cache kills most of the performance gained from using it.
I think the poor performance I noticed came from hiding rows as the user types a search term, via the “hiddenRows” plugin.
I believe this issue has already been raised here: https://github.com/handsontable/handsontable/issues/3428, but maybe you can add a comment there with the findings from your investigation. It looks like you have dug much deeper than a single demo would allow.