The trick to optimization is not "doing it faster" but "doing less". I already feel rg is missing a ton of results I want to see because it ignores a lot of files by default (gitignored and hidden files, among others).
I have open sourced the fastest code search implementation: a comprehensive SDK for both file finding and grep-style file search that is over 100x faster than ripgrep.
It's advertised as "ColGREP: Semantic code search for your terminal and your coding agents".
I haven't put it in any harness yet but I probably should.
https://github.com/lightonai/next-plaid/tree/main/colgrep
I've also tried ast-grep (also known as sg), but LLMs really mess up with it. I think you'd need to fine-tune.
If anyone has cracked that case, I'd love to hear about it.
I have a lot of use for something that can search ~1GB of text "instantly", but so far nothing beats rg/ag after the data has been moved into RAM.
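As a rough sketch of what "searching data already in RAM" looks like, here is a minimal mmap-based scan in Python; the sample file and pattern are hypothetical stand-ins, and this is far simpler than what rg/ag actually do (no parallelism, no SIMD literal matching):

```python
import mmap
import re
import tempfile

# A tiny temp file stands in for the ~1GB corpus.
with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False) as f:
    f.write("alpha\nneedle here\nbeta\nanother needle\n")
    path = f.name

def search(path, pattern):
    """Scan a file via mmap so reads are served from the OS page cache."""
    matches = []
    with open(path, "rb") as fh:
        with mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            for m in re.finditer(pattern, mm):
                # Count newlines before the match for a 1-based line number.
                line_no = mm[: m.start()].count(b"\n") + 1
                matches.append((line_no, m.group().decode()))
    return matches

print(search(path, rb"needle"))  # → [(2, 'needle'), (4, 'needle')]
```

Once the file is in the page cache, a scan like this is memory-bandwidth bound, which is why a warmed-up rg/ag over ~1GB already feels close to instant.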
You should add a link to the GitHub repo for the project itself; at first I wasn't even sure what it was called.
I found this link https://github.com/dmtrKovalenko/fff.nvim
- it has regex, so the title is weird
- it definitely wouldn't be 100x faster than rg
- it's an SDK, so it's apples to oranges anyway