The OCR thing is its own task, but for just searching for a string in PDFs, pdfgrep is very good.
pdfgrep -ri CoolNumber69 /path/to/folder
That works magnificently. I added -l so it spits out a list of files instead of listing each matching line in each file, then set it up with an alias. Now I can ssh in from my phone and search the whole collection for any string with a single command.
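For anyone else setting this up, the alias ends up looking roughly like this (the alias name here is just an example, not what I actually called it):

alias pdfsearch='pdfgrep -ril'    # -r recurse, -i ignore case, -l list matching files only

pdfsearch CoolNumber69 /path/to/folder    # search the whole collection in one go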
Thanks again!
Glad to hear that!
Interesting; that would be much simpler. I'll give that a shot in the morning, thanks!
In case you are already using ripgrep (rg) instead of grep, there is also ripgrep-all (rga), which lets you search through a whole bunch of file types like PDFs quickly. And it's cached, so while the first indexing takes a moment, any further search is lightning fast.
It supports a whole truckload of file types (pdf, odt, xlsx, tar.gz, mp4, and so on), but I mostly used it to quickly search through thousands of research papers. It takes around 5 minutes to index everything for my 4000 PDFs on the first run, then it's smooth sailing for any further searches.
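A minimal sketch of how that looks in practice (the paths and search terms are just placeholders):

rga "annealing schedule" ~/papers    # first run builds the cache, later runs are fast
rga -i "cool number" ~/papers        # ripgrep flags like -i pass straight through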
For the OCR process you can probably wrangle up a simple bash pipeline with ocrmypdf and just let it run in the background once until all your PDFs have a text layer.
With that tool it should be doable with something like a simple while loop:
find . -type f -name '*.pdf' -print0 |
while IFS= read -r -d '' file; do
    echo "Processing $file ..."
    ocrmypdf "$file" "$file"
    # ocrmypdf "$file" "${file%.pdf}_ocr.pdf"    # if you want a new file instead of overwriting the old
done
If you need additional languages or other options, you'll have to delve a little deeper into the ocrmypdf documentation, but this should be enough duct tape to whip up a full OCR cycle.
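For example, adding extra Tesseract languages and skipping pages that already have a text layer looks something like this (assuming the relevant language packs are installed; just a sketch):

ocrmypdf -l deu+eng --skip-text "$file" "$file"    # OCR in German + English, leave already-OCRed pages alone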
That's a neat little tool that seems to work pretty well. Turns out the files I thought I'd need it for already have embedded OCR data, so I didn't end up needing it. Definitely one I'll keep in mind for the future though.
Try paperless-ngx. It can do OCR and has search.
You might want to check out Docspell - it's lighter than paperless-ngx but still handles PDF indexing and searching really well, plus it can do basic OCR on those image-based PDFs without much setup.
@Darkassassin07 Have you already considered https://pdfgrep.org/?
With pdfgrep --ignore-case --recursive "text" **/*.pdf, for example, you can search a directory hierarchy of PDF files for "text".
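One caveat: the ** glob only expands in bash when globstar is enabled, so you may need something like:

shopt -s globstar                        # enable recursive ** globbing in bash
pdfgrep --ignore-case "text" **/*.pdf
pdfgrep -ri "text" .                     # or simply let --recursive walk the directory itself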
On Windows you may need to add an IFilter. Adobe's is pretty good. Then Windows Search will be able to search PDF contents.