this post was submitted on 19 Aug 2025
32 points (97.1% liked)

Selfhosted


I have a pile of part lists, in PDF format, for tools I'm maintaining, and I'm looking for a good way to take a part number, search through the collection of PDFs, and output which files contain that number. Essentially, it would let me match random unknown part numbers to a tool in our fleet.

I'm pretty sure the majority of them are actual text you can select and copy+paste, so searching those shouldn't be too difficult; but I do know there are at least a couple in there that are just a string of JPEGs packed into a PDF file. Those will probably need OCR, but tbh I can probably live with skipping over them altogether.

I've been thinking of spinning up an instance of paperless-ngx and stuffing them all in there so I can let it index the contents (including using OCR), then use its search feature; but that also seems a tad overkill.

I'm wondering if you fine folks have any better ideas. What do you think?

top 11 comments
[–] tofu@lemmy.nocturnal.garden 26 points 2 months ago (2 children)

The OCR thing is its own task, but for just searching for a string in PDFs, pdfgrep is very good.

pdfgrep -ri CoolNumber69 /path/to/folder

[–] Darkassassin07@lemmy.ca 6 points 2 months ago (1 children)

That works magnificently. I added -l so it spits out a list of files instead of listing each matching line in each file, then set it up with an alias. Now I can ssh in from my phone and search the whole collection for any string with a single command.
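Something like this in ~/.bashrc does the trick (the name and folder path are placeholders for whatever you use; a shell function rather than a bare alias lets the search term slot in before the fixed path):

    # partsearch <part-number>: list every PDF in the collection containing the number
    partsearch() {
        # -r: recurse, -i: ignore case, -l: print only the names of matching files
        pdfgrep -ril "$1" /path/to/part-lists
    }

Then partsearch CoolNumber69 over ssh lists every file that mentions that part.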

Thanks again!

[–] tofu@lemmy.nocturnal.garden 3 points 2 months ago

Glad to hear that!

[–] Darkassassin07@lemmy.ca 5 points 2 months ago (1 children)

Interesting; that would be much simpler. I'll give that a shot in the morning, thanks!

[–] hoppolito@mander.xyz 11 points 2 months ago

In case you are already using ripgrep (rg) instead of grep, there is also ripgrep-all (rga) which lets you search through a whole bunch of files like PDFs quickly. And it's cached, so while the first indexing takes a moment any further search is lightning fast.

It supports a whole truckload of file types (pdf, odt, xlsx, tar.gz, mp4, and so on), but I mostly used it to quickly search through thousands of research papers. It takes around 5 minutes to index everything for my 4000 PDFs on the first run, then it's smooth sailing for any further searches from there.
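For the use case above it would be something along these lines (rga passes most flags straight through to ripgrep, so -i and -l should behave the same; the path is a placeholder):

    # list the files that contain the part number, case-insensitively
    rga -il 'CoolNumber69' /path/to/part-lists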

[–] hoppolito@mander.xyz 5 points 2 months ago (1 children)

For the OCR process you can probably wrangle up a simple bash pipeline with ocrmypdf and just let it run in the background once until all your PDFs have a text layer.

With that tool it should be doable with something like a simple while loop:

find . -type f -name '*.pdf' -print0 |
    while IFS= read -r -d '' file; do
        echo "Processing $file ..."
        ocrmypdf "$file" "$file"
        # ocrmypdf "$file" "${file%.pdf}_ocr.pdf"   # if you want a new file instead of overwriting the old
    done

If you need additional languages or other options you'll have to delve a little deeper into the ocrmypdf documentation, but this should be enough duct tape to just whip up a full OCR cycle.
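As a rough example of those options (worth checking against your ocrmypdf version): --skip-text leaves pages that already contain text alone instead of erroring out, and -l picks the Tesseract language(s):

    # OCR only the pages without a text layer, in English + German, writing to a new file
    ocrmypdf --skip-text -l eng+deu input.pdf output_ocr.pdf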

[–] Darkassassin07@lemmy.ca 2 points 2 months ago

That's a neat little tool that seems to work pretty well. Turns out the files I thought I'd need it for already have embedded OCR data, so I didn't end up needing it. Definitely one I'll keep in mind for the future though.

[–] lIlIllIlIIIllIlIlII@lemmy.zip 4 points 2 months ago

Try paperless-ngx. It can do OCR and has search.

[–] MysteriousSophon21@lemmy.world 3 points 2 months ago

You might want to check out Docspell - it's lighter than paperless-ngx but still handles PDF indexing and searching really well, plus it can do basic OCR on those image-based PDFs without much setup.

[–] db_geek@norden.social 1 points 2 months ago

@Darkassassin07 Have you already considered https://pdfgrep.org/?

With pdfgrep --ignore-case --recursive "text" **/*.pdf, for example, you can search a directory hierarchy of PDF files for "text".

[–] Brkdncr@lemmy.world 1 points 2 months ago

In Windows you may need to add an IFilter. Adobe's is pretty good. Then Windows Search will be able to search PDF contents.