this post was submitted on 10 Mar 2026
94 points (96.1% liked)

Selfhosted


Frigate is NVR software with motion detection, object detection, recording, and more. It has matured a lot over the past couple of years and I'm really happy with it.

I've been running Frigate for a while, but with version 0.17.0 it sounded like enough had changed to be worth revisiting my setup. I'm writing all of the following in case anyone else is in the same boat. There's a lot to read, but hopefully it helps make sense of the options.

Keeping my camera feeds the same, I was interested in switching my object detector from a Google Coral to the integrated graphics in my 13th gen Intel CPU. The main reason was that the Google Coral was flaky and I was having to reboot all the time. Maybe that's because I run Frigate in a virtual machine on Proxmox, so the Coral has to be passed through to the VM? Not sure.

I also wanted to figure out how to get the camera streams to work better in Home Assistant.

Switching from Google Coral to OpenVINO

This was relatively straightforward. I mostly followed these directions and ended up with:

detectors:  
  ov:  
    type: openvino  
    device: GPU  

Switching from the default to YOLOv9

Frigate ships with a default model that can detect objects such as person and car. I kept hearing that YOLOv9 was more accurate, and they even got YOLOv9 working with Google Coral devices, just with a limited set of objects. So, I wanted to switch.

This took me a minute to wrap my head around since it's not enabled out of the box.

I added the following to my config based on these directions:

model:  
  model_type: yolo-generic  
  width: 320 # <--- should match the imgsize set during model export  
  height: 320 # <--- should match the imgsize set during model export  
  input_tensor: nchw  
  input_dtype: float  
  path: /config/model_cache/yolo.onnx  
  labelmap_path: /labelmap/coco-80.txt  

... except for me the yolo file is called yolov9-t-320.onnx instead of yolo.onnx... but I could have just as easily renamed the file.

That brings us to the next part: how to get the yolo.onnx file. It's a bit buried in the documentation, but I ran the commands provided here. I just copied the whole block of provided commands and ran them all at once. The result is an .onnx file in whatever folder you're currently in.

The .onnx file needs to be copied to /config/model_cache/, wherever that might be based on your Docker Compose.
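If you're not sure where /config/model_cache lives on the host, it's inside whatever you bind-mounted to /config in your compose file. A sketch (the host path here is just an example, not from my setup):

```yaml
services:
  frigate:
    volumes:
      # Host path is illustrative; use whatever you already mount as /config.
      - /opt/frigate/config:/config
```

With that mapping, the file would land in /opt/frigate/config/model_cache/ on the host.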

That made me wonder about the other file, coco-80.txt. Well, it turns out coco-80.txt is already included inside the container, so nothing to do there. That file is handy though, because it lists 80 possible things that you can track. Here's the list on github.
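As an illustration of how the labelmap works (this is my own sketch, not Frigate code): the file is just one label per line, and the line number is the class index the model outputs.

```python
# Sketch of how a labelmap file like coco-80.txt pairs class indices with names.
# The real file ships inside the Frigate container; this sample is abbreviated.
sample = """person
bicycle
car
motorcycle"""

def load_labelmap(text: str) -> dict[int, str]:
    """Map each class index to its label, one label per line."""
    return {i: name for i, name in enumerate(text.strip().splitlines())}

labels = load_labelmap(sample)
print(labels[0])  # person
print(labels[2])  # car
```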

I won't go over the rest of the camera/motion configuration, because if you're doing this then you definitely need to dive into the documentation for a bunch of other stuff.

Making the streams work in Home Assistant

I've had the Frigate integration running in Home Assistant for a long time, but clicking on the cameras only showed a still frame, and no video would play.

Home Assistant is not on the same host as Frigate, by the way. If it were, I'd have an easier time with this.

It turns out my problem was caused by using go2rtc in my Frigate setup. go2rtc is great: it acts as a restreamer, letting multiple clients share a single connection to each camera. This can reduce bandwidth, which matters especially for wifi cameras. But it's optional, and I learned that I don't want it.

go2rtc should work with Home Assistant if they're both running on the same host (same IP address), or if you run the Docker stack with network_mode: host so it has full access to everything. I tried doing that, but for some reason Frigate got into a boot loop, so I changed it back to the bridge network that I had previously.

The reason for this, apparently, is that go2rtc needs more open ports than the ones the documentation says to publish in Docker. Maybe it uses random ports (WebRTC often does) or some other network magic. I'm not sure.
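For reference, if you do want to try the host-networking route (it boot-looped for me, but your mileage may vary), the compose change is roughly this; the image tag is just an example:

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    network_mode: host   # container shares the host's network stack
```

Note that with network_mode: host you have to remove any ports: section from the service, since published port mappings aren't compatible with host networking.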

The downside of not having go2rtc is that the camera feeds in the Frigate UI are limited to 720p. I can live with that. The feeds in Home Assistant are still full quality, and recordings are still full quality.

By removing go2rtc from my config, Home Assistant now streams directly from the cameras themselves instead of looking for the go2rtc restream. You may have to click "Reconfigure" in the Home Assistant integration for the API to catch up.

Hope this helps. If not, sorry you had to read all of this.

[–] walden@wetshav.ing 2 points 3 days ago (1 children)

I don't have an external GPU either; just the onboard Intel graphics is what I use now. Also worth mentioning: to use integrated graphics, your Docker Compose needs:

devices:
      - /dev/dri/renderD128:/dev/dri/renderD128

I'm not using substreams. I have 2 cameras and the motion detection doesn't stress the CPU too much. If I add more cameras I'd consider using substreams for motion detection to reduce the load.
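For anyone who does go the substream route, the usual pattern is to give the low-resolution substream the detect role and the main stream the record role. A sketch based on the kind of RTSP URLs in my config below (subtype=1 is typically the substream on these cameras, but check yours):

```yaml
ffmpeg:
  inputs:
    - path: rtsp://user:pw@<ip-addr>:554/cam/realmonitor?channel=1&subtype=0   # main stream
      roles:
        - record
    - path: rtsp://user:pw@<ip-addr>:554/cam/realmonitor?channel=1&subtype=1   # substream
      roles:
        - detect
```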

Your still frames in Home Assistant are the exact problem I was having. If your cameras really do need go2rtc to reduce connections (my wifi camera doesn't seem to care), you might try changing your Docker container to network_mode: host and see if that fixes it.

Here's my config. Most of the comments were put there by Frigate, and I've de-identified everything. Notice at the bottom that go2rtc is all commented out, so if I want to add it back in I can just remove the #s. Hope it helps.

config.yaml

mqtt:
  enabled: true
  host: <ip of Home Assistant>
  port: 1883
  topic_prefix: frigate
  client_id: frigate
  user: mqtt username
  password: mqtt password
  stats_interval: 60
  qos: 0

cameras:     # No cameras defined, UI wizard should be used
  baby_cam:
    enabled: true
    friendly_name: Baby Cam
    ffmpeg:
      inputs:
        - path: 
            rtsp://user:pw@<ip-addr>:554/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif
          roles:
            - detect
            - record
      hwaccel_args: preset-vaapi
    detect:
      enabled: true # <---- disable detection until you have a working camera feed
      width: 1920 # <---- update for your camera's resolution
      height: 1080 # <---- update for your camera's resolution
    record:
      enabled: true
      continuous:
        days: 150
      sync_recordings: true
      alerts:
        retain:
          days: 150
          mode: all
      detections:
        retain:
          days: 150
          mode: all
    snapshots:
      enabled: true
    motion:
      mask: 0.691,0.015,0.693,0.089,0.965,0.093,0.962,0.019
      threshold: 14
      contour_area: 20
      improve_contrast: true
    objects:
      track:
        - person
        - cat
        - dog
        - toothbrush
        - train

  front_cam:
    enabled: true
    friendly_name: Front Cam
    ffmpeg:
      inputs:
        - path: 
            rtsp://user:pw@<ip-addr>:554/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif
          roles:
            - detect
            - record
      hwaccel_args: preset-vaapi
    detect:
      enabled: true # <---- disable detection until you have a working camera feed
      width: 2688 # <---- update for your camera's resolution
      height: 1512 # <---- update for your camera's resolution
    record:
      enabled: true
      continuous:
        days: 150
      sync_recordings: true
      alerts:
        retain:
          days: 150
          mode: all
      detections:
        retain:
          days: 150
          mode: all
    snapshots:
      enabled: true
    motion:
      mask:
        - 0.765,0.003,0.765,0.047,0.996,0.048,0.992,0.002
        - 0.627,0.998,0.619,0.853,0.649,0.763,0.713,0.69,0.767,0.676,0.819,0.707,0.839,0.766,0.869,0.825,0.889,0.87,0.89,0.956,0.882,1
        - 0.29,0,0.305,0.252,0.786,0.379,1,0.496,0.962,0.237,0.925,0.114,0.879,0
        - 0,0,0,0.33,0.295,0.259,0.289,0
      threshold: 30
      contour_area: 10
      improve_contrast: true
    objects:
      track:
        - person
        - cat
        - dog
        - car
        - bicycle
        - motorcycle
        - airplane
        - boat
        - bird
        - horse
        - sheep
        - cow
        - elephant
        - bear
        - zebra
        - giraffe
        - skis
        - sports ball
        - kite
        - baseball bat
        - skateboard
        - surfboard
        - tennis racket
      filters:
        car:
          mask:
            - 0.308,0.254,0.516,0.363,0.69,0.445,0.769,0.522,0.903,0.614,1,0.507,1,0,0.294,0.003
            - 0,0.381,0.29,0.377,0.284,0,0,0
    zones:
      Main_Zone:
        coordinates: 0,0,0,1,1,1,1,0
        loitering_time: 0

detectors: # <---- add detectors
  ov:
    type: openvino
    device: GPU

model:
  model_type: yolo-generic
  width: 320 # <--- should match the imgsize set during model export
  height: 320 # <--- should match the imgsize set during model export
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/yolov9-t-320.onnx
  labelmap_path: /labelmap/coco-80.txt

version: 0.17-0


#go2rtc:
#  streams:
#    front_cam:
#      - ffmpeg:rtsp://user:pw@<ip-addr>:554/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif
#    baby_cam:
#      - ffmpeg:rtsp://user:pw@<ip-addr>:554/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif
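One note on the motion mask values in the config above: they're relative 0-1 coordinates (x,y pairs), not pixels. A little sketch of my own for converting them to pixel points at a given resolution:

```python
def mask_to_pixels(mask: str, width: int, height: int) -> list[tuple[int, int]]:
    """Convert a Frigate-style 'x1,y1,x2,y2,...' relative mask string to pixel coords."""
    vals = [float(v) for v in mask.split(",")]
    # Pair up alternating x and y values, then scale to the frame size.
    return [(round(x * width), round(y * height)) for x, y in zip(vals[::2], vals[1::2])]

# The baby_cam mask at its 1920x1080 detect resolution:
print(mask_to_pixels("0.691,0.015,0.693,0.089,0.965,0.093,0.962,0.019", 1920, 1080))
```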

[–] Kupi@sh.itjust.works 2 points 2 days ago (1 children)

I followed your steps for removing go2rtc and it fixed my issues in Home Assistant. I was a little worried about the results because I'm running 4 WiFi cameras and 1 dual camera (Tapo C240D) and thought it might be too much for my setup, but everything works perfectly fine. If anything, it works better now because I can have my cameras in Home Assistant using the Advanced Camera Card. It also seemed to fix my issue with not being able to view clips from the C240D. Not sure how or why, but the clip review just works now.

I haven’t tried switching my detection model yet, but that’s my next goal. Thank you for helping me with this.