Replies: 6 comments 6 replies
-
Yes, Frigate supports however many Corals you want to give it.
Face recognition is a complex subject. Double Take is still the easiest way to integrate it with Frigate; from there you can pick CPAI or CompreFace as the backend. In 0.16, Frigate will support native face recognition.
Not quite sure what you mean; Frigate already has NVR features. You can already install Frigate as an app (https://docs.frigate.video/configuration/pwa), and Frigate 0.15 has implemented semantic search.
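For example, multiple Corals are just listed as separate detectors in the config; a minimal sketch (the detector names are arbitrary, and the device strings depend on how the Corals are attached):

```yaml
detectors:
  coral_usb_1:
    type: edgetpu
    device: usb:0   # first USB Coral
  coral_usb_2:
    type: edgetpu
    device: usb:1   # second USB Coral
```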
-
Ah, thanks for the reply. Can I run DeepStack on a Windows Enterprise VM, since that's where my GPU is passed through? Is there a lack of development in TPUs? There seems to be an expectation to run everything on one machine unless you have an expensive NVIDIA card that can be passed through to multiple VMs. My issue is I have a 4RU server on which I'm trying to run Frigate, Blue Iris, Stable Diffusion and other Windows stuff that needs the GPU; Frigate has the Coral + iGPU. Sadly finances won't allow a second server; the most I could do is a second 4060 Ti. I thought the Hailo-8 would have taken off. I'm also running TrueNAS on that machine. Australia is expensive.

Also a quick question: I believe my docker-compose has an error. /config stores the database path, so updating to a new release doesn't delete the db, right? I think I had that mount missing and the database was inside the container when I updated to the RC recently, oops. My head's not thinking clearly as my dog is sick, but /media would be my /mnt/frigatehdd HDD mount, right? And this part is in RAM, I presume, with the docker-compose file? Ignore this? Remove the dev/sha? Thanks for an awesome product.
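For reference, this is roughly the layout I think it should be, based on the install docs; the /opt/frigate/config host path is just a placeholder for whatever folder actually holds my config:

```yaml
services:
  frigate:
    container_name: frigate
    image: ghcr.io/blakeblackshear/frigate:stable  # pin whichever release you want
    restart: unless-stopped
    shm_size: "256mb"                    # sized per camera count/resolution
    devices:
      - /dev/bus/usb:/dev/bus/usb        # USB Coral
      - /dev/dri/renderD128              # iGPU for hardware decode
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /opt/frigate/config:/config      # config.yml + database live here
      - /mnt/frigatehdd:/media/frigate   # recordings, snapshots, exports
      - type: tmpfs                      # cache in RAM, separate from the db
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      - "8971:8971"
```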
-
Ah thanks, I guess YOLOv9 might make the Hailo-8 take off. Not sure if DeepStack etc. have support for it. Sorry for the confusion, and thanks for the help. I'll try to do some digging around. Is most AI/TPU development going towards dedicated GPU solutions versus NVMe- or USB-based accelerators like the Coral?
-
OK, it's working perfectly, except go2rtc deleted my config. Can I put it outside the container, in the Frigate config folder with the config YAML? Dunno why it randomly deleted the streams twice; Frigate still works fine. I just wasn't sure of the file locations originally. Luckily I save all my configs in Notepad++ on my laptop and use MobaXterm.
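What I'm thinking of doing is putting the go2rtc streams inside Frigate's own config.yml, which I understand is supported, so they live in the same /config folder. A rough sketch (camera name, IP and credentials are placeholders):

```yaml
go2rtc:
  streams:
    backyard:
      # main 4MP stream (placeholder Dahua-style URL)
      - rtsp://user:password@192.168.1.50:554/cam/realmonitor?channel=1&subtype=0
    backyard_sub:
      # 1080p substream used for detection
      - rtsp://user:password@192.168.1.50:554/cam/realmonitor?channel=1&subtype=1
```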
-
Ah sweet, I just found that earlier. I don't think GPT is up to date: the config folder outside the container now hosts the database, so each rebuild doesn't blow away the detections, etc., and the HDD is storing recordings, snapshots, etc.

Just this one final thing I'm hoping exists, though it's no big issue if I remember to make a user/password for the Frigate Home Assistant card. WebRTC is working perfectly, but I'm using the main 4K streams, which are slower on the HAOS interface, especially on mobile. I was thinking of switching to the substreams (go2rtc serves both the 4MP and the 1080p detection stream), so what I'd really love is the detection streams, or super fast loading WebRTC views, in the small cards, with click-to-enlarge to get the 4MP image (or 8MP for the TiOC camera).

Apologies for so many questions; I went through a very rough few months and am lucky to be alive. I love this project, and when I get my final four cams I'll start the Frigate+ subscription. Oh, a very random question: theoretically I could just stream direct from the camera, or go2rtc > YouTube, if I ever do my backyard-chickens YouTube live feed, lol. I still have to get HTTPS enabled, but that will come later for the web app thing.

I really want to thank you and the others from the bottom of my heart, as a disabled person, for making such an amazing project and lowering the cost of entry into all of this. I'm understanding a lot more now about how it all works. I hope your bosses give you many raises and life brings you fortune and good health. This and other homelab projects have kept me more sane during a difficult time with my health, and I truly apologize for missing stuff in the documents. I hope one day to learn enough coding to contribute to things like this. docker-compose scared me for a while and I used tteck's scripts, but now Docker isn't so scary; GPT helps me a lot in understanding how it works. Having my configs saved and understanding the directory structure helped so much; I can get up and running quickly now. I think this even lets me change the ghcr line to new versions without much difficulty, keeping my recordings, etc.?
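For the substream idea, this is roughly what I'm picturing; the stream and camera names are just my examples, and my understanding is that 0.15 lets the live view/card use a different stream via live: stream_name:

```yaml
cameras:
  backyard:
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/backyard      # restreamed 4MP main stream
          roles:
            - record
        - path: rtsp://127.0.0.1:8554/backyard_sub  # 1080p substream
          roles:
            - detect
    live:
      stream_name: backyard_sub   # the UI/card pulls the lighter stream
```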
-
Not sure what's going on; when I added the Frigate integration it just worked with a password. Very weird.
…On Thu, Feb 6, 2025 at 10:36 PM Nicolas Mowen ***@***.***> wrote:
You need to use qsv on that (assuming you're running 0.15)
The Frigate card doesn't use Frigate's auth; are you sure you're not talking about Home Assistant's auth?
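For reference, the qsv suggestion from the quoted reply maps to a one-line ffmpeg preset in the Frigate config; assuming an H.264 camera stream, something like:

```yaml
ffmpeg:
  # Intel Quick Sync decode preset from the 0.15 docs;
  # use preset-intel-qsv-h265 if the stream is H.265
  hwaccel_args: preset-intel-qsv-h264
```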
-
I'm not sure how feasible this is; I truly wish I knew enough about coding to add features. Just some random thoughts.
Hailo-8 - would love to see it in Frigate+ one day; is it on the horizon? When I get four more cameras, hopefully in a month or two, I'll start collecting snapshots and get a Frigate+ subscription, depending on my health.
Masking - Trees make a lot of movement in the wind, etc.; a colour-range mask would be awesome.
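For what it's worth, the closest thing that exists today, as far as I know, is the polygon motion mask, which can at least silence the tree areas entirely; a rough sketch with a made-up camera name and coordinates:

```yaml
cameras:
  backyard:
    motion:
      mask:
        # polygon over the tree line, x,y pairs relative to frame size (made-up values)
        - 0.0,0.0,0.4,0.0,0.4,0.2,0.0,0.2
```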
Range - I guess this is going to be super, super hard due to focal-length differences and lens distortion. Say you're detecting a cat: I don't know what a cat's bounding box is, 20,000 pixels? If it moves closer, that becomes 40,000, right? (Guesstimate numbers.) The edge of the lens bends objects, so distortion correction would be cool at something like 2.8 mm, but I don't know if that has much effect on tracking and I don't want the devs to be overloaded. The idea: a tape measure and a plane of reference, marked out in metric (come on USA, upgrade to metric) at each 1 m interval from the camera out to say 20-30 m, so the expected bounding box for a cat or person varies with where they are standing. Maybe also a reference angle of tilt to azimuth, since objects might be wider at the top of the frame than at the bottom. I'm guessing this is extremely hard and seriously overthinking it, so if it's not feasible, drop it to the bottom of the list of ideas or don't worry. I'm not sure if Dahua's varifocal cameras have a way to report their current FOV; from what I've read via ChatGPT on feasibility, each camera would need that to calculate how to size the box.
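In the meantime, the per-object area filters seem like the nearest existing knob for the size-vs-distance problem; the numbers here are purely illustrative:

```yaml
objects:
  filters:
    cat:
      min_area: 2000    # ignore boxes smaller than this (pixels)
      max_area: 60000   # ignore boxes larger than this (pixels)
```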
Bugs love IR - I don't know how this gets handled except with IR floodlights mounted away from the camera, since bugs attracted to the onboard IR trigger motion. No idea how to fix this apart from disabling the onboard IR and using an external IR floodlight (how far away?). I've been trying to find a mouse in a somewhat cluttered shed; I can see they eat the poison but I never catch them on camera. Might be too hard.
Thank you sooooo much to the team, though; person detection seems very accurate. I've still got lots to learn. I did have a question, if it's OK to ask here: I can add a second USB Coral, and my GPU is passed through to a Windows VM for testing Blue Iris and Stable Diffusion. What options are there, and what's the best way to do face tracking and pass a "Joe Bloe detected" event through to Home Assistant?
Is the Dahua/EmpireTech 5542 S3 metadata still a mystery to all third parties? It's 2k to get the NVR; will I gain much from it? Maybe an easier phone app, or face detect, AcuPick, etc.? Is this the kind of thing that probably won't be reverse engineered?
I was thinking Frigate + Blue Iris + CPAI or DeepStack, or something. I am so disappointed I can't share GPUs with other VMs on consumer hardware.
Finally, is Frigate going to be an add-on for NVRs, or will it later add NVR functions like an Android app and better search? (Either way is fine; it's your app and I am privileged to use it.) When my health gets better I'd like to learn some coding and eventually contribute to apps like this. What languages does Frigate use, and what's the timescale to learn them?