It’s better than Google, but it’s not good. The only option is self-hosted SearXNG.
Been using a self-hosted instance of SearXNG, but recently moved away from SearXNG in general. Why? Searches far too often ended in timeouts from the upstream search engines. It was frustrating and they never fixed it. In terms of privacy it’s top notch, though.
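For anyone hitting the same thing: if I remember the key names right, the per-engine timeout can be raised in SearXNG’s settings.yml. A rough sketch, values just an example:

outgoing:
  request_timeout: 6.0       # seconds to wait per engine; the default is lower
  max_request_timeout: 15.0  # hard upper bound for engines that ask for more

Longer timeouts trade speed for fewer dropped engines, so it’s more of a band-aid than a fix.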
I moved away from it because it seemed to have a memory leak and the Docker container would eventually crash. I never really investigated what was causing it.
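If I ever spin it up again, the first step would be checking whether the kernel was OOM-killing it. Assuming the container is named searxng, something like:

docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' searxng   # true plus exit code 137 points at an OOM kill
dmesg | grep -i 'out of memory'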
I’ve been meaning to give Whoogle a try:
https://github.com/benbusby/whoogle-search
That’s weird, I never had memory leak problems.
I don’t like Whoogle because of its UI for image searches; imo it’s really bad. Image search is also the reason I don’t use Brave Search: for images it redirects you to Google or Bing. What’s the point of being “a privacy respecting search engine” when you get redirected to Google and Bing, the worst search engines in terms of privacy?
How much RAM is yours using?
None - I deleted it because I don’t use it anymore. It wasn’t much, though, and it never bloated, even after running for over a month.
Did you actually check on it, or are you just saying that because you never noticed a problem? No offense, but you haven’t verified anything; you just seem to be recalling that there wasn’t a problem.
I mean, what is “not much” to you? I asked someone else what theirs was using, someone who also thought it was running well, and after only one day of uptime it was idle at 350MB of RAM, which is way too high for an idle search engine in my opinion. Another thing: if you were running it in a Docker container with restart=unless-stopped, it could have been restarting without you even knowing about it; there’s a quick way to check, see below.
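For anyone who wants to rule that out: Docker keeps a restart count and the last start time for each container, so, assuming the container is named searxng:

docker inspect --format 'restarts={{.RestartCount}} started={{.State.StartedAt}}' searxng

If the restart count is above zero, or StartedAt is more recent than your last deliberate restart, it has been bouncing behind your back.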
No, I didn’t check, because it never got big enough to make me suspect something might be wrong. Maybe it uses more RAM than other self-hosted search engines, but it never leaked memory in the sense of using more and more RAM the longer the instance ran. Just because it uses a lot of RAM doesn’t mean it’s leaking memory. It might just be developed badly, without much care for your resources.
And that’s all I needed to know. That’s exactly what I thought: you didn’t take the time to verify anything.
Interesting, my instance has been rock solid. When did you give it a try?
It was probably about a year ago now, so I should spin it up again. I think I’ll do that today.
How much RAM does yours use?
I’m on mobile for the holiday right now, so I’m not 100% sure on utilization. But it’s running on a 2-core, 4GB Ubuntu machine along with Caddy and Redis. I also have a Lemmy instance on that same machine, so Lemmy, Lemmy-UI, Postgres, and pictrs all fit on that machine and work for my daily use.
I can get exact utilization later tonight.
docker stats --no-stream
CONTAINER ID   NAME               CPU %   MEM USAGE / LIMIT    MEM %   NET I/O           BLOCK I/O         PIDS
dd2a774ad1a6   lemmy_lemmy-ui_1   0.00%   42.5MiB / 3.82GiB    1.09%   418kB / 7.24MB    2.65MB / 0B       15
718629b5514f   lemmy_lemmy_1      0.03%   6.82MiB / 3.82GiB    0.17%   1.52MB / 1.48MB   864kB / 0B        5
0c944dccc1e1   lemmy_postfix_1    0.00%   4.762MiB / 3.82GiB   0.12%   3.74kB / 0B       0B / 762kB        7
7f939790561c   lemmy_postgres_1   0.00%   46.45MiB / 3.82GiB   1.19%   1.09MB / 1.44MB   24.6kB / 2.16MB   9
14c7db5ae7ec   lemmy_pictrs_1     0.08%   23.36MiB / 690MiB    3.39%   3.81kB / 0B       0B / 0B           13
3695b8a0b67a   caddy              0.00%   9.984MiB / 3.82GiB   0.26%   0B / 0B           34.1MB / 12.3kB   9
12c8bd7c1cdf   redis              0.21%   3.555MiB / 3.82GiB   0.09%   101kB / 78.8kB    7.06MB / 0B       5
f03c3298de46   searxng            0.01%   349.9MiB / 3.82GiB   8.94%   9.21MB / 3.82MB   61.4MB / 61.4kB   25
I guess it is the largest consumer of memory. Unfortunately I rebooted yesterday: while setting up Lemmy I noticed there were a decent number of OS security updates. Otherwise I probably would have had stats from something like six months of uptime. I’ll keep an eye on it and see if it balloons.
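Probably with something crude like the loop below, which just logs the container’s memory reading once an hour (container name searxng assumed) so I can see whether it climbs:

while true; do
  # append a timestamped memory reading for the searxng container
  echo "$(date -Is) $(docker stats --no-stream --format '{{.MemUsage}}' searxng)" >> searxng-mem.log
  sleep 3600
done

A reading that holds steady would back up the “it just uses a lot” theory; one that keeps creeping up would point at a real leak.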
Thanks for reporting back on it.
Yeah, even so, I just don’t understand why a search engine sitting idle consumes that much memory. The largest consumers of memory for me are Omada Controller, Airsonic-Advanced, HomeAssistant, Lidarr, Sonarr, and Paperless… all things that process a lot of data or are written in Java, so it’s to be expected that they use more resources.
I would see it grow to 600MB+ and occasionally crash, so I just decided to use public instances.