autonomoususer

joined 2 years ago
[–] autonomoususer@lemmy.world 7 points 5 days ago

Google must be dismantled before they do this again.

[–] autonomoususer@lemmy.world -1 points 1 week ago (3 children)

Someone else's computer.

[–] autonomoususer@lemmy.world 1 points 1 week ago* (last edited 1 week ago) (5 children)

We can end-to-end encrypt mail but dudes are out here raw dogging VPSs like it’s 1998.

[–] autonomoususer@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

That doesn't stop others from seeing what you do on their computers. If you don't care, no one's stopping you.

[–] autonomoususer@lemmy.world 0 points 1 week ago* (last edited 1 week ago) (1 children)

You don't know the S in HTTPS?

[–] autonomoususer@lemmy.world 0 points 1 week ago* (last edited 1 week ago) (3 children)

Privacy? Anonymity? lmao, it's their computer. They see everything you do.

[–] autonomoususer@lemmy.world -1 points 1 week ago

Their computer, their data.

[–] autonomoususer@lemmy.world -1 points 1 week ago* (last edited 1 week ago) (16 children)

lmao, people out here on VPSs acting like they went from renting to owning, when really they just rented the motherboard, the case, the fans 🤣🤣

OP get an end-to-end encrypted remote backup app.

[–] autonomoususer@lemmy.world 5 points 1 month ago* (last edited 1 month ago)

Great! We do not control GitHub, anti-libre software.

Libre software is unstoppable.

[–] autonomoususer@lemmy.world -5 points 2 months ago* (last edited 2 months ago) (3 children)

Open WebUI is proprietary software. 🚩

 

I really don’t get why so many people are turning this into a privacy versus anonymity debate when the real problem is censorship.

Yes, Signal needs a phone number to sign up, but replacing that with an email or username doesn’t make it anonymous. The real issue is that governments are blocking the registration SMS, so people can’t even sign up for the app in the first place.

Sure, there are workarounds, but most people aren’t going to jump through all those extra hoops just to use an app. If we want to spread privacy, how do we do that when Signal's phone number requirement is actively working against us?

Instead of arguing over privacy versus anonymity, shouldn’t we focus on making sure everyone can access Signal without issues? What do you think?

 

I’ve been seeing this more and more in comments, and it’s got me wondering just how big this issue really is. A lot of people feel trapped in apps like Discord, WhatsApp, and Instagram, but can’t get their friends to leave.

It’s really annoying when you suggest trying something new, whether it’s a different app or just using these platforms less, but sometimes it can feel like no one wants to go first.

So I’m curious, what apps do you feel most trapped in? And have you tried convincing your friends to leave them? What happened? Is it an issue for you, or are you just going along with the flow?

Looking forward to hearing if this is as common as it feels!

 

cross-posted from: https://lemmy.world/post/28493612

Open WebUI lets you download and run large language models (LLMs) on your device using Ollama.

Install Ollama

See this guide: https://lemmy.world/post/27013201

Install Docker (recommended Open WebUI installation method)

  1. Open Console, type the following command and press return. This may ask for your password but not show you typing it.
sudo pacman -S docker
  2. Enable the Docker service [on-device and runs in the background] to start with your device and start it now.
sudo systemctl enable --now docker
  3. Allow your current user to use Docker.
sudo usermod -aG docker $(whoami)
  4. Log out and log in again for the previous command to take effect.
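After logging back in, the steps above can be sanity-checked before moving on (a sketch; output depends on your system):

```shell
# Confirm your user is now in the docker group
groups | grep -q docker && echo "in docker group"

# Confirm the Docker daemon is running and reachable without sudo
docker info --format '{{.ServerVersion}}'
```

If the second command asks for elevated permissions or fails to connect, the group change or the service enable step did not take effect.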

Install Open WebUI on Docker

  1. Check whether your device has an NVIDIA GPU.
  2. Use only one of the following commands.

Your device has an NVIDIA GPU:

docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda

Your device has no NVIDIA GPU:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
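Either way, you can confirm the container came up before opening a browser (a sketch; the container name and port come from the commands above):

```shell
# Show the running container, its health and its port mapping
docker ps --filter name=open-webui --format '{{.Names}} {{.Status}} {{.Ports}}'

# Once startup finishes, the web interface answers on port 3000
curl -sf http://localhost:3000 > /dev/null && echo "Open WebUI is up"
```

The first startup can take a minute or two while the image unpacks, so a failed curl right away is not necessarily a problem.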

Configure Ollama access

  1. Edit the Ollama service file. This uses the text editor set in the $SYSTEMD_EDITOR environment variable.
sudo systemctl edit ollama.service
  2. Add the following, save and exit.
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
  3. Restart the Ollama service.
sudo systemctl restart ollama
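Setting OLLAMA_HOST=0.0.0.0 is what lets the Open WebUI container reach Ollama on the host through host.docker.internal; with the default loopback-only binding that connection fails. To confirm the override took effect (a sketch; 11434 is Ollama's default port):

```shell
# The service should now be bound to 0.0.0.0:11434, not 127.0.0.1:11434
ss -tlnp | grep 11434

# The API should still answer locally
curl -s http://localhost:11434
```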

Get automatic updates for Open WebUI (not models, Ollama or Docker)

  1. Create a new service file to get updates using Watchtower once every time Docker starts.
sudoedit /etc/systemd/system/watchtower-open-webui.service
  2. Add the following, save and exit.
[Unit]
Description=Watchtower Open WebUI
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
  3. Enable this new service to start with your device and start it now.
sudo systemctl enable --now watchtower-open-webui
  4. (Optional) Run the same update check manually at any time while Docker is running.
docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui
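If you would rather run these update checks on a schedule than by hand, a systemd timer can trigger the same one-shot service, for example daily (a sketch; the timer name must pair with the service name, and the daily interval is an assumption):

```shell
# Create a matching timer unit for watchtower-open-webui.service
sudo tee /etc/systemd/system/watchtower-open-webui.timer > /dev/null <<'EOF'
[Unit]
Description=Daily Watchtower run for Open WebUI

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
EOF

# Start the timer now and at every boot
sudo systemctl enable --now watchtower-open-webui.timer
```

One caveat: RemainAfterExit=true in the service above keeps it "active" after its first run, so later timer firings would be skipped; drop that line from the service if you use the timer.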

Use Open WebUI

  1. Open localhost:3000 in a web browser.
  2. Create an on-device Open WebUI account when prompted.
 

cross-posted from: https://lemmy.world/post/27088416

This is an update to a previous post found at https://lemmy.world/post/27013201


Ollama uses the AMD ROCm library, which works well with many AMD GPUs not listed as compatible when you force an LLVM target.

The original Ollama documentation is wrong: the following cannot be set for individual GPUs, only for all or none, as shown at github.com/ollama/ollama/issues/8473

AMD GPU issue fix

  1. Check your GPU is not already listed as compatible at github.com/ollama/ollama/blob/main/docs/gpu.md#linux-support
  2. Edit the Ollama service file. This uses the text editor set in the $SYSTEMD_EDITOR environment variable.
sudo systemctl edit ollama.service
  3. Add the following, save and exit. You can try different versions as shown at github.com/ollama/ollama/blob/main/docs/gpu.md#overrides-on-linux
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
  4. Restart the Ollama service.
sudo systemctl restart ollama
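To check whether the override was picked up, you can compare your GPU's actual ISA against what Ollama detects after the restart (a sketch; rocminfo comes from the rocminfo package and is an assumption):

```shell
# The actual gfx target of your GPU (the value being overridden)
rocminfo | grep -m1 gfx

# Ollama's log reports the GPU it detected on startup
journalctl -u ollama -n 50 --no-pager | grep -i gpu
```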
38
submitted 8 months ago* (last edited 7 months ago) by autonomoususer@lemmy.world to c/selfhosted@lemmy.world
 

cross-posted from: https://lemmy.world/post/27013201

Ollama lets you download and run large language models (LLMs) on your device.

Install Ollama on Arch Linux

  1. Check whether your device has an AMD GPU, NVIDIA GPU, or no GPU. A GPU is recommended but not required.
  2. Open Console, type only one of the following commands and press return. This may ask for your password but not show you typing it.
sudo pacman -S ollama-rocm    # for AMD GPU
sudo pacman -S ollama-cuda    # for NVIDIA GPU
sudo pacman -S ollama         # for no GPU (for CPU)
  3. Enable the Ollama service [on-device and runs in the background] to start with your device and start it now.
sudo systemctl enable --now ollama

Test Ollama alone

  1. Open localhost:11434 in a web browser and you should see Ollama is running. This shows Ollama is installed and its service is running.
  2. Run ollama run deepseek-r1 in one console and ollama ps in another to download and run the DeepSeek R1 model, and to see whether Ollama is using your slow CPU or fast GPU.
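The same checks can be scripted without opening a browser by querying Ollama's HTTP API directly (a sketch; the /api/version and /api/ps endpoints are part of Ollama's API):

```shell
# Plain-text liveness check, same as the browser test
curl -s http://localhost:11434

# JSON version info from the service
curl -s http://localhost:11434/api/version

# Which models are loaded and whether they sit in CPU or GPU memory
curl -s http://localhost:11434/api/ps
```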

AMD GPU issue fix

https://lemmy.world/post/27088416

Use with Open WebUI

See this guide: https://lemmy.world/post/28493612
