Meanwhile, in the engineering dungeon
ASRock has done me well for budget builds; ASUS is what I happened to upgrade to for midrange. Honestly I'm just being dramatic, I've just never cared for Gigabyte historically.
Good detective work! Adding liquefying thermal pads as a reason to avoid Gigabyte.
The pattern says liquid but the colors say heat damage. Both?
cinnamon, gnome, xfce? Many flavors of Mint
Ah, thanks. In my experience it's my AMD card causing the crashes with SD. CUDA is native to NVIDIA, hence the stability.
What a treat! I just got done setting up a second venv within the SD folder: one called amd-venv, the other nvidia-venv. I copied the webui.sh and webui-user.sh scripts and made separate flavors of those as well, pointing each at its respective venv (rough sketch below). Now if I just had my nvidia drivers working, I could probably set my power supply on fire running them in parallel.
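A minimal sketch of that layout, assuming the stock AUTOMATIC1111 scripts (webui.sh reads the venv path from the venv_dir variable in webui-user.sh); the amd/nvidia file names and the ROCm wheel index are just my own labels and guesses:

```bash
# From inside the stable-diffusion-webui folder: one venv per GPU stack.
python3 -m venv amd-venv
python3 -m venv nvidia-venv

# Separate launcher flavors; each copy of webui.sh gets edited to source
# its matching webui-user-*.sh instead of the default webui-user.sh.
cp webui.sh webui-amd.sh      && cp webui-user.sh webui-user-amd.sh
cp webui.sh webui-nvidia.sh   && cp webui-user.sh webui-user-nvidia.sh

# In webui-user-amd.sh (ROCm build of torch; the rocm version in the URL is a guess):
#   venv_dir="amd-venv"
#   export TORCH_COMMAND="pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm5.6"

# In webui-user-nvidia.sh (the default CUDA torch install is fine):
#   venv_dir="nvidia-venv"
```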
I had that concern as well, with it being a new card. It performs fine in gaming as well as in every glmark benchmark so far, so I have it chalked up to AMD support being in experimental status on Linux/SD. Any other stress tests you'd recommend while I'm in the return window!? lol
Thank you!! I may rely on this heavily; there are too many different drivers to try willy-nilly. I'm in the process of trying this guide/driver for now and will report back with my luck or misfortunes: https://hub.tcno.co/ai/stable-diffusion/automatic1111-fast/
I might take the docker route for the ease of troubleshooting if nothing else. So very sick of hard system freezes/crashes while kludging through the troubleshooting process. Any words of wisdom?
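For anyone else weighing it, a containerized run would look roughly like this; the image tag, mount path, and port are placeholders on my part, and the device flags are the usual ROCm passthrough ones:

```bash
# Pass the ROCm devices through to the container (standard ROCm docker flags),
# mount the webui folder, and expose the default Gradio port.
docker run -it --rm \
  --device=/dev/kfd --device=/dev/dri \
  --group-add video \
  --security-opt seccomp=unconfined \
  -v "$PWD/stable-diffusion-webui:/workspace" \
  -p 7860:7860 \
  rocm/pytorch:latest \
  bash
```

The appeal being that when something hard-locks, I'm only throwing away a container instead of kludging the host install back to life.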
Since only one of us is feeling helpful, here is a 6-minute video for the rest of us to enjoy: https://www.youtube.com/watch?v=lRBsmnBE9ZA
I started reading into the ONNX business here: https://rocm.blogs.amd.com/artificial-intelligence/stable-diffusion-onnx-runtime/README.html It didn't take long to see that was beyond me. Has anyone distilled an easy-to-use model converter/conversion process? One I saw required an HF token for the process, yeesh.
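The simplest route I've come across so far is Hugging Face's optimum exporter rather than the blog's hand-rolled process; a rough sketch (the model ID and output folder are just examples, and a public checkpoint shouldn't need that HF token):

```bash
# One-time install of the exporter tooling.
pip install "optimum[onnxruntime]"

# Export a Stable Diffusion checkpoint (UNet, VAE, text encoders) to ONNX in one shot.
optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd15-onnx/
```

No idea yet how well the result plays with the ROCm execution provider, so treat it as a starting point rather than a verdict.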
How bad are your crashes? Mine will either freeze the system entirely or crash the current lightdm session; sometimes it recovers, sometimes it freezes anyway, and it needs a power cycle to rescue it. What is the DE you speak of? Openbox?
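After the power cycle, the only forensics I know of is reading the previous boot's journal; a minimal sketch, assuming systemd-journald with persistent storage enabled:

```bash
# Error-level messages from the boot that froze (requires Storage=persistent
# in /etc/systemd/journald.conf so logs survive the reboot).
journalctl -b -1 -p err..alert

# GPU-related breadcrumbs usually land in the kernel ring buffer.
journalctl -b -1 -k | grep -iE "amdgpu|nvidia|gpu"
```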
Well, I finally got the nvidia card working to some extent. On the recommended driver it only works in lowvram; medvram maxes out VRAM too easily on this driver/CUDA version for whatever reason. Does anyone know the current best nvidia driver for SD on Linux? Perhaps 470, the other one offered by the LM driver manager…?
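For anyone comparing notes, this is where those knobs live on my end; the flags are the standard AUTOMATIC1111 ones and the query is plain nvidia-smi:

```bash
# Confirm which driver version and how much VRAM the card reports.
nvidia-smi --query-gpu=driver_version,name,memory.total --format=csv

# In webui-user.sh, the VRAM mode is picked via COMMANDLINE_ARGS, e.g.:
#   export COMMANDLINE_ARGS="--medvram"   # moderate VRAM savings
#   export COMMANDLINE_ARGS="--lowvram"   # aggressive savings, much slower
```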
So what was the conspiracy theory around TPM requirements, BitLocker, and Copilot? Some new privacy nightmare?
Are we going to war or is the author bad at writing?
Because some smart TVs will up and brick themselves by irreparably filling their storage with updates, to the point of no longer being able to install or even update anything on the TV whatsoever (THANK YOU, Samsung).
I'll definitely be keeping my nvidia card for AI/ML/CUDA purposes. It'll live in a dual-boot box for Windows gaming when necessary (Bigscreen Beyond, for now). I am curious to see what 16 GB of AMD VRAM will let me get up to anyway.
We are not alone then. Thanks for your input!
*checks notes* Shareholders?!