

If you dislike decentralization, then that’s like being on an instance that’s banned every other instance.
Do you mean that you dislike defederation?
Wow, there isn’t a single solution in here with the obvious answer?
You’ll need a domain name. It doesn’t have to be a paid one - you can use DuckDNS. Note that whoever hosts your DNS needs to support dynamic DNS. I use Cloudflare for this for free (not their other services), even though I bought my domains from Namecheap.
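If you’re curious what the dynamic DNS piece actually does, here’s a minimal sketch against Cloudflare’s DNS API using the `requests` package. The zone ID, record ID, token, and hostname are placeholders you’d pull from your own account:

```python
# Minimal dynamic-DNS updater sketch for Cloudflare.
# ZONE_ID, RECORD_ID, API_TOKEN, and HOSTNAME are placeholders.
import requests

ZONE_ID = "your-zone-id"        # from your Cloudflare dashboard
RECORD_ID = "your-record-id"    # the A record to keep updated
API_TOKEN = "your-api-token"    # needs DNS edit permission
HOSTNAME = "jellyfin.example.com"

def current_public_ip() -> str:
    # Any "what is my IP" service works; ipify returns it as plain text.
    return requests.get("https://api.ipify.org", timeout=10).text.strip()

def update_record(ip: str) -> None:
    # Overwrite the A record with the current public IP.
    url = f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records/{RECORD_ID}"
    resp = requests.put(
        url,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"type": "A", "name": HOSTNAME, "content": ip, "ttl": 300, "proxied": False},
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    update_record(current_public_ip())
```

Run something like that from cron every few minutes and your domain follows your home IP.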
Then, you can either set up Let’s Encrypt on the device and have it generate certs in a location Jellyfin knows about (not sure what this entails exactly, as I don’t use this approach) or you can do what I do:
On your router, forward port 443 to your Pi’s secure port (which, for simplicity’s sake, should also be 443). You’ll likely also need to forward port 80 so Let’s Encrypt can complete its HTTP-01 verification challenge.
If you want to use Jellyfin while on your network and your router doesn’t support NAT loopback, you can use the server’s IP address and expose Jellyfin’s HTTP port (8096 by default) - just make sure not to forward that port from the router. Local transfers will be unencrypted if you do this, though.
Make sure you have secure passwords in Jellyfin. Note that if a vulnerability is found in Jellyfin or Traefik, you’re exposed until it’s patched, so keep your software updated.
If you use Docker, I can share some config info with you on how to set this all up - Traefik, Jellyfin, and a dynamic DNS service - as docker-compose services.
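As a rough sketch of what I mean (not my exact config - the image tags, domain, email, and paths are placeholders, and you’d add whichever dynamic DNS container or cron job you prefer):

```yaml
# Sketch of a Traefik + Jellyfin compose file. The domain, email, and
# volume paths are placeholders. Traefik's "letsencrypt" resolver is
# defined via the CLI flags below.
services:
  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.letsencrypt.acme.httpchallenge=true
      - --certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web
      - --certificatesresolvers.letsencrypt.acme.email=you@example.com
      - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
    ports:
      - "443:443"
      - "80:80"   # needed for the HTTP-01 challenge
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt

  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - ./config:/config
      - ./media:/media:ro
    labels:
      - traefik.enable=true
      - traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)
      - traefik.http.routers.jellyfin.entrypoints=websecure
      - traefik.http.routers.jellyfin.tls.certresolver=letsencrypt
      - traefik.http.services.jellyfin.loadbalancer.server.port=8096
```

Traefik watches the Docker socket, sees the Jellyfin labels, requests a Let’s Encrypt cert for the host, and proxies 443 to Jellyfin’s internal 8096.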
Why should we know this?
Not watching that video, for a number of reasons: ten seconds in, they hadn’t said anything of substance; their first claim was incorrect (Amazon does not prohibit the use of gen AI in books, nor does it require that its use be disclosed to the public, no matter how much you might wish it did); and there was nothing of substance in the description, which in cases like this generally means the video itself will be largely devoid of substance.
What books is the Math Sorcerer selling? Are they the ones on Amazon linked from their page? Are they selling all of those or just promoting most of them?
Why do we think they were generated with AI?
When you say “generated with AI,” what do you mean?
And what’s the result? Are the books misleading in some way? That’s the most legitimate concern I can think of (I’m sure the people screaming that AI isn’t fair use would disagree, but if that’s the concern, settle it in court).
Look up “LLM quantization.” The idea is that each parameter is a number; by default they’re stored with 16 bits of precision, but if you store them at lower precision, you use less space and lose some accuracy while keeping the same number of parameters. There’s not much quality loss going from 16 bits to 8, but it gets more noticeable as you go lower. (That said, there are ternary models being trained from scratch that use roughly 1.58 bits per parameter and are allegedly about as good as fp16 models of the same parameter count.)
If you’re using a 4-bit quantization, you need roughly half the parameter count, in GB, of VRAM (plus some overhead for context). Q4_K_M is better than Q4_0, but also a bit larger. Ollama generally defaults to Q4_K_M. If you can handle a higher quantization, Q6_K is generally best. If you can’t quite fit that, Q5_K_M is generally better than any other option, followed by Q5_K_S.
For example, Llama 3.3 70B has 70.6 billion parameters, and you can ballpark the size of each of its quantizations straight from the bits per parameter.
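As a back-of-envelope sketch (the effective bits-per-parameter figures are my rough assumptions, and real GGUF files run a bit larger since some tensors stay at higher precision):

```python
# Back-of-envelope sizes for a 70.6B-parameter model at common quantization
# levels. Effective bits per parameter are rough assumptions; K-quants keep
# some tensors at higher precision, so real files are somewhat larger.
PARAMS = 70.6e9

BITS = {"fp16": 16, "Q8_0": 8.5, "Q6_K": 6.6, "Q5_K_M": 5.7, "Q4_K_M": 4.8}

for name, bits in BITS.items():
    gib = PARAMS * bits / 8 / 1024**3
    print(f"{name:7s} ~{gib:5.1f} GiB")

# fp16    ~131.5 GiB
# Q8_0    ~ 69.9 GiB
# Q6_K    ~ 54.3 GiB
# Q5_K_M  ~ 46.9 GiB
# Q4_K_M  ~ 39.5 GiB
```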
This is why I run a lot of Q4_K_M 70B models on two 3090s.
Generally speaking, there’s no perceptible quality drop going from 8-bit quantization down to Q6_K (though I’ve heard this is less true with MoE models). Below Q6 there’s a bit of a drop at Q5 and again at Q4, but the models are still decent. Below 4-bit quantizations, you can generally get better results from a smaller-parameter model at a higher quantization.
TheBloke on Hugging Face has a lot of GGUF quantization repos, and most, if not all, of them have a blurb about the different quantization types and which are recommended. When Ollama.com doesn’t have a model I want, I can generally find one there.
I recommend a used 3090, as that has 24 GB of VRAM and can generally be found for $800ish or less (at least when I last checked, in February). It’s much cheaper than a 4090, and while admittedly more expensive than the inexpensive 24 GB Nvidia Tesla card (the P40?), it also has much better performance and CUDA support.
I have dual 3090s so my performance won’t translate directly to what a single GPU would get, but it’s pretty easy to find stats on 3090 performance.
The above post says it has support for Ollama, so I don’t think this is the case… but the instructions in the Readme do make it seem like it’s dependent on OpenAI.
stuck with the GPL forever
If you accept a patch and don’t have the ability to relicense it, you can remove it and re-license the new codebase. You can even re-implement changes made by the patch in many cases, whether those changes are bug fixes or new features.
If you re-implement the change, you do need to ensure this is done in a way that doesn’t cause it to become a derivative work, but it’s much easier if you have copyright to 99% of a work already and only need to re-implement 1% or so. If you’ve received substantial community contributions and the community is opposed to relicensing, it will be much harder to do so.
A clean room implementation - where the person rewriting the code doesn’t look at the original code, and is only given a description of the functionality - which can include a detailed description of the algorithm - is the most defensible way to perform such a rewrite and relicense, but it’s not the only option.
You should generally consult an attorney when relicensing and shouldn’t just do it casually. But a single patch certainly doesn’t mean you’re locked in forever.
16 GB of RAM, though? Is it even optimized for the Ryzen 9950X3D?
And a 4 TB SSD - not even necessarily NVMe?
Doesn’t seem high powered to me.
Are you saying that NAT isn’t effectively a firewall or that a NAT firewall isn’t effectively a firewall?
Is there a way to use symlinks instead? I’d think it would be possible, even with Docker - it would just require the torrent directory to be mounted read-only at the same path in every Docker container that has symlinks to files on it (see the sketch below).
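Here’s a minimal sketch of that idea, with hypothetical paths and service names. The key is that the torrent directory is mounted at the identical path everywhere, so absolute symlinks resolve inside each container:

```yaml
# Sketch: mount the torrent directory at the SAME path in every container,
# so symlinks created under /data/media still resolve. Paths and service
# names are hypothetical.
services:
  qbittorrent:
    image: linuxserver/qbittorrent
    volumes:
      - /data/torrents:/data/torrents            # seeds the original files

  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - /data/torrents:/data/torrents:ro          # same path, read-only
      - /data/media:/data/media:ro                # symlinks into /data/torrents
```

The caveat is that the links need to target paths that exist inside the containers, so absolute symlinks like /data/torrents/… only work because every container sees that exact path.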
It’s more likely that this is being done to either:
Depending on setup, this can be true with Jellyfin, too. I have a domain registered, use dynamic DNS, and have Traefik direct a subdomain to my Jellyfin server. My mobile clients are configured using that domain; my local clients use the local static IP.
If my internet goes down, my mobile clients can’t connect, even on the LAN.
On the other hand, it is a conduit for censorship. If an admin doesn’t like what you post on another instance, then they can censor you everywhere.
Such a user can
If you do not understand why this is inappropriate, you are the problem.
Was it inappropriate for Elon Musk’s company to not properly secure the data of its customers? I would say so. It would therefore be appropriate for anyone harmed as a result to sue Tesla - or Elon directly - for any damages they suffered.
Further, “Whether another user actually downloaded the content that Meta made available” through torrenting “is irrelevant,” the authors alleged. “Meta ‘reproduced’ the works as soon as it made them available to other peers.”
Is there existing case law for what making something “available” means? If I say “Alright, I’ll send you this book if you want, just ask,” have I made it available? What if, when someone asks, I don’t actually send them anything?
I’m thinking outside of contexts of piracy and torrenting, to be clear - like if a software license requires you to make any changed versions available to anyone who uses the software. Can you say it’s available if your distribution platform is configured to prevent downloads?
If not, then why would it be any different when torrenting?
Meta ‘reproduced’ the works as soon as it made them available to other peers.
The argument that a copyrighted work has been reproduced the moment it’s “made available” is also perplexing when “made available” has such a low bar. If I post an ad on Craigslist offering to sell the Mona Lisa, have I reproduced it?
What if it was for a car?
I’m selling a brand new 2026 Alfa Romeo 4E, DM me your offers. I’ve now “reproduced” a car - come at me, MPAA.
Wasn’t the estimated delivery date much sooner when you first placed the order? Per Amazon’s stated policy, you should be eligible for a refund three days after that date.
Obviously it would be preferable for you to get it even sooner, but that’s still a lot better than two months from now.
If you have an email or any record of the original estimated date, contact Amazon CS and reference that. Don’t even mention the changed delivery - that’s not your problem, as you didn’t agree to a changed delivery date; you were promised delivery three days ago and haven’t received it.
Where did I contest your point?
The president won [the] popular vote
Only if you ignore the huge amounts of voter suppression. If you don’t, then he lost the popular vote and the electoral vote - netting 45.8% of the popular vote to Kamala’s 52.7%, and he earned at most (and probably less than) 252 electoral votes to Kamala’s 286.
If you’re in the US, automatic is fine. Manuals make up about 1% of new cars and maybe 4% of used cars here. It doesn’t hurt to know how to drive one, but it doesn’t benefit you much, either. I’ve driven a manual once, but it was a rental in another country. I’ve never been faced with needing - or even having the opportunity - to drive a manual in the US.
However, learning on a manual does make it easier to understand certain aspects of how cars work, even in automatics (less so in CVTs), so if you like understanding things, I recommend learning manual even in the US. You can still pick up that understanding driving automatics - it just takes a bit more effort.
Outside the US, in most places I know of, manual is the default. If manuals make up even 30% or so of cars where you live, I strongly suggest learning to drive one.
Retrieval-Augmented Generation (RAG) is probably the tech you’d want. A knowledge library is built from the documents you upload; when you ask a question, the library is searched and the most relevant passages are handed to the model along with your question.
NotebookLM by Google is an off-the-shelf tool specialized for this, but you can upload documents to ChatGPT, Copilot, Claude, etc., and get the same benefit.
If you self-host, Open WebUI with Ollama supports this, but it’s far from the only option.
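For anyone curious what RAG looks like under the hood, here’s a bare-bones sketch using the `ollama` Python package. The model names are just examples, and a real setup would add chunking, a vector database, and better ranking:

```python
# Bare-bones RAG sketch with the ollama Python package. Model names are
# examples; real systems chunk documents and use a vector DB for retrieval.
import ollama
import numpy as np

documents = [
    "Jellyfin is a free software media server.",
    "Traefik is a reverse proxy that can fetch Let's Encrypt certificates.",
    "GGUF is a file format for quantized LLM weights.",
]

def embed(text: str) -> np.ndarray:
    # nomic-embed-text is one commonly used local embedding model.
    result = ollama.embeddings(model="nomic-embed-text", prompt=text)
    return np.array(result["embedding"])

# Index: embed every document once, up front.
doc_vectors = [embed(doc) for doc in documents]

def answer(question: str) -> str:
    # Retrieve: rank documents by cosine similarity to the question.
    q = embed(question)
    sims = [v @ q / (np.linalg.norm(v) * np.linalg.norm(q)) for v in doc_vectors]
    best = documents[int(np.argmax(sims))]

    # Generate: hand the model the retrieved context plus the question.
    response = ollama.chat(
        model="llama3",
        messages=[{
            "role": "user",
            "content": f"Answer using this context:\n{best}\n\nQuestion: {question}",
        }],
    )
    return response["message"]["content"]

print(answer("What does Traefik do?"))
```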