I do the majority of my Lemmy browsing on my own personal instance, and I’ve noticed that some threads are missing comments; in some large threads, large quantities of them. To be clear, I’m not talking about comments that were posted before you first subscribe to or discover a community from your instance. In this case, I noticed it with a lemmy.world thread that popped up less than a day ago, well after I subscribed.
At the time of writing, that thread has 361 comments. When I view the same thread on my instance, I can see 118. That’s a large swathe of missing content for just one thread. I can use the search feature to forcibly resolve a particular comment onto my instance and reply to it, but that defeats a lot of the purpose of having my own instance.
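For anyone who wants to script that manual workaround: the forced resolve goes through Lemmy’s `/api/v3/resolve_object` endpoint. A minimal sketch, assuming placeholder hostnames and a bearer-token auth style (older Lemmy versions passed the JWT differently, so check your version’s API docs):

```python
# Sketch: force-resolve a remote comment onto your own instance via
# /api/v3/resolve_object. The instance host and JWT are placeholders.
import json
import urllib.parse
import urllib.request

def resolve_endpoint(instance: str, object_url: str) -> str:
    """Build the resolve_object API URL for a remote comment/post URL."""
    query = urllib.parse.urlencode({"q": object_url})
    return f"https://{instance}/api/v3/resolve_object?{query}"

def resolve_object(instance: str, object_url: str, jwt: str) -> dict:
    """Ask our own instance to fetch and store the remote object (network call)."""
    req = urllib.request.Request(
        resolve_endpoint(instance, object_url),
        headers={"Authorization": f"Bearer {jwt}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

print(resolve_endpoint("example-instance.net", "https://lemmy.world/comment/123456"))
```

This is just the same thing the search box does, wrapped in a function so it could be looped over a list of comment URLs.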
So has anyone else noticed something similar happening? I know my instance hasn’t gone down since I created it, so it couldn’t be that.
This arises from the good ol’ issue of everybody migrating to the same three or four big servers, which end up overloaded with their own users and can’t keep up with sending updates to other instances.
I remember the same thing happening to Mastodon during the first few waves of exodus, until a combination of people not staying, stronger servers, and software improvements settled the issue.
I can barely get updates from lemmy.ml or lemmy.world
Beehaw seems to perform better.
About half of the communities on lemmy.ml I subscribed to are stuck on “Subscribe Pending”, and have been since I started this server.
I seriously thought I was alone with this issue, but it seems it’s fairly common for people hosting their own instances. Same as you guys: it won’t sync everything, and some communities are even “stuck” on posts from a day back, even though many new ones were posted since.
Kind of an off-topic question, but I guess it’s related? Is there anyone else who can’t pull a certain community from an instance? I can’t seem to pull !asklemmy@lemmy.world or anything from that community, including posts and comments. No matter how many times I try, it won’t populate on my instance.
EDIT: Caught this in my logs:
```
lemmy | 2023-06-20T08:48:21.353798Z ERROR HTTP request{http.method=GET http.scheme="https" http.host=versalife.duckdns.org http.target=/api/v3/ws otel.kind="server" request_id=cf48b226-cba2-434a-8011-12388c351a7c http.status_code=101 otel.status_code="OK"}: lemmy_server::api_routes_websocket: couldnt_find_object: Failed to resolve actor for asklemmy .world
```
EDIT2: Apparently it’s a known issue with !asklemmy@lemmy.world, and a bug to be fixed in a future release.
I’ve noticed something similar on my instance in some cases as well. Nothing obvious logged as errors either. It just seems like the comment was never sent. In my case, CPU usage is minimal, so it doesn’t seem like a resource issue on the receiving side.
I suspect it may be a resource issue on the sending side; potentially it’s not able to keep up with the number of subscribers. I know there was some discussion from the devs around the number of federation workers needing to be increased to keep up, so that’s another possibility.
It’s definitely problematic though. I was contemplating implementing some kind of “resync this entire post and all comments” feature via the Lemmy API to get things back in sync. But if it is a sending-server resource issue, I’m hesitant to add a bunch more API calls to the mix. I think some kind of resync functionality will be necessary in the end.
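As a thought experiment, such a resync might boil down to diffing the comment URLs the origin instance knows about against what we have locally, then resolving the gap. The helper names, hosts, and the `resolve` callback below are all my own invention, not anything Lemmy ships:

```python
# Sketch of a "resync this post" pass: list comment ap_ids on the origin
# instance, diff against what exists locally, resolve the missing ones.
# The actual fetch/resolve calls are left as stubs here.
from typing import Callable, Iterable

def missing_comments(origin_urls: Iterable[str], local_urls: Iterable[str]) -> list[str]:
    """Comment ap_ids present on the origin instance but absent locally."""
    return sorted(set(origin_urls) - set(local_urls))

def resync_post(origin_urls, local_urls, resolve: Callable[[str], None]) -> int:
    """Call resolve(url) (e.g. hitting /api/v3/resolve_object) per missing comment."""
    missing = missing_comments(origin_urls, local_urls)
    for url in missing:
        resolve(url)  # one extra API hit per missing comment -- hence the hesitation
    return len(missing)

# Toy run with fake data instead of real API responses:
origin = ["https://lemmy.world/comment/1", "https://lemmy.world/comment/2"]
local = ["https://lemmy.world/comment/1"]
print(missing_comments(origin, local))  # -> ['https://lemmy.world/comment/2']
```

The cost concern from the comment above shows up directly in the loop: every missing comment is one more request against the already-struggling sender.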
I haven’t noticed it happening, but I haven’t checked much.
What I have noticed is that some of the larger, overloaded instances can be slow: slow to post comments to, slow to subscribe to, slow to post threads on, etc., especially from a separate federated instance.
Lemmy.world is easily the one I’ve noticed it with the most, along with lemmy.ml and occasionally Beehaw (but much less so).
My guess is that in general those instances may be slow to sync/update data or respond.
I’m also seeing this issue on the two instances you’ve mentioned. I’m not sure if it’s just an overload issue, or if there’s a more fundamental issue with the way I’m setting things up. One way around it: if I see a comment I really want to interact with from outside my own instance, I can copy the link from the fediverse icon and then search for it. The comment (along with its parents) will eventually pop up on my instance. Not ideal, since I’d still have to venture out of my own instance to discover said comment chain, but at least it provides a way to interact for now.
I would just give it time. I think those instances have some scaling issues and things take time to sync.
Do you have other users on your instance?
I noticed it took a day or two to “catch up” as I added and federated with new communities on these instances.
Again, I haven’t really dug in. They have seemed okay (I do have accounts on those instances too). It seems that once everything is “caught up” and it’s just incremental updates, things go smoother.
But are you seeing any resource constraints on your instance? Like cpu or ram?
All by myself. Plenty of room for activity. We’ll see if it catches up or just ends up creating a larger divergence! And yeah, I do have an account on lemmy.world as well, so it’s just an extra song and dance for now.
Oh I see your account is only 14 hours old. Yeah I would give it another 24-36 hours to do pulls and look then.
Every time I add a community, it starts with all the posts showing 0 comments. Then after a while they sync up. I used fediverse.net to just start pulling in all sorts of communities. But at this point it seems okay. My instance has been up for a few days now.
New instances are popping up all over so those bigger ones have a lot of servers syncing with them.
Yeah, there are ways around the de-sync (albeit super manual) for now, so I’m just waiting and seeing for the time being :)
@freeman @chiisana how do you pull instances from fediverse.net?
I’m not sure what you mean by pull instances. I’m really brand spanking new at this. Sorry!
So, looking more: yeah, it may be that more active threads are the ones that fall behind. Here’s an example that’s out of sync on my instance right now.
I’ve noticed the same situation in some threads on my own instance too. But I’m under the impression that it might just be a backlog on the responsible instance that’s supposed to send out the federated content. I’ve noticed this when having my home feed set to New and then suddenly seeing like thirty posts from lemmy.world come across all at once with widely varied timestamps.
I suppose the best way to test whether this is the case would be to note down any threads that are missing substantial numbers of comments on your local server, then check back on them periodically to see if and when they start to fill in.
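That periodic check could be automated with the single-post endpoint, `GET /api/v3/post?id=…`, on your own instance. A rough sketch; the host, post ID, and the exact response shape I’m unpacking are assumptions worth verifying against the API docs for your version:

```python
# Sketch: poll our own instance for a post's comment count to see whether
# a backlogged thread is filling in over time. Host/post ID are placeholders.
import json
import urllib.parse
import urllib.request

def post_endpoint(instance: str, post_id: int) -> str:
    """Build the GET /api/v3/post URL for a local post ID."""
    return f"https://{instance}/api/v3/post?" + urllib.parse.urlencode({"id": post_id})

def comment_count(post_response: dict) -> int:
    """Pull the comment counter out of a GetPost-style response."""
    return post_response["post_view"]["counts"]["comments"]

def fetch_count(instance: str, post_id: int) -> int:
    """Network call: fetch the post and return its current comment count."""
    with urllib.request.urlopen(post_endpoint(instance, post_id)) as resp:
        return comment_count(json.load(resp))

# Offline check against a fabricated response shaped like the API's:
sample = {"post_view": {"counts": {"comments": 118}}}
print(comment_count(sample))  # -> 118
```

Run `fetch_count` on a schedule (cron, say) and log the numbers; if they creep toward the origin instance’s count, it’s a backlog rather than lost messages.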
Does your server have enough power and workers to handle all the federated messages? Or is it constantly at 100% CPU?
The machine is a dedicated server with 6 cores / 12 threads, all of which are usually under 10% utilization. Load averages are currently 0.35, 0.5, 0.6. Maybe I need to add more workers? There should be plenty of raw power to handle it.
Yeah, that sounds like enough to handle the load. How many workers do you use? And do you see any errors in your logs about handling messages? You could also try searching for that particular thread to see if all replies are handled correctly.
Update: Did a `-f` watch of the logs for WARN messages while upping worker counts. 1024 seemed to be the sweet spot. Upped further to 1500, and the warnings for expired headers have largely stopped. So it seems this was the solution. Thanks for your help!
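For anyone else chasing the same expired-header warnings: on the Lemmy versions current when this thread was written, the relevant setting lives under `federation` in `lemmy.hjson`. A minimal fragment, assuming that layout (check it against your own version’s default config):

```hjson
{
  # lemmy.hjson (fragment) -- federation section only
  federation: {
    enabled: true
    # default is much lower; 1024-1500 is what stopped the
    # expired-header warnings on the instance in this thread
    worker_count: 1500
  }
}
```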
I have the same issue, and I also get the warning for the expired headers. I have tried increasing federation.worker_count (to 99999) and nginx workers (to 10000), but the issue still occurs for me.
There are also a lot of comments missing for me on my own instance; I have to view this post on lemmy.world to see all of them.