• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: June 18th, 2023



  • The only issue with your second point is that it can eventually become a quagmire when you do need to upgrade it.

    I work for a very old company that held to that philosophy for many years. And while any individual component could be looked at and seen as running fine, when they finally decided it was time to upgrade, they were faced with needing to upgrade everything simultaneously.

    All of the tech was too old, so no current tech had the sort of backwards-compatible bridge that helps you move forward. It’s like figuring out how to get your telegraph system to also work on your WiFi network; nobody makes interfaces for that.

    Instead of slowly and gradually replacing components over time, they’re faced with a single major overhaul that’s put the entire company at risk, because they have to completely shut down for over a month.




  • Until TV is set up the same way Spotify/YouTube Music/Apple Music are, where you just pick one you like and listen to the same music the other platforms have, they’ll continue to have piracy problems.

    I currently pay more per month for the various components needed for highly effective pirating than I would for cable, and that’s purely because it offers a better experience. I can’t legally buy a Plex-like experience anywhere, at any price.

    Fix that and I’ll go legit just like I did for music.




  • Struggling to sort out my thoughts on this one.

    I’m not really sure comparing AI to a human artist learning from and being inspired by others quite fits, at least in the context of a commercial AI (one that a company charges others to use). It feels scummy for a company (a for-profit entity) to take training data from others without consent, and then turn around and charge people for the product they built on that content.

    That said, existing copyright law allows for ‘fair use’, which includes educational purposes. In that light, AI companies could be seen as a sort of AI school program. But the icky part to me is that AI is not a person. It can’t choose to leave the school. That school can then profit off that student forever and ever.

    I feel like the fair use argument for education applies to humans, not AI (at least not until they actually gain sapience). AI are machines that can be leveraged and exploited by the few and powerful, and that power shouldn’t come with us subsidizing their development.

    Though honestly it’s sort of a moot point, because it’s already done and we’re very unlikely to ever properly charge them now. And now that they have the start, they have a leg up on everyone else. So the morality of how it was built no longer really matters, unless we want to argue AI should all be open source or public domain.





  • My understanding from the Beehaw defederation is that more surgical moderation tools just don’t exist right now (and likely won’t for a while unless the two Lemmy devs get some major help). Admins really only have a single nuclear option to deal with other instances that aren’t able to tackle the bot problem.

    Personally I don’t see defederating as a bad thing. People and instances are working through who they want to be in their social network. The well-managed servers will eventually rise to the top, with the bot-infested and draconian ones falling into irrelevance.

    As a user, this will result in some growing pains, since Lemmy currently doesn’t offer a way to migrate your account. Personally I already have 3 Lemmy accounts. A good app front end that minimizes the friction of account switching would greatly ease these growing pains.






  • The goal should be to find a way to destroy the subreddit without getting removed as mods. That means killing user engagement through draconian mod rules, like an automod that bans everyone who comments.

    People will laugh at the John Oliver thing for a few days, but the joke will get stale; that’s when they need to stick to their guns and keep running it into the ground. I’d also suggest limiting posts to once an hour or something like that. Mods need to focus on making Reddit boring.


  • To add more context: currently Lemmy doesn’t offer great moderation tools. So when a relatively open instance like lemmy.world federates with Beehaw, Beehaw’s ability to shut down the ‘few bad apples’ coming from lemmy.world is rudimentary at best.

    At a certain point, admins just can’t keep up and have to make a judgment call: either accept that trolls and bad actors are going to get through, or cut off the source of the infection, regardless of whether that impacts regular users.

    Beehaw has already stated that they’re open to reconnecting once they have a better way of moderating and dealing with bad actors.