• 15 Posts
  • 52 Comments
Joined 2 months ago
Cake day: April 4th, 2025


  • It is very interesting to see how Rust and Guix are bringing some convergence between programming worlds that have so far been rather separate universes. For example, Rust makes it easy to write modern system libraries which previously would have been written in C, the Linux kernel is slowly adopting Rust, and Guix makes it easy to use such libraries from strongly but dynamically typed languages like Guile, Racket, or Python.

    For the general programming community, the promise is that Guix kinda solves the packaging and dependency resolution problem for multi-language projects. And it is making good strides - Guix contains over 50,000 packages now, not counting the nonguix channels which add e.g. non-free firmware. (Just for convenience, here is how to install the Guix package manager on Arch).
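
    As a taste of what that looks like in practice, here is a minimal sketch (assuming Guix is already installed; the package names python and python-requests are examples that exist in the default channel at the time of writing) that runs a small program inside a throwaway environment provided by guix shell:

    ```python
    # launch_in_guix.py - run a small program inside an ephemeral Guix environment.
    # Assumes the `guix` command is on PATH; the package names are examples, not a fixed list.
    import subprocess

    packages = ["python", "python-requests"]    # any mix of languages works the same way
    command = ["python3", "-c", "import requests; print(requests.__version__)"]

    # `guix shell PKG... -- CMD ARGS...` builds (or reuses) an environment containing
    # the requested packages and runs CMD inside it, without touching the host profile.
    subprocess.run(["guix", "shell", *packages, "--", *command], check=True)
    ```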




  • Oh, and there is also bup, which might be what you are looking for:

    https://bup.github.io/

    • it stores files in version-controlled copies which can be synced. Perhaps good for backing up photos and such, up to a few GB.
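
    If it helps, here is a minimal sketch of one bup backup cycle (bup must be installed; the path is just an example):

    ```python
    # bup_backup.py - one bup backup cycle; adjust the example path to your own data.
    import pathlib
    import subprocess

    source = pathlib.Path.home() / "Photos"   # example directory to back up

    subprocess.run(["bup", "init"], check=True)                  # create ~/.bup if it does not exist yet
    subprocess.run(["bup", "index", str(source)], check=True)    # record which files changed
    subprocess.run(["bup", "save", "-n", "photos", str(source)], check=True)  # store a named snapshot
    # Earlier snapshots remain addressable; see `bup restore` for getting files back.
    ```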

    Two more interesting solutions:

    1. NixOS and Guix SD let you define a system entirely from a single configuration file, so it is easy to re-create when needed.
    2. The Btrfs and ZFS file systems let you take snapshots in an instant, which can very efficiently store earlier versions of files. I used that when working with Yocto/BitBake, which compiles an entire embedded system from source - it can handle much larger data volumes than git or bup, and it is the right tool for handling versions of binary data.

    And one more: the rsync tool can store hard-linked copies of directory trees (a minimal sketch follows below).
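
    To make that concrete, here is a minimal sketch of the hard-link approach, assuming rsync is installed and using illustrative placeholder paths - each run creates a dated snapshot directory in which unchanged files are hard links into the previous snapshot, so they take up (almost) no extra space:

    ```python
    # snapshot_backup.py - dated, hard-linked snapshots via rsync --link-dest.
    # Paths are illustrative placeholders; adjust SOURCE and BACKUP_ROOT to your setup.
    import datetime
    import pathlib
    import subprocess

    SOURCE = pathlib.Path.home() / "Photos"            # what to back up (example)
    BACKUP_ROOT = pathlib.Path("/mnt/backup/photos")   # where snapshots live (example)

    BACKUP_ROOT.mkdir(parents=True, exist_ok=True)
    target = BACKUP_ROOT / datetime.date.today().isoformat()

    # Find the most recent previous snapshot, if any, to hard-link against.
    previous = sorted(p for p in BACKUP_ROOT.iterdir() if p.is_dir() and p != target)
    cmd = ["rsync", "-a", "--delete"]
    if previous:
        cmd.append(f"--link-dest={previous[-1]}")
    cmd += [f"{SOURCE}/", str(target)]

    # Unchanged files become hard links to the previous snapshot; only changes use space.
    subprocess.run(cmd, check=True)
    ```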

    The key question is, however: what do you want?

    • being able to recover earlier versions is essential when working with source code
    • being able to merge such versions in text files is necessary when working on code cooperatively with others - and only source control systems can do this well
    • In 99.9% of the other cases, you just want to be able to re-create a single ground-truth version of all your data after a disaster, and keep that backup copy as current as possible.

    These are not the same requirements; in particular, the volume of data will differ.

    Also, while you might want or need to go patch by patch through a conflicting source code tree with 10,000 different lines, I guess that absolutely nobody is willing to, or has time to, go through a tree with 10,000 conflicting photographs and match them up.

    So the question back is: What is your specific use case and what exactly do you want to achieve?










  • Well, my main reason to use Zim Wiki and Gollum is that all the information stays on my computers - no sync service is needed; I sync via git + ssh to a Raspberry Pi that runs in my home. And this is a critical requirement for me, since as a result of many experiences, my trust that commercial companies which collect data will respect data privacy has reached zero.

    The differences between Zim and Gollum are gradual: Zim is tailored as a desktop wiki, so each page is already in editing mode, which is slightly quicker, while Gollum is more like a classical server-based wiki, normally accessed through the browser (but, by default, without user authentication). The difference is a bit blurry since both just modify a git repository, and Gollum can be run on localhost, so it is good for capturing changes on a laptop while on the road and syncing them later. A further difference is that Zim is a bit better for the “quick but not (yet) organized” style of work, while Gollum is better for a designed and maintained structure.

    Both can capture media files and support different kinds of markup, while always storing plain text. Gollum also handles things like PDFs well, displaying them in the browser, and supports syntax highlighting for many programming languages, which makes it nice for programming projects - it is perfect for writing outlines and documentation of software, and I often work by writing documentation first.







  • Ah still rolling out the old “stochastic parrot” nonsense I see.

    It is a bunch of stochastic parrots. It just happens frequently that the words they are parroting were originally written by a bunch of intelligent people who were knowledgeable in their fields.

    Note this doesn’t make the parrots intelligent - any more than a book written by Einstein to explain special relativity has any intelligence of its own. Einstein was intelligent, and his words transport his intelligent ideas, but the book conveying them to other people (that is, the printed pages with a cardboard cover) is as dumb as a stone. You would not ask a piece of cardboard to solve a math problem, would you?


  • Responding to another comment in opensource@lemmy.ml:

    Writing code is itself a process of scientific exploration; you think about what will happen, and then you test it, from different angles, to confirm or falsify your assumptions.

    What you confuse here is doing something that can benefit from applying logical thinking with doing science. For example, arithmetic is part of math, and math is a science. But summing numbers is not necessarily doing science. And if you roll, say, octal dice to see whether the result happens to match an addition task, that is certainly not doing science - and no, the dice still can’t think logically and certainly don’t do math, even if the result sometimes happens to be correct (a toy simulation below makes this concrete).
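
    Just to put a number on that last point, here is a toy simulation (my own illustration, not part of the original argument): a random roll in the right range will “solve” an addition task a few percent of the time, and nobody would call that doing math.

    ```python
    # dice_vs_arithmetic.py - how often does a random roll "solve" an addition task?
    import random

    random.seed(0)
    trials = 100_000
    hits = 0
    for _ in range(trials):
        a, b = random.randint(1, 8), random.randint(1, 8)   # the task: compute a + b
        guess = random.randint(2, 16)                        # a blind "answer" in the valid range
        hits += (guess == a + b)

    print(f"correct by pure chance: {hits / trials:.1%}")    # about 1 in 15, i.e. ~6-7%
    ```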

    For the dynamic vs static typing debate, see the article by Dan Luu:

    https://danluu.com/empirical-pl/

    But this is not the central point of the above blog post. The central point is that, because LLMs by their very nature produce statistically plausible output, self-experimenting with them subjects one to very strong psychological biases (the Barnum effect). Therefore it is, first, not even possible to assess their usefulness for programming by self-experimentation(!), and second, it is even harmful, because these effects lead to self-reinforcing and harmful beliefs.

    And the quibbling about what “thinking” means just shows that the pro-AI arguments have degraded into a debate about belief - the argument has become “but it seems to be thinking to me”, even though it is neither technically possible nor observed in reality that LLMs apply logical rules, derive logical facts, explain their output by reasoning, are aware of what they ‘know’ and don’t ‘know’, or optimize decisions for multiple complex and sometimes contradictory objectives (which is absolutely critical to any sane software architecture).

    What would be needed here are objective, controlled experiments on whether developers equipped with LLMs can produce working and maintainable code any faster than ones not using them.
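
    To make concrete what such an experiment could look like, here is a minimal sketch of the analysis, with invented placeholder numbers purely to illustrate the design: randomly assign developers to an LLM group and a control group, give both the same task, measure time until the solution passes a hidden acceptance test, then compare.

    ```python
    # llm_experiment_analysis.py - analyzing a (hypothetical) controlled experiment.
    # All numbers below are invented placeholders; a real study would collect them.
    from statistics import mean
    from scipy.stats import ttest_ind

    # Hours until each participant's solution passed the acceptance tests.
    with_llm = [6.5, 8.0, 5.5, 9.0, 7.5, 6.0, 8.5, 7.0]
    without_llm = [7.0, 7.5, 6.5, 8.0, 9.5, 6.0, 7.5, 8.5]

    t_stat, p_value = ttest_ind(with_llm, without_llm, equal_var=False)  # Welch's t-test
    print(f"mean with LLM:    {mean(with_llm):.1f} h")
    print(f"mean without LLM: {mean(without_llm):.1f} h")
    print(f"p-value: {p_value:.2f}   (no difference can be claimed unless this is small)")
    ```

    (Maintainability would need its own measurement, e.g. how quickly a second group can extend the resulting code.)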

    And the very likely result is that the code which they produce using LLMs is never better than the code they write themselves.



  • Are you saying that it is not possible to use scientific methods to systematically and objectively compare programming tools and methods?

    Of course it is possible, in the same way that it can be investigated which methods are most effective in teaching reading, or whether brushing teeth helps to prevent caries.

    And such comparisons have been done, for example for statically vs. dynamically typed languages. The result so far, though, is that there is no conclusive advantage either way.


  • What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.

    Some people firmly believe LLMs are helpful. But programming is a logical task and LLMs can’t think - only generate statistically plausible patterns.

    The author of the article explains that this creates the same psychological hazards as astrology or tarot cards - psychological traps that have been exploited by psychics for centuries, and even very intelligent people can fall prey to them.

    Finally, what should cause alarm is that - on top of LLMs not being able to think, while people behave as if they could - there is no objective, scientifically sound examination of whether AI models can create working software any faster. Given the multi-billion dollar investments, and that there has been more than enough time to carry out controlled experiments, this should raise loud alarm bells.