• 0 Posts
  • 47 Comments
Joined 1 year ago
Cake day: June 18th, 2023


  • Mojo’s starting point is absurdly complex. Seems very obviously doomed to me.

    Julia is a very clever design, but it still never felt that pleasant to use. I think it was held back by using llvm as a JIT, and by the single-minded focus on data science. Programming languages need to be more opportunistic than that to succeed, imo.



  • Out of the ones you listed I’d suggest Julia or Clojure. They are simple and have interactive modes you can use to experiment easily.

    Experienced programmers often undersell the value of interactive prompts because they don’t need them as much. They already have a detailed mental model of how most languages behave.

    Another thing: although Julia and Clojure are simple, they are also quite obscure and have very experimental designs. Python might be a better choice. From a beginner’s perspective it’s very similar to Julia, but it’s vastly more popular and lots of people learn it as their first language.

    Based on the languages you found, I’m guessing you were looking for something simple and elegant. I think Python fits this description too.




  • They are not stupid at all. Their interests are in conflict with the interests of tech workers and they are winning effortlessly, over and over again.

    The big tech companies are all owned by the same people. If these layoffs cause google to lose market share to another company, it’s fine because they own that company too.

    What matters is coordinating regular layoffs across the whole industry to reduce labour costs. It’s the same principle as a strike: if the whole industry does layoffs, workers gradually have to accept lower salaries. In other words, the employers are unionised and the employees are not.

    This process will probably continue for the next 20 years, until tech workers have low salaries and no job security. It has happened to countless industries before, and I doubt we are special.

    I’m sure the next big industries will be technology-focused, but that’s not the same as “tech”. They won’t involve people being paid $200k to write websites in Ruby.


  • “As we’ve said, we’re responsibly investing in our company’s biggest priorities and the significant opportunities ahead,” said Google spokesperson Alex García-Kummert. “To best position us for these opportunities, throughout the second half of 2023 and into 2024, a number of our teams made changes to become more efficient and work better, remove layers, and align their resources to their biggest product priorities. Through this, we’re simplifying our structures to give employees more opportunity to work on our most innovative and important advances and our biggest company priorities, while reducing bureaucracy and layers”

    There was this incredible management consultant in France in the 18th century. Name eludes me, but if he was still around Google could hire him and start finding some far more convincing efficiencies.

    The guy was especially good at aligning resources to remove layers


  • porgamrer@programming.dev to Programming@programming.dev...
    I believe Mercury is intended to be comparable to languages like Java, C# and Ocaml, in terms of the performance profile and generality. I don’t know what it’s like in practice though.

    I view it more as a fascinating proof of concept than a language I’d actually like to use. Really I just want new projects to steal ideas from it.


  • Datalog is sometimes used as an alternative to SQL. Prolog is used by researchers experimenting with rule systems (e.g. type systems, theorem provers, etc).

    Mercury has been used to write regular desktop software, with a couple of notable successes.

    One way to think about Mercury is that it’s like Haskell, except it’s so declarative that the functions can run backwards, generating arguments from return values! Obviously that comes with some pretty big caveats, but in many cases it works great and is extremely useful.


  • Prolog, Mercury, Datalog. Very intrigued by Verse now that I know it has some logic programming features.

    Mercury is, roughly, a fusion of Haskell and Prolog. Bizarre and fascinating.

    Prolog and Datalog are great but not aimed at general purpose programming.

    Really I just want to see more people trying to adapt ideas from logic programming for general purpose use. Logic programming feels truly magic at times, in a way that other paradigms do not (to me at least).





  • 5 years ago everything was moving to TypeScript. Now everything has moved. Developers are still catching up, but it will be one-way traffic from here.

    I’m guessing your manager thinks TypeScript is like CoffeeScript. It is not like CoffeeScript.

    Also, TypeScript is only the beginning. In the halls of the tech giants most devs view TypeScript as a sticking plaster until things can be moved to webassembly. It will be a long time until that makes any dent in JS, but it will also be one-way traffic when it does.


  • Lol okay. Here are some concrete examples I don’t have:

    Templates as basic generics

    • Templates still show bizarre error messages far too deep into instantiation, despite at least three major features which provided opportunities to fix the problem (static_assert, type_traits, and then concepts)

    Templates for metaprogramming

    • 33 years after the introduction of templates, there are still many common cases in which the only good way to abstract a pattern is by using a macro, and many cases that neither macros nor templates can solve
    • There is finally an accepted proposal to fix part of the problem, which will be introduced in C++26, and probably not usable in real code until 2030 at the earliest
    • In 2035, people will still be reluctantly using string macros and external code generation to solve basic problems in C++

    Safe union types

    • In C++17, std::variant was introduced to provide a safe inline union type
    • The main API for accessing it is inexplicably slow, unlike in every competing language
    • The fast alternative is an eyesore that can’t integrate with switch statements except via weird, unmaintainable hacks
    • Everyone keeps using custom struct/union/enum combos instead
    • CVEs everywhere

    Error handling

    • C++ uses exceptions as the primary error handling mechanism
    • Exceptions are so slow and so riddled with problems that some companies ban them, and almost all consider them bad practice for representing common failure paths (e.g. failing to parse something)
    • std::expected was eventually approved, similar to Rust’s Result type, but with no equivalent to the ‘?’ operator to make the code readable
    • Now there is a proposal to introduce “value type exceptions” instead, which is gathering momentum before std::expected has even stabilised, but will probably not be available for 10 years

    Subtype polymorphism deprecated

    • Now that virtual methods and inheritance are widely considered tools of last resort, they obstruct the introduction of better alternatives
    • Instead we have widespread use of specialised template structs and CRTP to replace instance inheritance with a kind of static inheritance designed for templates
    • It’s all a hack, and as a result it’s full of edge cases that don’t work very well, and poor tool support

    References

    • Good C++ programmers use references where possible, because pointers can be null
    • Good C++ programmers avoid unnecessary copies and allocations
    • Good C++ programmers avoid patterns that can permit unintended coercions with no error
    • Oh no, I accidentally assigned a reference to an auto variable in a template, and instead of giving a helpful type error it implicitly coerced a new copy of my vast memory buffer into existence
    • Okay fine I’ll pass pointers to avoid this, and just trust they won’t be null
    • Oh no, C++ has standardised the idea that raw pointers represent nullability even in all their newest standard library types (thanks again, std::variant)


  • I learned through three things:

    1. writing some basic functions in assembly code by hand for a course (not many)
    2. implementing a basic compiler back-end in llvm (any similar IR or assembly target would do)
    3. learning the principles other people were using to write fast code (in my case game engine developers)

    The first two things helped me understand how common code constructs are translated to assembly, so I can do a rough projection in my head when skimming a C function. Nowadays you can get quite far just by playing around on godbolt.

    The third thing helps surface the less visible aspects of CPUs. After learning how a few low-level optimisations work, all the principles and explanations start to repeat, and 90% of them apply to every modern architecture. You can set out with specific high-level questions, like:

    • why is iteration faster with an array than a linked list?
    • what does vectorisation mean?
    • what is a “struct of arrays” optimisation?
    • why does the ECS pattern make game engines fast?

    Very quickly you’ll find lots of insightful articles and comments explaining things like CPU caching, prefetching, branch prediction, pipelining, etc.

    I have no book recommendations for you. I’ve found all the best information is freely online in blogs and comment sections, and that the best way to direct my learning is to have a project (or get employed to do low-level stuff). Might be different for you though!