• 53 Posts
  • 2.29K Comments
Joined 1 year ago
Cake day: October 4th, 2023

  • NFS doesn’t do snapshotting, which is what I assumed that you meant and I’d guess ShortN0te also assumed.

    If you’re talking about qcow2 snapshots, that happens at the qcow2 level. NFS doesn’t have any idea that qemu is doing a snapshot operation.
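
    (For reference, qcow2-internal snapshots live entirely inside the image file and are managed by qemu/qemu-img – e.g. something along these lines, with the image name made up:)

        qemu-img snapshot -c pre-upgrade guest.qcow2    # create an internal snapshot
        qemu-img snapshot -l guest.qcow2                # list the snapshots in the image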

    On a related note: if you are invoking a VM using a filesystem image stored on an NFS mount, I would be careful, unless you are absolutely certain that this is safe for the version of NFS and the specific caching options for both NFS and qemu that you are using.

    I’ve tried to take a quick look. There’s a large stack involved, and I’m only looking at it quickly.

    To avoid data loss via power loss, filesystems – and thus the filesystem images backing VMs using filesystems – require write ordering to be maintained. That is, they need to have the ability to do a write and have it go to actual, nonvolatile storage prior to any subsequent writes.

    At a hard disk protocol level, like for SCSI, there are BARRIER operations. These don’t force something to disk immediately, but they do guarantee that all writes prior to the BARRIER are on nonvolatile storage prior to writes subsequent to it.

    I don’t believe that Linux has any userspace way for a process to request a write barrier. There is no fwritebarrier() call. This means that the only way to impose write ordering is to call fsync()/sync() or similar operations. These force data to nonvolatile storage, and do not return until it is there. The downside is that this is slow. Programs that frequently do such synchronizations cannot issue writes very quickly, and are very sensitive to the latency of their nonvolatile storage.
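
    To make that concrete, here’s a rough sketch in C of the only portable way I know of to get “A is on disk before B” – force A out with fsync() before issuing B. The filename and offsets are made up, and error handling is omitted:

        #define _XOPEN_SOURCE 700
        #include <fcntl.h>
        #include <unistd.h>

        int main(void) {
            int fd = open("disk.img", O_WRONLY);   /* hypothetical image file */

            pwrite(fd, "A", 1, 0);       /* write A */
            fsync(fd);                   /* returns only once A is on nonvolatile storage */

            pwrite(fd, "B", 1, 4096);    /* B is only issued after A is durable */
            fsync(fd);

            close(fd);
            return 0;
        }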

    From the qemu(1) man page:

             By default, the cache.writeback=on mode is used. It will report data writes as completed as soon as the data is
             present in the host page cache. This is safe as long as your guest OS makes sure to correctly flush disk caches
             where needed. If your guest OS does not handle volatile disk write caches correctly and your host crashes or
             loses power, then the guest may experience data corruption.

             For such guests, you should consider using cache.writeback=off. This means that the host page cache will be
             used to read and write data, but write notification will be sent to the guest only after QEMU has made sure to
             flush each write to the disk. Be aware that this has a major impact on performance.
    

    I’m fairly sure that this is a rather larger red flag than it might appear, if one simply assumes that Linux must be doing things “correctly”.

    Linux doesn’t guarantee that a write to position A goes to disk prior to a write to position B. That means that if your machine crashes or loses power, with the default settings, even for drive images stored on a filesystem on a local host, you can potentially corrupt a filesystem image.

    https://docs.kernel.org/block/blk-mq.html

    Note

    Neither the block layer nor the device protocols guarantee the order of completion of requests. This must be handled by higher layers, like the filesystem.

    POSIX does not guarantee that write() operations to different locations in a file are ordered.

    https://stackoverflow.com/questions/7463925/guarantees-of-order-of-the-operations-on-file

    So by default – which is what you might be doing, wittingly or unwittingly – if you’re using a disk image on a filesystem, qemu simply doesn’t care about write ordering to nonvolatile storage. It does writes; it does not care about the order in which they hit the disk. It is not calling fsync() or using analogous functionality (like O_DIRECT).
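
    If I’m reading the man page excerpt above correctly, opting into the safer behavior is a per-drive setting – I believe it looks something along these lines on the qemu command line (image name made up; check the docs for your qemu version):

        qemu-system-x86_64 -m 2048 -drive file=guest.qcow2,format=qcow2,cache.writeback=off

    As I understand it, the cache=writethrough shorthand sets the same writeback flag off as well.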

    NFS entering the picture complicates this further.

    https://www.man7.org/linux/man-pages/man5/nfs.5.html

    The sync mount option

    The NFS client treats the sync mount option differently than some other file systems (refer to mount(8) for a description of the generic sync and async mount options). If neither sync nor async is specified (or if the async option is specified), the NFS client delays sending application writes to the server until any of these events occur:

             Memory pressure forces reclamation of system memory
             resources.
    
             An application flushes file data explicitly with sync(2),
             msync(2), or fsync(3).
    
             An application closes a file with close(2).
    
             The file is locked/unlocked via fcntl(2).
    
      In other words, under normal circumstances, data written by an
      application may not immediately appear on the server that hosts
      the file.
    
      If the sync option is specified on a mount point, any system call
      that writes data to files on that mount point causes that data to
      be flushed to the server before the system call returns control to
      user space.  This provides greater data cache coherence among
      clients, but at a significant performance cost.
    
      Applications can use the O_SYNC open flag to force application
      writes to individual files to go to the server immediately without
      the use of the sync mount option.
    

    So, strictly-speaking, this doesn’t make any guarantees about what NFS does. It says that it’s fine for the NFS client to send nothing to the server at all on write(). If you’re using the default NFS mount options, the only time a write() to a file is guaranteed to make it to the server is when one of the events listed above occurs. If it’s not going to the server, it definitely cannot be flushed to nonvolatile storage.
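
    (For completeness, the O_SYNC route that the man page mentions would look something like this in C – path made up for illustration, error handling omitted:)

        #define _XOPEN_SOURCE 700
        #include <fcntl.h>
        #include <unistd.h>

        int main(void) {
            /* hypothetical file on an NFS mount; with O_SYNC, each write() returns
               only after the data has been pushed to the server rather than sitting
               in the client's page cache */
            int fd = open("/mnt/nfs/guest.img", O_WRONLY | O_SYNC);

            pwrite(fd, "A", 1, 0);
            pwrite(fd, "B", 1, 4096);

            close(fd);
            return 0;
        }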

    Now, I don’t know this for a fact – I’d have to go digging around in the NFS client you’re using. But it would be compatible with the guarantees listed, and I’d guess that probably, the NFS client isn’t keeping a log of all the write()s and then replaying them in order. Even if it did, for that ordering to meaningfully affect what’s on nonvolatile storage, the NFS server would have to fsync() the file after each write was flushed to it. Instead, it’s probably just keeping a list of dirty data in the file, and then flushing it to the NFS server at close().

    That is, say you have a program that opens a file filled with all ‘0’ characters, and does:

    1. write ‘1’ to position 1.
    2. write ‘1’ to position 5000.
    3. write ‘2’ to position 1.
    4. write ‘2’ to position 5000.

    At close() time, the NFS client probably doesn’t flush “1” to position 1, then “1” to position 5000, then “2” to position 1, then “2” to position 5000. It’s probably just flushing “2” to position 1, and then “2” to position 5000, because when you close the file, that’s what’s in the list of dirty data in the file.

    The thing is that unless the NFS client retains a log of all those write operations, there’s no way to send the writes to the server in a way that avoids putting the file into a corrupt state if power is lost. It doesn’t matter whether it writes the “2” at position 1 or the “2” at position 5000 first. In either case, it’s creating a situation where, for a moment, one of those two positions has a “0”, and the other has a “2”. If there’s a failure at that point – the server loses power, the network connection is severed – that’s the state the file winds up in. That’s a state that is inconsistent and should never have arisen. And if the file is a filesystem image, then the filesystem might be corrupt.
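
    In code form, that scenario is something like the following – path and offsets purely for illustration; with the default mount options, nothing above is guaranteed to reach the server before the close():

        #define _XOPEN_SOURCE 700
        #include <fcntl.h>
        #include <unistd.h>

        int main(void) {
            int fd = open("/mnt/nfs/testfile", O_WRONLY);  /* hypothetical NFS path */

            pwrite(fd, "1", 1, 1);     /* 1. write '1' to position 1    */
            pwrite(fd, "1", 1, 5000);  /* 2. write '1' to position 5000 */
            pwrite(fd, "2", 1, 1);     /* 3. write '2' to position 1    */
            pwrite(fd, "2", 1, 5000);  /* 4. write '2' to position 5000 */

            /* with default mount options, this close() is likely the first point
               at which the (final) dirty data is pushed to the server */
            close(fd);
            return 0;
        }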

    So I’d guess that both of those two points in the stack – the NFS client writing data to the server, and the server’s block device scheduler – permit inconsistent state if there’s no fsync()/sync()/etc. being issued, which appears to be the default behavior for qemu. And running on NFS probably creates a larger window for a failure to induce corruption.

    It’s possible that using qemu’s iSCSI backend avoids this issue, assuming that the iSCSI target avoids reordering. That’d avoid qemu going through the NFS layer.

    I’m not going to dig further into this at the moment. I might be incorrect. But I felt that I should at least mention it, since filesystem images on NFS sounded a bit worrying.



  • Honestly, I’m a little surprised that a smartphone user wouldn’t have familiarity with the concept of files, setting aside the whole familiarity-with-a-PC thing. Like, I’ve always had a file manager on my Android smartphone. I mean, ok…most software packages don’t require having one browse the file structure on the thing. And many are isolated, don’t have permission to touch shared files. Probably a good thing to sandbox apps, helps reduce the impact of malware.

    But…I mean, even sandboxed apps can provide file access to the application-private directory on Android. I guess they just mostly don’t, if the idea is that they should only be looking at files in application-private storage on-device, or if they’re just the front end to a cloud service.

    Hmm. I mean, I have GNU/Linux software running in Termux, do stuff like scp from there. A file manager. Open local video files in mpv or in PDF viewers and such. I’ve a Markdown editor that permits browsing the filesystem. Ditto for an org-mode editor. I’ve a music player that can browse the filesystem. I’ve got a directory hierarchy that I’ve created, though simpler and I don’t touch it as much as on the PC.

    But, I suppose that maybe most apps just don’t expose it in their UI. I could see a typical Android user just never using any of the above software. Not having a local PDF viewer or video player seems odd, but I guess someone could just rely wholly on streaming services for video and always open PDFs off the network. I’m not sure that the official YouTube app lets one actually save video files for offline viewing, come to think of it.

    I remember being absolutely shocked when trying to view a locally-stored HTML file once that Android-based web browsers apparently didn’t permit opening local HTML files, that one had to set up a local webserver (though that may have something to do with the fact that I believe that by default, with Web browser security models, a webpage loaded via the file:// URI scheme has general access to your local filesystem but one talking to a webserver on localhost does not…maybe that was the rationale).






  • PNG is really designed for images that are either flat color or use an ordered dither. I mean, we do use it for photographs because it’s everywhere and lossless, but it was never really intended to compress photographs well.

    There are formats that do aim for that, like lossless JPEG and one of the WebP variants.

    TIFF also has some utility in that it’s got some sort of hierarchical variant that’s useful for efficiently dealing with extremely-large images, where software that deals with most other formats really falls over.

    But none of those are as universally-available.

    Also, I suppose that if you have a PNG image, you know that – well, absent something like color reduction – it was losslessly-compressed, whereas all of the above have lossless and lossy variants.



  • I would guess that at least part of the issue there is also that the data isn’t all that useful unless it’s also exported to some format that other software can read. That format may not capture everything that the native format stores.

    In another comment in this thread, I was reading the WP article on Adobe Creative Cloud, which commented on the fact that the format is proprietary. I can set up some “data storage service”, and maybe Adobe lets users export their Creative Cloud data there. Maybe users even have local storage.

    But…then, what do you do with the data? Suppose I just get a copy of the native format. If nothing other than the software on Adobe’s servers can use it, that doesn’t help me at all. Maybe you can export the data to an open format like a PNG or something, but you probably don’t retain everything. Like, I can maybe get my final image out, but I don’t get all the project workflow stuff associated with the work I’ve done. Macros, brushes, stuff broken up into layers, undo history…

    I mean, you have to have the ability to use the software to maintain full use of the data, and Adobe’s not going to give you that.


  • You saw in Ukraine what can happen if you rely on large plants.

    So, I don’t disagree that, especially for some environments, bombing resistance is a legit concern.

    However, I’m going to go out on a limb and guess that if we find ourselves in a situation where China is bombing US power generation infrastructure, that probably means that World War III – not some kind of limited-scale fight, but a real all-in conflict – is on, and I think that the factors that determine what happens there probably aren’t mostly going to be “who has more power plants”.

    World War II was a multi-year affair, but a lot of that was constrained by distance and the ability to project power. From the US’s standpoint, the Axis had extremely-limited ability to affect the US. The US started with a very small army and no weapons that could, in short order, reach across the world. That meant that, certainly from a US standpoint, there was not going to be a quick resolution one way or another. There, industrial capacity was really important.

    Today’s environment is different.

    I’ve not read up on what material’s out there, but I’d guess that in a World War III, one of two things probably happens:

    • The war goes nuclear, in which case nuclear (weapons, not power generation) capabilities in large part determine the outcome.

    • The war remains conventional. One or both sides have the ability to pretty rapidly destroy the other side’s air and/or missile defenses and subsequently destroy critical infrastructure to the degree that the other side cannot sustain the fight. My bet is on the US being in a stronger position here, but regardless, I don’t think that what happens is each side keeps churning out hardware for multiple years and slugging the other with that hardware, being able to make use of their power generation capacity. Electrical generation capacity is a particularly important part of that, sure, but it’s not the whole enchilada. Water production and distribution, electrical distribution, bridges, industrial infrastructure.

    That doesn’t mean that power generation capacity doesn’t matter vis-a-vis military capacity. Like, let’s say that China has a really great way to convert electrical generation capacity into military capacity, right? Like, they have some fully automated mega-factory that churns out long range AI-powered fighter jets, has all the raw resources they need, just keeps pouring electricity into it. And China decides – in peacetime – that it wants to build an enormous fighter jet force like that. Say, I don’t know, a hundred thousand planes or something. Then the US, which in our hypothetical scenario doesn’t have such a fully-automated-mega-factory, has a hard decision: either attack China or wait and find itself in a situation where China could defeat it in conventional terms. The ability to expand military capacity does matter.

    But at the point that bombing is happening and the ability of power generation to passively-resist that bombing is a factor, you’re already in a war, and then I think that a whole host of other factors start to dramatically change the environment.


  • China is rapidly surpassing the U.S. in nuclear energy, building more reactors at a faster pace and developing advanced technologies like small modular reactors and high-temperature gas-cooled units.

    Okay, yes, very broadly-speaking, I agree that US nuclear power generation capability relative to China is something to keep an eye on. As well as ability to construct nuclear power generation capacity. There might be a way that China could leverage that in some scenario. However.

    At least some of that is tied to population; China has over four times our population. One would expect energy usage per-capita to tend to converge. And for that to happen, China pretty much has to significantly outbuild the US in generation capacity.

    If we in the US constrain ourselves to outpace China in expanding generation capacity, then we’re constraining ourselves to have multiple times the per-capita energy generation capacity.

    Now, okay, yes, there is usage that is decoupled from population size. AI stuff is in the news, and at least in theory – if maybe not with today’s systems, but somewhere along the road to AGI – I can imagine productivity there becoming decoupled from population size. If you have more electrical generation capacity, you can make effective use of that electricity, convert it to productive capacity.

    But a lot of it is going to be tied to population. Electrical heating and cooling. EV use. You’d have to have a staggering amount of datacenter or other non-tied-to-population power use to dominate that.

    These statistics aren’t from the same year, but they have a residential-industrial-commercial breakdown, and then a breakdown for each of those sectors.

    https://www.eia.gov/energyexplained/electricity/use-of-electricity.php

    Commercial use, residential use, and industrial use are, on that chart, each about a third of US electrical power consumption. Of the commercial category, computers and office equipment are 11%. So you’re talking maybe 3% of total US power consumption going to the most critical thing that I can think of that represents productive capacity and is potentially decoupled from population. And that’s all computer and office equipment use, not just stuff like AI. A lot of that is going to be tied to per capita usage, too.

    About half of commercial use of electricity is space cooling. Almost everything else is either cooling, lighting, or ventilation. Those are gonna be tied to population when it comes to productive capacity.

    If you look at residential stuff, about half of it is cooling, heating, or lighting, and my bet is that nothing in the residential category is going to massively increase productive capacity. Up until a point, on a per-capita basis, air conditioning increases productivity. Maybe it could provide an advantage in terms of quality of life, ability to attract immigration. But I don’t think that if, tomorrow, China had twice our per-capita residential electrical power generation capacity, that it’d provide some enormous advantage. And it definitely seems like it’d all be per-capita stuff.

    In industry, you have some big electricity consumers. Machinery, process heating and cooling, electrochemical processes. And with sufficient automation, the productive capacity of those can be decoupled from population size. Given enough electricity, you could run a vast array of, say, electric arc furnaces. But I think that “American industrial capacity vis-a-vis Chinese industrial capacity” is a whole different story, that it’s probably better-examined at a finer-grained level, and I think that there are plenty of eyeballs already on that. Hypothetically, you could constrain residential or other use, pour power capacity dedicated to it into industrial capacity in a national emergency, but I can’t think of any immediately-obvious area of industry where exploiting that is going to buy that much. Unless we expect some massively-important form of new heavy industry to emerge that is dependent upon massive use of electricity – like, throw enough electricity into a machine and you can get unobtanium – I’m probably not going to lose sleep over that.

    If your concern is that there might be ways in which China can leverage its population and so per-capita statistics matter, then sure, I get that, but again, I think that that’s probably better considered in terms of metrics of human capital rather than in terms of just energy generation capability. And I think that the constraining factors there, if you’re talking ability to increase existing capacity in percentage terms, are probably (a) fertility rate, (b) immigration rate.

    I am pretty sure that if we wanted to get power capacity built and tied into the grid, it could be done on a shorter timescale than we could get people to have children and then raise those children and provide them with a necessary skillset, so I don’t think that existing electrical generation capacity or ability to increase it in the short run is the bounding factor. Maybe we could do immigration at a higher rate than we could expand generation capacity, making electrical generation capacity the bounding factor, though there are – looking at popular irritation that drove voters to support Trump – some political limitations.

    The last time we were seriously looking at going balls-to-the-wall against another country was World War II versus principally Germany, and war plans included, after mobilizing large portions of the American population, hiring huge chunks of population out of Latin America to fill in the now-absent farm labor need in the US to keep US productive capacity ramping up. As World War II played out, Germany ultimately didn’t conquer the UK and then initiated a fight with the Soviet Union, so a lot of the levers never needed to be pulled. We ultimately only used it in a considerably-scaled-down form.

    https://en.wikipedia.org/wiki/Bracero_Program

    The Bracero Program (from the Spanish term bracero [bɾaˈse.ɾo], meaning “manual laborer” or “one who works using his arms”) was a U.S. Government-sponsored program that imported Mexican farm and railroad workers into the United States between the years 1942 and 1964.

    The program, which was designed to fill agriculture shortages during World War II, offered employment contracts to 5 million braceros in 24 U.S. states. It was the largest guest worker program in U.S. history.[1]

    But I think that that maybe provides insight into what the US would be willing to do in another situation where we wind up in a serious power struggle with another country. If we had to pull tens of millions of people from abroad into the US in short order in a balls-to-the-wall situation, I’m pretty sure that we would.

    So, okay. Maybe, if you think that you can make use of extremely-high-rate immigration capacity, you might want to have a certain amount of electrical generation capacity available or ability to ramp it up very quickly.

    However.

    China could do the same to some degree. But it’s also more-difficult for China due to her larger size relative to the pools abroad from which she might draw – if she wants to scale production proportionally to her population – and I have my doubts about what China would be able to offer in terms of environment, if a contest were predicated on our respective abilities to draw labor from abroad.

    So, in summary:

    I’m not sure that I’d be concerned about “what China could do in the short run in terms of dramatically increasing her capabilities in a way that threatens the US if China had large amounts of electrical generation capacity in absolute terms, and then started a massive immigration program”. I don’t expect that that sort of contest would play to China’s strengths.

    In the sense that China could make use of more electricity to produce more industrial output, sure. China has significantly more steel production capacity today. That’s not really new, and I would expect that it’s been taken into account, that one doesn’t expect steel production capacity to be some sort of bounding factor that’s of special concern. Going back to World War II again, steel production over an extended period of time mattered there…but I’m skeptical that we’d find ourselves in some kind of sustained conflict with China where steel production capacity mattered. It’s too easy to knock out steel production infrastructure or the like in 2025. Maybe someone could identify some kind of concern there, but I don’t think that one would express it in terms of electricity. I don’t think that there’s some sort of way in which a country can translate steel into productivity in a peacetime environment to the degree that available steel is the limiting factor, either, where we’d say “Oh, no, China pulled ahead in steel capacity and steel is now mainly determining a country’s economic or military strength, and we cannot catch up.”

    Having electrical capacity might matter if it’s the bounding factor for something like AI, which potentially has productive capacity decoupled from the size of the labor pool. I think that keeping an eye on the critical resources governing AI capacity is going to be something to do moving forward in the years and decades to come. But as things exist today, usage there is a very small portion of electricity consumption. I don’t think that we’re looking at the limits imposed by electrical generation. Maybe if technology advances and we do enough buildout of capacity, that would change. But I think that we’re also some ways away from electricity being a serious constraint there.


    I think the first filesystems had a flat layout (no directories), but also had different file types for a library, an executable, a plaintext file. Then there were filesystems where directories could only list files, not other directories.

    The original Macintosh filesystem was flat, and according to WP, used for about two years around the mid-1980s. I don’t think I’ve ever used it, personally.

    https://en.wikipedia.org/wiki/Macintosh_File_System

    MFS is called a flat file system because it does not support a hierarchy of directories.

    They switched to a new, hierarchical filesystem, HFS, pretty soon.

    I thought that the Apple II’s filesystem – late 1970s to early 1980s – was also flat, from memory. It looks like it was under the earlier Apple DOS, and that hierarchical support arrived with ProDOS:

    https://en.wikipedia.org/wiki/Apple_ProDOS

    ProDOS adds a standard method of accessing ROM-based drivers on expansion cards for disk devices, expands the maximum volume size from about 400 kilobytes to 32 megabytes, introduces support for hierarchical subdirectories (a vital feature for organizing a hard disk’s storage space), and supports RAM disks on machines with 128 KB or more of memory.

    Looks like FAT, used by MS-DOS, early 1980s, also started out flat, then added hierarchical support:

    https://en.wikipedia.org/wiki/File_Allocation_Table

    The BIOS Parameter Block (BPB) was introduced with PC DOS 2.0 as well, and this version also added read-only, archive, volume label, and directory attribute bits for hierarchical sub-directories.[24]




  • The average person does not deal with files anymore. Many people use online applications for everything from multimedia to documents, which happily abstract away the experience of managing file formats.

    I remember someone saying that and me having a hard time believing it, but I’ve seen several people say that.

    https://www.theverge.com/22684730/students-file-folder-directory-structure-education-gen-z

    Catherine Garland, an astrophysicist, started seeing the problem in 2017. She was teaching an engineering course, and her students were using simulation software to model turbines for jet engines. She’d laid out the assignment clearly, but student after student was calling her over for help. They were all getting the same error message: The program couldn’t find their files.

    Garland thought it would be an easy fix. She asked each student where they’d saved their project. Could they be on the desktop? Perhaps in the shared drive? But over and over, she was met with confusion. “What are you talking about?” multiple students inquired. Not only did they not know where their files were saved — they didn’t understand the question.

    Gradually, Garland came to the same realization that many of her fellow educators have reached in the past four years: the concept of file folders and directories, essential to previous generations’ understanding of computers, is gibberish to many modern students.

    https://old.reddit.com/r/AskAcademia/comments/1dkeiwz/is_genz_really_this_bad_with_computers/

    The OS interfaces have followed this trend, by developing OS that are more similar to a smartphone design (Windows 8 was the first great example of this). And everything became more user-friendly (my 65+ yo parents barely know how to turn on a computer, but now, use apps for the bank and send emails from their phone). The combined result is that the younger generations have never learned the basic of how a computer works (file structure, file installation…) and are not very comfortable with the PC setup (how they prefer to keep their notes on the phone makes me confused).

    So the “kids” do not need to know these things for their daily enjoyment life (play videogames, watch videos, messaging… all stuff that required some basic computer skills even just 10 years ago, but now can be done much more easily, I still remember having to install some bulky pc game with 3 discs) and nobody is teaching them because the people in charge thought “well the kids know this computer stuff better than us” so no more courses in elementary school on how to install ms word.

    For a while I was convinced my students were screwing with me but no, many of them actually do not know the keyboard short cuts for copy and paste. If it’s not tablet/phone centric, they’re probably not familiar with it.

    Also, most have used GSuite through school and were restricted from adding anything to their Chrome Books. They’ve used integrated sites, not applications that need downloading. They’re also adept at Web 3.0, creation stuff, more than professional type programs.

    As much as boomers don’t know how to use PCs because they were too new for them, GenZs and later are not particularly computer savvy because computers are too old for them.

    I can understand some arguments that there’s always room to advance UI paradigms, but I have to say that I don’t think that cloud-based smartphone UIs are the endgame. If one is going to consume content, okay, fine. Like, as a TV replacement or something, sure. But there’s a huge range of software – including most of what I’d use for “serious” tasks – out there that doesn’t fall into that class, and really doesn’t follow that model. Statistics software? Software development? CAD? I guess Microsoft 365 – which I have not used – probably has some kind of cloud-based spreadsheet stuff. I haven’t used Adobe Creative Cloud, but I assume that it must have some kind of functionality analogous to Photoshop.

    kagis

    Looks like off-line Photoshop is dead these days, and Adobe shifted to a pure SaaS model:

    https://en.wikipedia.org/wiki/Adobe_Creative_Cloud#Criticism

    Shifting to a software as a service model, Adobe announced more frequent feature updates to its products and the eschewing of their traditional release cycles.[26] Customers must pay a monthly subscription fee. Consequently, if subscribers cancel or stop paying, they will lose access to the software as well as the ability to open work saved in proprietary file formats.[27]

    shakes head

    Man.

    And for that matter, I’d think that a lot of countries might have concerns about dependence on a cloud service. I mean, I would if we were talking about China. I’m not even talking about data security or anything – what happens if Country A sanctions Country B and all of Country B’s users have their data abruptly inaccessible?

    I get that Internet connectivity is more-widespread now. But, while I’m handicapped without an Internet connection, because I don’t have access to useful online resources, I can still basically do all of the tasks I want to do locally. Having my software unavailable because the backend is unreachable seems really problematic.


  • I don’t run multiple monitors – I’m in the “the monitor should be what’s in front of the eyes, and if it’s not showing useful stuff, then the software needs to be changed to deal with that” camp. So this isn’t based on personal experience.

    However, I strongly suspect, from what reading I’ve done and the degree of involvement that the compositor has in VRR support, that it’s not just going to be Wayland, but also the compositor you use that’s a factor.

    kagis

    This is three years old, but at least at that point, it sounds like Sway and Plasma supported it, and others did not.

    https://old.reddit.com/r/linux_gaming/comments/q40xff/are_kde_wayland_and_sway_the_only_options_for/

    EDIT: Well, actually. Hmm. On second thought, I guess I do have a projector, and a head-mounted display, neither of which I think do VRR. Not sure if I’ve actually used them with my monitor when it had VRR enabled, though.

    My real interest in VRR is for matching video framerates exactly. mpv requires a certain amount of user configuration, depending upon how you have it set up. Like, if you want top-quality non-VRR playback, you may want some settings to enable frame interpolation, IIRC, and with VRR, you don’t. So if you want to play back fullscreen videos with mpv on both monitors – one VRR and one not – you might need to change config for them, at least.
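
    For what it’s worth, the non-VRR settings I have in mind are along these lines in mpv.conf – from memory, so check the mpv manual for your version:

        # non-VRR display: resample video timing to the display's fixed refresh rate
        video-sync=display-resample
        interpolation=yes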


  • Do you use a macro keyboard for shortcuts?

    No. I think that macro functionality is useful, but I don’t do it via the physical keyboard.

    My general take is that chording (pressing some combination of keys simultaneously) that lets one keep one’s hands on the home row is faster than pressing one dedicated key. So, like, instead of having separate capital and lowercase letter keys, it’s preferable to have “shift” and just one key.

    I think that the main argument for dedicated keys that one lifts one’s hands for would be for important but relatively-infrequently-used functions that people don’t use enough to remember chorded combinations for – you can just throw the label on the button as a quick reference. Like, we don’t usually have something like Windows-Alt-7 mapped to power on a laptop, but instead have a dedicated power button.

    Maybe there’s a use for keyboard-level-programmed macros with chording, as some keyboards can do…but to me, the use case seems pretty niche. If you’re using multiple software environments (e.g. BIOS, Windows, Linux terminal, whatever) and want the same functionality in all of them (e.g. a way to type your name), that might make some sense. Or maybe if you’re permitted to take a keyboard with you, but are required to use a computer that you can’t configure at the software level, that’d provide configurability at a level that you have control over.

    In general, though, I’m happier with configuring stuff like that on the computer’s software; I don’t hit those two use cases, myself.