Formerly /u/Zalack on Reddit.

Also Zalack@kbin.social

  • 0 Posts
  • 31 Comments
Joined 1 year ago
Cake day: August 3rd, 2023

  • Formal licensing could cover things that are language-agnostic: how to properly use tests to guard against regressions, how to handle error states safely.

    How do you design programs for critical systems that CANNOT fail, like pacemakers? How do you guard against crashes? What sort of redundancy do you need in your software?

    How do you best design error messages to tell an operator how to fix the issue? Especially in critical systems like a plane, how do you guard against that operator doing the wrong thing? I’m thinking of the Dreamliner incidents where the pilots’ natural inclination was to grab the yoke and pull up, which unknowingly fought the autopilot and caused the plane to stall. My understanding is that the error message that triggered during those crashes was also extremely opaque and added further confusion in a life-and-death situation.

    When do you have an ethical responsibility not to ship code? Just for physical safety? What about Dark Patterns? How do you recognize them and do you have an ethical responsibility to refuse implementation? Should your accreditation as an engineer rely on that refusal, giving you systemic external support when you do so?

    None of that is impacted by what tech stack you are using. They all come down to generic logical and ethical reasoning.

    Lastly, under certain circumstances, civil engineers can be held personally liable for negligence when their bridge fails and people die. If we are going to call ourselves “engineers”, we should bear the same responsibility. Obviously not every software developer needs to be held to such high standards, but that’s why “software engineer” should mean something.
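    As a toy illustration of the “tests guard against regressions” point: once a bug is fixed, a test pins the fixed behavior down so it can’t silently return. A minimal Python sketch (the parse_dose function and its bug history are invented for illustration):

```python
import unittest

# Hypothetical example: a dose parser for a medical device.
# Suppose parse_dose once crashed on inputs like "1.5mg" (no space
# before the unit). The regression tests below lock in the fix and
# make the error state explicit rather than a silent failure.
def parse_dose(text: str) -> float:
    """Return the dose in milligrams, or raise ValueError."""
    cleaned = text.strip().lower().replace(" ", "")
    if not cleaned.endswith("mg"):
        raise ValueError(f"unsupported unit in {text!r}")
    return float(cleaned[:-2])

class TestParseDoseRegressions(unittest.TestCase):
    def test_no_space_before_unit(self):
        # Guards against the (hypothetical) original crash.
        self.assertEqual(parse_dose("1.5mg"), 1.5)

    def test_bad_unit_fails_loudly(self):
        # Error states are handled explicitly, never swallowed.
        with self.assertRaises(ValueError):
            parse_dose("1.5ml")

# Run the suite programmatically so the guard executes every build.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestParseDoseRegressions)
)
```

    Nothing about this discipline depends on the tech stack; the same pattern applies in any language with a test runner.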


  • My experience has often been the opposite. Programmers will do a lot to avoid discussing the ethical implications of their work being used maliciously: what responsibility we bear for how our work gets used, and how much effort we should be obligated to make toward defending against malicious use.

    It’s why I kind of wish that “engineer” was a regulated title in America like it is in other countries, and getting certified as a programming engineer required some amount of training in programming ethics and standards.



  • I actually think the radio signal is an apt comparison. Let’s say someone was trying to argue that the signal itself was a fundamental force.

    Well, then you could make the argument that if you pour a drink into it, the water shorts the electronics and the signal stops playing as the electromagnetic force stops acting on the pieces of the radio. This would lead you to believe, through the same logic as in my post, that the signal itself is not a fundamental force, but is somehow created through the electromagnetic force interacting with the components, which… it is! The observer might not understand how the signal worked, but they could rule it out as being its own discrete thing.

    In the same way, we might not know exactly how our brain produces consciousness, but because the components we can see must be involved, it isn’t a discrete phenomenon. Fundamental forces can’t have parts or components; they must be completely discrete.

    Your example is a really really good one.



  • At a sketch:

    • We know that when the brain chemistry is disrupted, our consciousness is disrupted

    • You can test this yourself. Drink some alcohol and your consciousness will be disrupted. Similarly I am on Gabapentin for nerve pain, which works by inhibiting the electrical signals my nerves use to fire, and in turn makes me groggy.

    • While we don’t know exactly how consciousness works, we have a VERY good understanding of chemistry, which is to say, the strong and weak nuclear forces and electromagnetism (fundamental forces). Literally millions of repeatable experiments have validated that these forces exist and that we understand the way they behave.

    • Drugs like Gabapentin and Alcohol interact with our brain using these forces.

    • If the interaction of these forces being disrupted disrupts our consciousness, it’s reasonable to conclude that our consciousness is built on top of, or is an emergent property of, these forces’ interactions.

    • If our consciousness is made up of these forces, then it cannot be a fundamental force as, by definition, fundamental forces must be the basic building blocks of physics and not derived from other forces.

    There are no real assumptions here. It’s all a line of logical reasoning based on observations you can do yourself.



  • I think the problem is that there is less often something to be said if you agree. Every now and then you might have something to add that fleshes out the idea or adds additional context, but generally if I totally agree with a comment I just upvote it.

    On the other hand, when you disagree with something your response will, by logical necessity, be different from the parent comment.

    So if you want to prioritize “adding something novel” there’s a logical bias towards comments that disagree since only some percentage of agreement will tick that box.

    Otherwise you end up with a bunch of comments that literally or figuratively add up to “this”.




  • While that’s true, we have to allow for the fact that our own intelligence, at some point, is an encoded model of the world around us. Probably not through something as rigid as precise statistics, but our consciousness is somehow an emergent phenomenon of the chemical reactions in our brains that on their own have no real understanding of the world either.

    I do have to wonder if at some point, consciousness will spontaneously emerge as we make these models bigger and more complex and – maybe more importantly – start layering specialized models on top of each other that handle specific tasks and then hand the result back to another model, creating feedback loops. I’m imagining a neural network that is trained on something extremely abstract: figuring out, from the raw input data, which specialist model would be best suited to process that data, then, based on the result, which model would be best suited to refine it. Something we train to basically be an executive function with a bunch of sub-models available to it.

    Could something like that become conscious without realizing it’s “communicating” with us? The program executing the LLM might reflexively process data without any concept that it’s text, yet still be emergently complex enough, when reflecting on its own processes, to reach self-awareness. It wouldn’t realize the data represents a link to other conscious beings.

    As a metaphor, you could teach a very smart dog how to respond to certain, basic arithmetic problems. They would get stuff wrong the moment you prompted them to do something out of their training, and they wouldn’t understand they were doing math even when they got it “right”, but they would still be sentient, if not sapient, despite that.

    It’s the opposite side of the philosophical zombie. A philosophical zombie behaves exactly as a human would, but is a surface-level automaton with no inner life.

    But I propose that we also consider the inverse philosophical zombie: an entity that behaves like an automaton, but has an inner life that has not recognized its input data as evidence of an external world outside its own bounds. Something that might not even recognize it’s executing a program, the same way we aren’t consciously aware of the chemical reactions our brains are executing to make us think.

    I don’t believe current LLMs are anywhere near complex enough to give rise to that sort of thing, but they are also still pretty early in their development and haven’t started to be heavily layered and interconnected the way I think they’ll end up.

    At the very least it makes for a fun Sci-fi premise.
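    The “executive function with sub-models” idea above can be sketched very roughly in code. Everything here is invented for illustration, and a real router would be a learned model rather than an if-statement; the point is just the dispatch-and-feedback shape:

```python
from typing import Callable

# A specialist takes some data and returns a (hopefully) refined version.
Specialist = Callable[[str], str]

def math_specialist(data: str) -> str:
    # Toy specialist: evaluates simple "a+b" expressions.
    a, b = data.split("+")
    return str(int(a) + int(b))

def echo_specialist(data: str) -> str:
    # Toy specialist: normalizes text to uppercase.
    return data.upper()

def route(data: str) -> Specialist:
    # Stand-in for a learned routing model: picks a specialist
    # from crude features of the raw input.
    return math_specialist if "+" in data else echo_specialist

def executive(data: str, max_hops: int = 3) -> str:
    # The feedback loop: keep routing the current result to a
    # specialist until it stops changing or we hit the hop limit.
    for _ in range(max_hops):
        result = route(data)(data)
        if result == data:
            break
        data = result
    return data
```

    The executive never “knows” what the data means; it only learns which sub-model to hand it to next, which is exactly the kind of reflexive processing the comment above is gesturing at.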