• 0 Posts
  • 25 Comments
Joined 1 year ago
Cake day: July 1st, 2023

  • It has to be okay for people to die, because ALL PATHS FORWARD INVOLVE PEOPLE DYING. Any choice you make involves some hidden choice about who gets to suffer and die and who doesn’t.

    But no, that’s not what I was saying. Also, are you aware that extinction also involves lots of deaths? Have you thought about what does and doesn’t count as “death” to you? What about responsibility for that death? How indirect does it have to be before you’re free from responsibility? Is it better to have fewer sentient beings living better lives, or more beings living worse lives? Does it matter how much worse? Is there a line where a life becomes a net positive in terms of its contribution to the overall “goodness” of the state of the universe? Once we can ensure a net positive life for people, should the goal be for as many people to exist as possible? Should new people only be brought into the world if we can guarantee them a net positive life?

    But hey, thanks for the very concrete example of how being in a decent local minimum is very hard to break out of.


  • First, no alternative is required for something to be unacceptable to continue. This is a very common line of reasoning that keeps us stuck in the local minimum. Leaving a local minimum necessarily requires some backsliding.

    Capitalism is unsustainable because every single aspect of it relies on the idea that resources can be owned.

    If you were born onto a planet where one single person owned literally everything, would you think that is acceptable? That it makes sense that the choices of people who are long dead and the agreements between them roll forward in time entitling certain people to certain things, despite a finite amount of those things being accessible to us? What if it was just two people, and one claimed to own all land? Would you say that clearly the resources of the planet should be divided up more fairly between those two people? If so, what about three people? Four? Five? Where do you stop and say “actually, people should be able to hoard far more resources than it is possible for anyone to have if things were fair, and we will use an arbitrary system that involves positive feedback loops for acquiring and locking up resources to determine who is allowed to do this and who isn’t”?

    Every single thing that is used in the creation of wealth is a shared resource. There is no such thing as a non-shared resource. There is no such thing as doing something “alone” when you’re working off the foundation built by the 90+ billion humans who came before you. Capitalism lets the actual costs of things get spread around to everyone on the planet: environmental harm, depletion of resources that can never be regained, actions that are a net negative but are still taken because they make money for a specific individual. If the TRUE COST of the actions taken in the pursuit of wealth were actually paid by the people making the wealth, it would be very clear how much the fantasy of letting people pursue personal wealth relies on distributing the true costs through time and space. It requires literally stealing from the future. And sometimes from the past: resources invested into the public good in the past can be exploited asymmetrically by people making money through the magic of capitalism. Your business causes more money in damage to public resources than it even makes? Who cares, you only pay 30% in taxes!

    There is no way forward long term that preserves these fantasies and doesn’t inevitably turn into extinction or a single individual owning everything. No one wants to give up this fantasy, and they’re willing to let humanity go extinct to prevent having to.



  • Because it’s objectively unsustainable? I don’t really get what it even means to be “pro-capitalist” at this point. We know, for a fact, that capitalism will lead to disaster if we keep doing what we’re doing. Do you disagree with that? Or do you not care?

    What is your general plan for what we should do when we can see that something we currently do and rely on will have to stop in the near future? Not that we will have to choose to stop it, but that it will stop because of something being depleted or no longer possible.

    If you imagine that we’re trying to find the best long-term system for humanity, that the possible solutions exist on a curve on an X/Y plane, and that we want to find the lowest point of the function, capitalism is very clearly a local minimum. It’s not the lowest point, but it feels like one to the dumbass apes who came up with it. So much so that we’re resistant to doing the work to find the actual minimum before this local one kills literally everyone :)
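To make the minimum-finding metaphor concrete, here’s a toy gradient-descent sketch. The function, starting points, and step size are all arbitrary illustrations, not a model of anything real:

```python
# Gradient descent on f(x) = x**4 - 3*x**2 + x, which has a shallow
# local minimum near x ~ 1.13 and a deeper global minimum near x ~ -1.30.
def grad(x):
    # derivative of f(x) = x**4 - 3*x**2 + x
    return 4 * x**3 - 6 * x + 1

def descend(x, lr=0.01, steps=5000):
    # plain gradient descent: repeatedly step downhill
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Where you end up depends entirely on where you start:
print(descend(2.0))   # settles in the local minimum (~1.13)
print(descend(-2.0))  # settles in the global minimum (~-1.30)
```

Starting on the “wrong” side of the hill, the descent settles into the shallower basin and stays there, even though a better minimum exists: escaping it would require temporarily going uphill.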





  • It would not HAVE to do that; it’s just much harder to get it to happen reliably through attention, but it’s not impossible. But offloading deterministic tasks like this to typical software that can deal with them better than an LLM is obviously a much better solution.

    But this solution isn’t “in the works”; it’s usable right now.

    Working without python:

    It left out the only word with an f, flourish. (just kidding, it left in unfathomable. Again… less reliable.)


  • AI alignment is a field that attempts to solve the problem of “how do you stop something with the ability to deceive, plan ahead, seek and maintain power, and parallelize itself from just doing that to everything”.

    https://aisafety.info/

    AI alignment is “the problem of building machines which faithfully try to do what we want them to do”. An AI is aligned if its actual goals (what it’s “trying to do”) are close enough to the goals intended by its programmers, its users, or humanity in general. Otherwise, it’s misaligned.

    The concept of alignment is important because many goals are easy to state in human language terms but difficult to specify in computer language terms. As a current example, a self-driving car might have the human-language goal of “travel from point A to point B without crashing”. “Crashing” makes sense to a human, but requires significant detail for a computer. “Touching an object” won’t work, because the ground and any potential passengers are objects. “Damaging the vehicle” won’t work, because there is a small amount of wear and tear caused by driving. All of these things must be carefully defined for the AI, and the closer those definitions come to the human understanding of “crash”, the better the AI is “aligned” to the goal that is “don’t crash”. And even if you successfully do all of that, the resulting AI may still be misaligned because no part of the human-language goal mentions roads or traffic laws.

    Pushing this analogy to the extreme case of an artificial general intelligence (AGI), asking a powerful unaligned AGI to e.g. “eradicate cancer” could result in the solution “kill all humans”. In the case of a self-driving car, if the first iteration of the car makes mistakes, we can correct it, whereas for an AGI, the first unaligned deployment might be an existential risk.
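The specification problem in that passage can be sketched in code. Assuming made-up state fields and predicate names, each naive machine-readable stand-in for “crash” misfires on perfectly normal driving:

```python
from dataclasses import dataclass, field

@dataclass
class CarState:
    touching_objects: set = field(default_factory=set)  # things in contact with the car
    wear: float = 0.0                                   # accumulated wear and tear

def crashed_v1(state: CarState) -> bool:
    # "Touching an object" -- fails: the car always touches the road.
    return bool(state.touching_objects)

def crashed_v2(state: CarState) -> bool:
    # "Any damage to the vehicle" -- fails: ordinary driving causes wear.
    return state.wear > 0

normal_driving = CarState(touching_objects={"road", "passenger"}, wear=0.01)

# Both proxies flag a perfectly normal state as a crash:
print(crashed_v1(normal_driving))  # True
print(crashed_v2(normal_driving))  # True
```

Each refinement fixes one counterexample and exposes another, which is the gap between the human-language goal and its formal specification.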



  • You’re at a moment in history where the only two real options are utopia or extinction. There are some worse things than extinction that people also worry about, but let’s call it all “extinction” for now. Super-intelligence is coming. It literally can’t be stopped at this point. The only question is whether it’s 2, 5, or 10 years.

    If we don’t solve alignment, you die. It is the default. AI alignment is the hardest problem humans have ever tried to solve. Global warming will cause suffering on that timescale, but not extinction. A well-aligned super-intelligence has actual potential to reverse global warming. A misaligned one will mean it doesn’t matter.

    So, if you care, you should be working in AI alignment. If you don’t have the skillset, find something else: https://80000hours.org/

    Every single dismissal of AI “doom” is based on wishful thinking and hand-waving.







  • Yes, it seems pretty untenable that the rare-Earth hypothesis is the explanation for the lack of evidence of any life outside of Earth. But even if it is true that we’re the only life in the observable universe, the universe is still much bigger, and in many physicists’ opinion, probably infinite.

    The fact that life seems to have evolved on Earth as soon as it was possible to is some evidence that abiogenesis is not the bottleneck. But the usefulness of this observation depends on the distribution of other things we don’t know. For example, if on planets where life evolves later, life never makes it to human-level intelligence before the planet becomes uninhabitable, then our early abiogenesis is survivorship bias, rather than something we should expect to be in the center of the distribution of when abiogenesis happens on a planet where it is possible.
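    The survivorship-bias point can be illustrated with a toy Monte Carlo model. All numbers here are arbitrary assumptions chosen for illustration, not astrophysical estimates:

```python
import random

# Toy model: abiogenesis happens at a uniformly random time on each
# habitable planet, but observers only arise if it happens early enough
# to leave time for intelligence to evolve before the planet dies.
random.seed(0)

PLANET_LIFETIME = 10.0      # habitable window, arbitrary units
TIME_TO_INTELLIGENCE = 8.0  # time needed after abiogenesis

all_times, observed_times = [], []
for _ in range(100_000):
    t = random.uniform(0, PLANET_LIFETIME)  # abiogenesis time
    all_times.append(t)
    if t + TIME_TO_INTELLIGENCE <= PLANET_LIFETIME:
        observed_times.append(t)  # only these planets produce observers

print(sum(all_times) / len(all_times))            # ~5.0: the typical planet
print(sum(observed_times) / len(observed_times))  # ~1.0: what observers see
```

Even though abiogenesis is uniformly distributed across each planet’s lifetime, every observer in this model looks back and sees it happening “early” on their own planet, exactly as the comment describes.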


  • What?.. no. That would be confirmed life on another planet. It would be the single biggest discovery in human history. You would have heard if that was the case. “Fossilized microbes” would be “life”. Nobody needs the life to still be alive for it to be a huge, huge deal.

    If you’re thinking the word “organic” means “microbe”, it doesn’t. I guess this is the consequence of all the harm done to public understanding by shitty headlines chasing clicks while staying “technically correct”, despite knowing they’d be misinterpreted as “life”.


  • Not 100% proof. That would require the universe to be infinite, which it still might not be if the true curvature falls within the tiny margin of error. It’s close enough to proof that it might as well be the case. The entire universe couldn’t be less than something like 130x the size of the observable universe, though… unless it has a nontrivial topology. There’s always a caveat.
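For a rough sense of where bounds like that come from: an upper limit on the curvature density parameter |Ωk| implies a minimum radius of curvature R = (c/H0)/√|Ωk|. A hedged sketch with assumed round numbers (not the exact analysis behind the 130x figure above):

```python
import math

# Illustrative values, not the exact figures behind the estimate above.
H0_KM_S_MPC = 67.7                        # Hubble constant (assumed)
C_KM_S = 299_792.458                      # speed of light, km/s
HUBBLE_DIST_MPC = C_KM_S / H0_KM_S_MPC    # Hubble distance, ~4428 Mpc

def radius_of_curvature_mpc(omega_k_abs: float) -> float:
    """Radius of curvature R = (c/H0) / sqrt(|Omega_k|)."""
    return HUBBLE_DIST_MPC / math.sqrt(omega_k_abs)

# The tighter the curvature bound, the larger a closed universe must be:
for omega_k in (0.01, 0.001):
    print(f"|Omega_k| <= {omega_k}: R >= {radius_of_curvature_mpc(omega_k):,.0f} Mpc")
```

Since measurement can only ever shrink the error bar on Ωk toward zero, not confirm it is exactly zero, this construction yields ever-larger lower bounds on the size of a closed universe but never proves infinity, which is the caveat in the comment.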