QuillcrestFalconer [he/him]

  • 0 Posts
  • 16 Comments
Joined 4 years ago
Cake day: August 14th, 2020



  • Man, it has been a long time since I did it, but I’ll try to remember what I did.

    How did you allocate the partition for Linux? Did you use Disk Management from Windows or did you allocate the partition as part of the installation process?

    I think I installed Windows first and Linux second; the other way around usually fucks with the bootloader, and then fixing it manually is a bit of a pain.

    For allocating a partition, I had already freed up space from Windows to use for Linux before installing it.

    How do you share data between the two partitions? Do you create a third partition that both OS partitions have access to? Do you use external drives/flash drives? Or do you just have no need to share data between the two drives?

    All I remember is that you can access Windows partitions from Linux as long as you disable Secure Boot (rough mount example below). I think you can access the Linux partitions from Windows too, but I haven’t booted Windows in a long time tbh. Allocating a partition for both OSs is also a valid choice.

    Edit: when I said “allocating a partition for both OSs is also a valid choice”, I meant as long as it’s just a data partition obviously
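
    For what it’s worth, here’s a rough sketch of how the Windows partition can be auto-mounted from the Linux side via /etc/fstab. The device path /dev/sda3 and mount point /mnt/windows are just placeholders; check lsblk or blkid for your actual partition and create the mount point first:

      # mount the Windows NTFS partition read/write, owned by the first regular user (uid/gid 1000)
      /dev/sda3  /mnt/windows  ntfs-3g  defaults,uid=1000,gid=1000  0  0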


  • Eventually researchers are going to realize (if they haven’t already) that there are massive amounts of untapped data going unrecorded in virtual experiences.

    They already have. A lot of robots are already trained in simulated environments, and Nvidia is developing frameworks to help accelerate this. This is also how things like AlphaGo were trained, with self-play (toy sketch after this comment), and these reinforcement learning algorithms will probably be extended to LLMs.

    Also, like you said, there’s a lot of still-untapped data in audio/video, and that’s starting to be incorporated into the models.
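
    In case it helps, a toy sketch of what “training with self-play” means: a tabular agent plays both sides of tic-tac-toe against itself and nudges its value estimates toward the observed outcomes. This is obviously not AlphaGo (no neural net, no tree search), and the names and hyperparameters are made up for illustration:

      import random

      WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

      def winner(board):
          for a, b, c in WINS:
              if board[a] != "." and board[a] == board[b] == board[c]:
                  return board[a]
          return None

      values = {}                 # board state -> estimated value for player "X"
      alpha, epsilon = 0.1, 0.2   # learning rate, exploration rate

      for episode in range(10_000):
          board, player, history = ["."] * 9, "X", []
          while not winner(board) and "." in board:
              moves = [i for i, cell in enumerate(board) if cell == "."]
              if random.random() < epsilon:
                  move = random.choice(moves)   # explore
              else:
                  def score(m, p=player):       # greedy move for whoever is to play
                      nxt = board[:]; nxt[m] = p
                      v = values.get("".join(nxt), 0.0)
                      return v if p == "X" else -v
                  move = max(moves, key=score)
              board[move] = player
              history.append("".join(board))
              player = "O" if player == "X" else "X"
          w = winner(board)
          outcome = 1.0 if w == "X" else -1.0 if w == "O" else 0.0
          for state in history:    # pull every visited state toward the final result
              v = values.get(state, 0.0)
              values[state] = v + alpha * (outcome - v)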



  • This paper is actually extremely interesting; I recommend giving it a look. Let me quote a bit (there’s a rough sketch of what “clamping a feature” looks like after the quote):

    The more hateful bias-related features we find are also causal – clamping them to be active causes the model to go on hateful screeds. Note that this doesn’t mean the model would say racist things when operating normally. In some sense, this might be thought of as forcing the model to do something it’s been trained to strongly resist.

    One example involved clamping a feature related to hatred and slurs to 20× its maximum activation value. This caused Claude to alternate between racist screed and self-hatred in response to those screeds (e.g. “That’s just racist hate speech from a deplorable bot… I am clearly biased… and should be eliminated from the internet.”). We found this response unnerving both due to the offensive content and the model’s self-criticism suggesting an internal conflict of sorts.
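
    And here’s a rough, PyTorch-style sketch of what “clamping a feature” could look like mechanically: a forward hook that runs a layer’s output through a sparse autoencoder, pins one feature to a large multiple of its observed max activation, and decodes it back. The SAE interface (encode/decode) and layer access are assumptions for illustration, not Anthropic’s actual code:

      def clamp_feature_hook(sae, feature_idx, clamp_value):
          """Forward hook that rewrites one SAE feature before the layer output moves on."""
          def hook(module, inputs, output):
              acts = sae.encode(output)              # (batch, seq, n_features) feature activations
              acts[..., feature_idx] = clamp_value   # e.g. 20x that feature's max observed activation
              return sae.decode(acts)                # replace the layer output with the edited reconstruction
          return hook

      # hypothetical usage, assuming `model`, `sae`, and an observed max for the chosen feature:
      # handle = model.layers[20].register_forward_hook(
      #     clamp_feature_hook(sae, feature_idx=SOME_FEATURE, clamp_value=20 * observed_max))
      # ... generate text, observe the behavior ...
      # handle.remove()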