• 0 Posts
  • 6 Comments
Joined 1 year ago
Cake day: July 5th, 2023



  • It’s hard to guess what the internal motivation is for these particular people.

    Right now it’s hard to know who is disseminating AI-generated material. Some people are explicit when they post it, but others aren’t. The AI companies are easily identified, and there’s at least the perception that regulating them can solve the problem of copyright infringement at the source. I doubt that’s true. More and more actors are able to train AI models, and some of them aren’t even under US jurisdiction.

    I predict that we’ll eventually have people vying to get their work used as training data. Think about what that means. If you write something and an AI is trained on it, the AI considers it “true”. Going forward, when people send prompts to that model, it will return responses based on what it considers “true”. Clever people can and will use that to influence public opinion. Consider how effective it’s been to manipulate public thought with existing information technologies. Now imagine large segments of the population relying on AIs as trusted advisors for their daily lives, and how effective it would be to influence the training of those AIs.


  • The big tech companies keep trying to sell AR as a gateway to their private alternate realities. That misses the whole point of AR. It’s supposed to augment reality, not replace it.

    Everyone who has played video games knows what AR is supposed to look like. Create an API to let developers build widgets and allow users to rearrange them on a HUD.

    Obvious apps that would get a ton of downloads:
    floatynames - floats people’s names over their heads
    targettingreticle - puts a customizable icon in the center of your screen so you know it’s centered
    graffiti - virtual tagging and you control who sees it
    breadcrumbs - replaces the UI of your map software to just show you a trail to your destination
    catears - add an image overlay that makes it look like your friends have cat ears
    healthbars - they’re a really familiar visual element that you can tie to any metric (which may or may not be health related)
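    The widget-and-HUD idea above could be sketched very roughly like this (every name and type here is hypothetical, just to show the shape of such an API):

```python
# Hypothetical sketch of an AR HUD widget API: developers register widgets,
# users rearrange them on screen. Positions are normalized (0.0..1.0).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Widget:
    name: str
    x: float               # normalized horizontal position, 0.0..1.0
    y: float               # normalized vertical position, 0.0..1.0
    render: Callable[[], str]  # returns what to draw this frame

class HUD:
    def __init__(self) -> None:
        self.widgets: list[Widget] = []

    def add(self, widget: Widget) -> None:
        self.widgets.append(widget)

    def move(self, name: str, x: float, y: float) -> None:
        # The user drags a widget to a new spot on the HUD.
        for w in self.widgets:
            if w.name == name:
                w.x, w.y = x, y

    def draw_frame(self) -> list[tuple[float, float, str]]:
        # Each frame, ask every widget what to draw and where.
        return [(w.x, w.y, w.render()) for w in self.widgets]

hud = HUD()
hud.add(Widget("targettingreticle", 0.5, 0.5, lambda: "+"))
hud.add(Widget("healthbars", 0.1, 0.9, lambda: "[#####----] 50%"))
hud.move("healthbars", 0.9, 0.9)  # user prefers it in the corner
```

    The point isn’t the implementation; it’s that the platform owns placement and compositing while third parties only supply small render callbacks.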

    I imagine being able to meet my friends at a cafe that I’ve never been to. It’s easy to find because I just follow a trail of dots down the street. As I get closer I can see a giant icon of a coffee cup, so I know I’m on the right block. Not everyone is there yet, but I can see that the last of our friends is on the bus 2 blocks away. I’ve only met one of them once, a few months ago, but I can see their name and pronouns. We sit around discussing latte art. When I get up for another cup, I see from their health bar that one of my friends is out of coffee, so I get them a refill. On the way out I scrawl a positive review and leave it floating on the sidewalk.



  • Is it practically feasible to regulate the training? Is it even necessary? Perhaps it would be better to regulate the output instead.

    It will be hard to know whether any particular GET request is ultimately being used to train an AI or a human. By contrast, it’s currently easy to check whether a particular output is plagiarized (e.g. https://plagiarismdetector.net/), and that’s also much easier to enforce. We don’t need to care if or how any particular model plagiarized work; we can just check whether plagiarized work was produced.

    That check could be implemented directly in the software, so the model never outputs plagiarized material in the first place. The legal framework around it is also clear and fairly well established. Instead of creating new regulations around training, we can apply the existing regulations to the human who tries to disseminate copyrighted work.
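    As a toy illustration of filtering the output rather than regulating the training, here is a naive verbatim n-gram check (purely hypothetical; real detection would need far fuzzier matching than exact word overlap):

```python
# Toy output-side filter: flag a generation if it shares any long verbatim
# word n-gram with a protected corpus. Only illustrates checking outputs;
# it says nothing about how the model was trained.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """All lowercase word n-grams of the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_plagiarized(output: str, corpus: list[str], n: int = 8) -> bool:
    """True if the output shares an n-word verbatim run with any document."""
    out_grams = ngrams(output, n)
    return any(out_grams & ngrams(doc, n) for doc in corpus)

corpus = [
    "it was the best of times it was the worst of times it was the age of wisdom"
]
```

    A generator could run every response through a check like this before showing it, which is the “implemented directly in the software” idea.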

    That’s also consistent with how we enforce copyright for humans. There’s no law against looking at other people’s work and memorizing entire sections. It’s also generally legal to reproduce other people’s work (e.g. for backups). It only potentially becomes illegal when someone distributes it, and it’s only plagiarism if they claim it as their own.