Hi, I’m Cleo! (he/they) I talk mostly about games and politics. My DMs are always open to chat! :)

  • 0 Posts
  • 41 Comments
Joined 9 months ago
Cake day: October 25th, 2023

  • I think we’ve just stumbled on an issue where the rubber meets the road as far as our philosophies about privacy and consent. I view consent as important mostly in areas that pertain to bodily autonomy, right? So we give people rights over the use of our likeness for profit, promotion, or distribution. What we’re really giving people is a mental permission slip to use the idea of the body, or the body itself, for specific purposes.

    However, I don’t think these things really pertain to private matters, because the consent issue only applies when there are potential effects on the other person. Like if I say that imagining a celebrity sexually does no damage because you don’t know them, I think most people would agree. And if what we care about is harm, there is no potential for harm here.

    With surveillance matters, the consent does matter because we view breaching privacy as potential harm. The reason it doesn’t apply to AI nudes is that privacy is not being breached. The photos aren’t real. So it’s just a fantasy of a breach of privacy.

    So for instance, if you do know the person and involve them sexually without their consent, that’s blatantly wrong. But if you merely imagine them, that doesn’t involve them at all. Is it wrong to create material versions of your sexual imaginings of someone? I’d argue it’s only wrong if there is potential for harm, and since the tech is already here, I actually view that potential for harm as decreasing in a way. The same is true nonsexually. Is it wrong to deepfake friends into viral videos and post them on Twitter? Can be. Depends. But doing it in private? I don’t see an issue.

    The problem I see is the public stuff: people sharing it. And it’s already too late to stop most of the private stuff. Instead we should focus on stopping AI porn from being shared and posted, and impose harsher punishments on ANYONE who does so. The impact of fake nudes and real nudes is very similar, so just take them similarly seriously.


  • In every chat I find about this, I see people railing against AI tools like this but I have yet to hear an argument that makes much sense to me about it. I don’t care much either way but I want a grounded position.

    I care about harms to people and in general, people should be free to do what they want until it begins harming someone. And then we get to have a nuanced conversation about it.

    I’ve come up with a hypothetical. Let’s say that you write naughty stuff about someone in your diary. The diary is kept in a secure place and in private. Then, a burglar breaks in and steals your diary and mails that page to whomever you wrote it about. Are you, the writer, in the wrong?

    My argument would be no. You are expressing a desire in private and only through the malice of someone else was the harm done. And no, being “creepy” isn’t an argument either. The consent thing I can maybe see but again do you have a right not to be fantasized about? Not to be written about in private?

    I’m interested in people’s thoughts, because it bugs me not to have a good answer to this argument.


  • I just don’t see why you’d make the creation of this stuff illegal. Right now you could use basic Photoshop to put people’s faces onto dirty pictures. It hurts zero people and takes a similarly low amount of effort. As long as you keep it to yourself, society should not care.

    Making it illegal also seems kind of dumb when you can just hold someone civilly liable for posting nude photos of you, real or not. I don’t see an issue with any of it if we treat these photos spreading as though they were real and let people collect damages.


  • More like:

    “Hey OpenAI, can you give me a brief description of life in ancient Egypt?”

    “Certainly, I’d be happy to provide an explanation about Egyptian life! But first, I’d like to thank today’s sponsor: RAID: Shadow Legends™.

    RAID: Shadow Legends™ is an immersive online experience with everything you’d expect from a brand new RPG title. It’s got an amazing storyline, awesome 3D graphics, giant boss fights, PVP battles, and hundreds of never before seen champions to collect and customize.

    I never expected to get this level of performance out of a mobile game. Look how crazy the level of detail is on these champions!

    RAID: Shadow Legends™ is getting big real fast, so you should definitely get in early. Starting now will give you a huge head start. There’s also an upcoming Special Launch Tournament with crazy prizes! And not to mention, this game is absolutely free!

    So go ahead and check out the video description to find out more about RAID: Shadow Legends™. There, you will find a link to the store page and a special code to unlock all sorts of goodies. Using the special code, you can get 50,000 Silver immediately, and a FREE Epic Level Champion as part of the new players program, courtesy of course of the RAID: Shadow Legends™ devs.”


  • Well, Vanced was a lot different: they were actually redistributing code from YouTube. They were asking to be sued, and they got off really easy.

    Whereas here, no code is being used afaik. They don’t even include the console’s decryption keys. So the only things the emulator can do are decrypt game files once you provide the keys, then run an emulated graphics pipeline and logic process for said game.

    Now, I can see an argument that Yuzu is specifically built to emulate the Switch, which is a current product, and that makes this sketchy. But it’s still an emulator, and breaking the law is not required to use it: you can make your own ROM rips, dump your own keys, and use them with the emulator, which gives it a legal purpose as a 3rd-party application.

    This is just Nintendo trying to scare them, I’d bet. There’s not a zero chance that Yuzu could lose, though.


  • When I say misinformation and fakery, I mean that this information won’t be the normal type. It’ll be closer to fabrication or lies, where the truth is not even partially represented. Same with fakery: it’s the difference between propaganda editing and straight-up CGI or Photoshop. Propaganda editing is common: skew the narrative to the point of misinforming. The other type is not currently common, because making a believable lie, versus a parody of the truth, takes more time. But AI can and will change that.

    For a while now, Facebook has been a place of delusion where a different reality exists for people. This will double or triple that disconnect, and it will likely drive the whole platform past post-truth into its own delusional truth. I hope I’m wrong, but I doubt it.


  • I’ve had family members ask me about AI since I’m younger, and I tell them every time that they aren’t ready for it at all. Especially in this year’s elections, AI will be weaponized immediately. In a few months, the machines will turn on in countries around the world to generate fake images and mountains of fake articles, videos, and sound bites. Basically, the average Facebook user will be flooded with misinformation and fakery to the point of overload.

    AI content is survival of the believable, not of the real. If you can believe it, even just a little, you will reinforce an AI model at the edge of your disbelief, and it is designed to perfect delusion and extremism.

    This year will end in tragedy. I guarantee it.