Call me an optimist, but I think that if an android were actually going to destroy life as we know it, nations would do everything in their power to avert the disaster.
Please don’t. Children’s media is already flooded with AI-generated fluff. You won’t make any money on it.
Can you be in a Steam family group with a dead person?
It’s worth mentioning that in this instance the guy did send porn to a minor. This isn’t exactly a cut-and-dried “guy used Stable Diffusion wrong” case. He was distributing it and grooming a kid.
The major concern to me is that there isn’t really any guidance from the FBI on what you can and can’t do, which may lead to some big issues.
For example, websites like NovelAI make a business out of providing pornographic, anime-style image generation. The models they use are deliberately tuned to produce abstract, “artistic” styles, but they can generate semi-realistic images.
Now, let’s say a criminal group uses NovelAI to produce CSAM of real people via the inpainting tools. Let’s say the FBI casts a wide net and begins surveillance of NovelAI’s userbase.
Is every person who goes on there and types, “Loli” or “Anya from spy x family, realistic, NSFW” (that’s an underaged character) going to get a letter in the mail from the FBI? I feel like it’s within the realm of possibility. What about “teen girls gone wild, NSFW?” Or “young man, no facial body hair, naked, NSFW?”
This is NOT a good scenario, imo. The systems used to produce harmful images are the same systems used to produce benign or borderline images. It’s a dangerous mix, and it throws the whole enterprise into question.
It kinda reminded me of those old PS3 commercials that David Lynch directed. Kinda liked it, tbh
Good!
I mean, be careful. These LLMs can be honeypots for data. Like, if you’re using one for cover letters or work, you’re sending tons of personal info to random websites.
I would recommend sticking to actual, reputable vendors for LLMs, or running your own. I have a GTX 1070 and can run some pretty decent models locally these days using KoboldAI.
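For context, local backends like KoboldAI expose a simple HTTP generation endpoint, so prompts never leave your machine. Here’s a minimal sketch of calling one from Python; it assumes a KoboldAI-compatible server is already running locally, and the port, endpoint path, and sampling parameters shown are common defaults rather than guarantees for every setup:

```python
import json
import urllib.request

# Assumption: a KoboldAI-compatible server is listening on localhost.
# The default port can vary by backend, so adjust as needed.
API_URL = "http://localhost:5000/api/v1/generate"

def build_payload(prompt: str, max_length: int = 120) -> dict:
    """Build the JSON body for a KoboldAI-style /api/v1/generate call."""
    return {
        "prompt": prompt,
        "max_length": max_length,   # number of tokens to generate
        "temperature": 0.7,         # sampling randomness
    }

def generate(prompt: str) -> str:
    # Everything stays on localhost -- no personal data goes to third parties.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # KoboldAI-style responses look like {"results": [{"text": "..."}]}
    return body["results"][0]["text"]
```

The point isn’t the specific backend; it’s that the request never crosses your network boundary, which is the whole privacy argument for running local.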
Bing is probably the only way to use GPT-4 without paying for it, and Microsoft probably won’t steal your bank account info.
There is almost no chance that it is actually based on GPT-4. If you want a free, open-source LLM with 32k context and generous limits, I recommend using huggingface.co/chat/
The nous-hermes model (you can select different models) is uncensored, and performs really well for an open-source model. Plus, they have data controls so you can turn off data gathering per model. Huggingface is a reputable vendor, and doesn’t claim to be something it isn’t.
This feels… scammy? Not to be accusatory, but GPT-4 is expensive to run. Nobody can realistically offer it for free.
What LLM is actually providing the response here? Either someone is footing the bill for an API and acting as a proxy, a situation which raises many red flags, or the model you’re talking to is something far cheaper to run, like a Mistral model.
Even the second case is sketchy. 😅
🤣 just visualizing the United Nations Assembly taking turns curb stomping some poor android.