Because this article is presented (by its title and the little blurb at the top about the author) as being about the safety of AI. The author doesn't talk about what safety regulations exist. They don't talk about what safety apparatuses are being proposed or which ones have already been developed. There's no conclusion here.
When you read a newspaper, there is generally a section for opinion pieces and editorials. Several groups are pushing for clear and concise labeling of editorials, opinion pieces, and news pieces, specifically because there's so much misinformation going around.
But really: what is the point of posting an opinion piece to a community where we share tech news when its opinions aren't even valuable? What is there to discuss here? That shareholders and consumers should view AI safety legislation or safety protocols differently because they affect those two parties differently? We already knew that.
Because this article is presented (by its title and the little blurb at the top about the author) as being about the safety of AI.
Unless the title and blurb have changed, this is just wrong.
The title says nothing about safety: “How AI’s booms and busts are a distraction - However current companies do financially, the big AI safety challenges remain.”
Likewise the blurb says nothing about safety: “Kelsey Piper is a senior writer at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.”
What are you going on about? You're mad because you couldn't tell this was an Op/Ed?
(Sidenote: I didn’t notice that “effective altruism” thing before. Barf.)
The blurb suggests that this person writes specifically altruist articles (a suggestion that the piece is written for someone's benefit, which by proxy suggests that it's telling the truth). Because opinions are subjective, that conflicts pretty harshly with the framing of the piece. It gives the idea that it may in some way be an opinion based on fact when it simply isn't, because it cites no factual data that can be quantified whatsoever. This is literally how misinformation is spread. It doesn't have to be outright lies in order to be damaging.
The article talks about how new safety measures could be developed. It’s in the text. It just doesn’t conclude anything or talk about any specifics. That’s really my problem with it. What good is the opinion of the author? What are they basing this opinion on? There’s no substance to this writing at all.
It gives the idea that it may in some way be an opinion based on fact when it simply isn't, because it cites no factual data that can be quantified whatsoever.
This is also an opinion from you. Where’s your citation to support this statement? How do we know you’re not contributing to misinformation here?
Possibly because you read the article. But whatever, I guess. It is just my opinion, after all.
lol what?
There’s no way to write an article with that title and not have it be an opinion.