• 0 Posts
  • 214 Comments
Joined 1 year ago
Cake day: June 9th, 2023


• It’s no different than a number in your bank’s database, except it’s in your custody, like cash.

    And it’s not a real currency, it’s a memecoin.

    Is your bank’s database a currency?

    No, my bank’s database is a database; it refers to a currency that is real because it is accepted for paying taxes, fines, etc.

    but I’m happy to teach you about the industry if you’re interested

    There’s nothing you could teach that would be valuable to learn. You seem to be in on the grift, looking for another person to get in on the pyramid scheme. Good luck with that, but I’m not interested.



  • USDC is absolutely a token on many different ledgers that represents a currency.

    No, it is a speculative investment. If it were a currency, people would be using it to buy things, accepting it when selling things, using it to pay taxes and fines, using it to invest in something else, etc.

    It’s not a currency, it’s at best some kind of intermediate thing used to buy even more speculative “investments”.


  • The customer was using Cloudflare IP addresses, which was causing a knock-on effect for the rest of Cloudflare’s customers and putting Cloudflare itself at risk as a business.

    Right, so sales should not be involved in any way.

    The alternative was for the customer to use their own IP addresses, as Cloudflare advised.

    Again, sales should not have been involved in any way.

    I’m not sure what you think ‘Business development’ teams do but I certainly wouldn’t be expecting engineering advice from them.

    They are at least not identical to sales. They work with sales, but there’s some engineering component to the job. In this case, if you were told you were meeting with the business development team, you’d expect talk about an engineering solution to the problem, not just paying Cloudflare more money.


  • I’m 100% on the side of CF.

    100%?

    We scheduled a call with their “Business Development” department. Turns out the meeting was with their Sales team,

    So we scheduled another call, now with their “Trust and Safety” team. But it turns out, we were actually talking to Sales again.

    This is the part that’s ridiculous to me. If Cloudflare thinks they’re violating the TOS, that’s fine. If they’re willing to let them continue with their business as-is as long as they pay more? That’s fine. But scheduling calls with one group and finding out it’s actually Cloudflare’s sales team on the phone, that’s ridiculous.





  • Interestingly, for a currency to actually be useful, there needs to be a demand for it: something that you can only pay for in that currency. For real currencies, that is normally taxes. England only accepts taxes paid in pounds, so there’s a demand for pounds from every person who has to pay taxes in England. For crypto, extortion is basically the only source of demand.

    Sure, occasionally there are places that accept both real currencies and cryptocurrencies, but for legit businesses almost none of the revenue comes from the crypto side. But, for ransomware, etc., the hackers only accept crypto. That means there’s a demand for crypto, which means that it has some value.



  • I don’t know what their motivation is, but I definitely hope they protect the identity of the voice actress. If her name gets out, it’s basically guaranteed her life will suck for a while.

    If she’s like 99% of actors, she’s someone just struggling to get work, who’s lucky if she can afford to rent an apartment without roommates. If her name got out, she’d almost certainly have to deal with death threats, stalkers, etc. Rich celebrities can deal with that kind of attention because they have the money to hire security people, PR people, lawyers, etc. Some random voice actor is not going to have those resources.





  • I mean allegedly ChatGPT passed the “bar exam” in 2023. Which I find ridiculous considering my experiences with ChatGPT and the accuracy and usefulness I get out of it, which isn’t that great at all.

    Exactly. If it passed the bar exam it’s because the correct solutions to the bar exam were in the training data.

    The other side can immediately tell that somebody has made an imitation without understanding the concept.

    No, they can’t. Just like people today think ChatGPT is intelligent despite it just being a fancy autocomplete. When it gets something obviously wrong they say those are “hallucinations”, but they don’t say they’re “hallucinations” when it happens to get things right, even though the process that produced those answers is identical. It’s just generating tokens that have a high likelihood of being the next word.
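
    As a rough sketch of what “fancy autocomplete” means mechanically (the vocabulary and probabilities below are invented for illustration; a real model scores tens of thousands of tokens with a neural network before one is sampled):

      import random

      # Invented probabilities for a handful of candidate next tokens;
      # a real LLM computes scores like these over a huge vocabulary.
      next_token_probs = {
          "mat": 0.45,
          "floor": 0.25,
          "roof": 0.15,
          "keyboard": 0.10,
          "moon": 0.05,
      }

      prompt = "The cat sat on the"

      # Sampling favours likely continuations, but "likely given the
      # preceding words" is the whole story -- there is no model of
      # cats or mats behind the choice.
      next_token = random.choices(
          list(next_token_probs),
          weights=list(next_token_probs.values()),
          k=1,
      )[0]
      print(prompt, next_token)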

    People are also fooled by parrots all the time. That doesn’t mean a parrot understands what it’s saying, it just means that people are prone to believe something is intelligent even if there’s nothing there.

    ChatGPT refuses to tell illegal things, NSFW things, also medical advice and a bunch of other things

    Sure, in theory. In practice, people keep finding ways around those blocks. The reason it’s so easy to bypass them is that ChatGPT has no understanding of anything. That means it can’t be taught concepts; it has to be taught specific rules, and people can always find a loophole to exploit. Yes, after spending hundreds of millions of dollars on contractors in low-wage countries they think they’re getting better at blocking those off, but people keep finding new vulnerabilities to exploit.


  • Yeah, that’s basically the idea I was expressing.

    Except, the original idea is about “Understanding Chinese”, which is a bit vague. You could argue that right now the best translation programs “understand Chinese”, at least enough to translate between Chinese and English. That is, they understand the rules of Chinese when it comes to subjects, verbs, objects, adverbs, adjectives, etc.

    The question is now whether they understand the concepts they’re translating.

    Like, imagine the Chinese government wanted to modify the program so that it was forbidden to talk about subjects the government considered off-limits. I don’t think any current LLM could do that, because doing that requires understanding concepts. Sure, you could ban keywords, but as attempts at Chinese censorship have shown over the years, people work around word bans all the time.
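
    A toy sketch of why a word ban is so weak, assuming an invented banned-word list; anything that expresses the idea without the literal string sails straight through:

      # Invented banned-word list for illustration.
      BANNED = {"protest", "demonstration"}

      def is_blocked(text: str) -> bool:
          # Matches literal strings, not concepts.
          lowered = text.lower()
          return any(word in lowered for word in BANNED)

      print(is_blocked("Was there a protest downtown?"))               # True
      print(is_blocked("Was there a pr0test downtown?"))               # False
      print(is_blocked("Did a big crowd gather downtown yesterday?"))  # False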

    That doesn’t mean that some future system won’t be able to understand concepts. It may have an LLM grafted onto it as a way to communicate with people. But, the LLM isn’t the part of the system that thinks about concepts. It’s the part of the system that generates plausible language. The concept-thinking part would be the part that did some prompt-engineering for the LLM so that the text the LLM generated matched the ideas it was trying to express.


  • The “learning” in an LLM is statistical information on sequences of words. There’s no learning of concepts or generalization.

    And what do you think language and words are for? To transport information.

    Yes, and humans used words for that and wrote it all down. Then an LLM came along, was force-fed all those words, and was able to imitate that by using big enough data sets. It’s like a parrot imitating the sound of someone’s voice. It can do it convincingly, but it has no concept of the content it’s using.

    How do you learn as a human when not from words?

    The words are merely the context for the learning for a human. If someone says “Don’t touch the stove, it’s hot” the important context is the stove, the pain of touching it, etc. If you feed an LLM 1000 scenarios involving the phrase “Don’t touch the stove, it’s hot”, it may be able to create unique dialogues containing those words, but it doesn’t actually understand pain or heat.
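
    As a toy sketch of that, assuming a tiny invented corpus: a bigram model that only counts which word follows which can already spit out new stove-flavoured sentences, with no notion of heat or pain.

      import random
      from collections import defaultdict

      # Invented miniature corpus for illustration.
      corpus = (
          "don't touch the stove it's hot . "
          "don't touch the kettle it's hot . "
          "the stove is hot ."
      ).split()

      # Record which word follows which -- pure sequence statistics.
      follows = defaultdict(list)
      for a, b in zip(corpus, corpus[1:]):
          follows[a].append(b)

      # Generate "plausible" text by walking those statistics.
      word, output = "don't", ["don't"]
      for _ in range(8):
          if word not in follows:
              break
          word = random.choice(follows[word])
          output.append(word)
      print(" ".join(output))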

    We record knowledge in books, can talk about abstract concepts

    Yes, and those books are only useful to someone with a lifetime of experience that lets them understand the concepts in the books. An LLM has no context; it can merely generate plausible books.

    Think of it this way. Say there’s a culture where, instead of the written word, people record history by weaving fabrics. When there’s a death they make a certain pattern, when there’s a war they use another pattern. A new birth is shown with yet another pattern. A good harvest is yet another one, and so on.

    Thousands of rugs from that culture are shipped to some guy in Europe, and he spends years studying them. He sees that pattern X often follows pattern Y, and that pattern Z only ever seems to appear following patterns R, S and T. After a while, he makes a fabric, and it’s shipped back to the people who originally made the weaves. They read a story of a great battle followed by lots of deaths, but then, surprisingly, great new births and years of great harvests. They figure that this stranger must understand how their system of recording events works. In reality, it was just an imitation of the art he had seen, with no understanding of the meaning at all.

    That’s what’s happening with LLMs, but some people are dumb enough to believe there’s intention hidden in there.


  • That is to force it to form models about concepts.

    It can’t make models about concepts. It can only make models about what words tend to follow other words. It has no understanding of the underlying concepts.

    You can see that by asking them to apply their knowledge to something they haven’t seen before

    That can’t happen because they don’t have knowledge, they only have sequences of words.

    For example, a cat is more closely related to a dog than to a tractor.

    The only way ML models “understand” that is in terms of words or pixels. When they’re generating text related to cats, the words they’re generating are closer to the words related to dogs than the words related to tractors. When dealing with images, it’s the same basic idea. But, there’s no understanding there. They don’t get that cats and dogs are related.
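
    A toy sketch of what “closer” means here, using made-up 3-dimensional word vectors (real embeddings have hundreds or thousands of dimensions and are learned from co-occurrence patterns in text, but the comparison works the same way):

      import math

      # Made-up vectors for illustration; real embeddings are learned, not hand-written.
      vectors = {
          "cat":     [0.90, 0.80, 0.10],
          "dog":     [0.85, 0.75, 0.20],
          "tractor": [0.10, 0.20, 0.95],
      }

      def norm(v):
          return math.sqrt(sum(x * x for x in v))

      def cosine(a, b):
          return sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))

      # "cat" scores as close to "dog" and far from "tractor" -- a fact
      # about the numbers, not about animals or machines.
      print(cosine(vectors["cat"], vectors["dog"]))      # high (~0.99)
      print(cosine(vectors["cat"], vectors["tractor"]))  # low (~0.29)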

    This is fundamentally different from how human minds work, where a baby learns that cats and dogs are similar before ever having a name for either of them.


  • Yeah. This is related to supernatural beliefs. If the grass moves it might just be a gust of wind, or it might be a snake. Even if snakes are rare, it’s better to be safe than sorry. But, that eventually leads to assuming that the drought is the result of an angry god, and not just some random natural phenomenon.

    So, brains are hard-wired to look for causes, even inventing supernatural causes, because it helps avoid snakes.