• 0 Posts
  • 21 Comments
Joined 4 months ago
Cake day: March 3rd, 2024



  • Understanding the variety of speech over a drive-thru speaker can be difficult even for a human with experience in the job. I can’t see the current level of voice recognition matching that, especially if it’s using an LLM to process whatever it managed to detect. If I’m placing a food order, I don’t need an LLM hallucination trying to fill in the blanks for words it didn’t convert correctly to tokens or wasn’t trained on.


  • Is it a physical (magnetic) hard drive, and is it making noise? I had one fail years ago (fortunately my only failure so far), and by persistently retrying reads via a USB recovery drive I managed to pull off enough of the data that was important. If it’s a newer SSD, that’s a different thing. It doesn’t mean all the data is gone, just that it’s a lot harder (read: $$$) to pull. Hopefully it’s just software or a loose cable.
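    The retry-and-skip idea above (which dedicated tools like GNU ddrescue implement at the device level) can be sketched in Python. This is a minimal illustration, not a recovery tool; the chunk-reader callback and its error model are hypothetical stand-ins for raw sector reads:

    ```python
    def salvage_reads(read_chunk, total_chunks, retries=5):
        """Try each chunk several times; skip chunks that keep failing.

        read_chunk(i) is a hypothetical callback that returns the bytes of
        chunk i, or raises OSError when the underlying sector is unreadable.
        Returns (recovered, bad): recovered maps chunk index -> data, and
        bad lists the chunks that failed every attempt.
        """
        recovered, bad = {}, []
        for i in range(total_chunks):
            for _attempt in range(retries):
                try:
                    recovered[i] = read_chunk(i)
                    break  # got this chunk, move on to the next
                except OSError:
                    continue  # transient failure: retry the same chunk
            else:
                bad.append(i)  # give up on this chunk, keep going
        return recovered, bad
    ```

    The point is the loop structure: a failing magnetic drive often returns good data on a later attempt, so retrying and then skipping lets you pull everything readable instead of stopping at the first error.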




  • The narrow-purpose models seem to be the most successful, which supports the idea that a general AI isn’t going to emerge from LLMs alone. It’s interesting that hallucinations are seen as a problem yet are probably part of why LLMs can be creative (much like humans). We shouldn’t try to eliminate them, just control when they happen and stay aware of when the AI is off the tracks. A group of different models working together and checking each other might work (and has probably already been tried; it’s hard to keep up).



  • It was a hook, and the media grabbed it. It’s really more a way to keep dividing people and locking them into their voting blocs. Trump won’t get anyone on the left to vote for him, but he has to keep those on the right in his camp. So these are tools to alienate each side from the other and secure the base.

    And it’s also him saying what he really thinks out loud, but it’s been shown time and again that he can do that and it won’t be those words that drive people away. His biggest fear is silence: if the media isn’t talking about him, people might drift to other places.



  • If anything, I think the development of actual AGI will come first and give us insight into why some organic mass can do what it does. I’ve seen many AI experts say that one reason they got into the field was to try to figure out the human brain indirectly. I’ve also seen one person (I can’t recall the name) say we already have a rudimentary form of AGI existing now: corporations.