Agreed, really hoping they stick to refocusing on the browser.
That was the joke
Reported, rule 5
ISPs in the US are notorious for getting public funds for services that they never provide, so I wouldn’t be too concerned about that.
I will archive you!
Since myself and others had no issues with your float needle example, mind sharing what you searched for, and what Google returned?
Errrm,
Ma’am*
Sorry you just have a very raspy voice.
You, sir, are a genius
Tomato, tomato translates hilariously poorly in text, I’m dying
I’m not really following you, but I think we might be on similar paths. I’m just shooting in absolute darkness, so don’t put much weight on my guess.
What makes transformers brilliant is the attention mechanism. It’s brilliant in turn because it’s dynamic: the weights depend on your query (among other things). This is what lets the transformer distinguish between bat and bat, the animal and the stick.
You know what I bet they didn’t do in testing or training? A nonsensical query consisting of one word repeated thousands of times.
So my guess is simply that this query took the model so far out of its training space that the model weights have no ability to control the output in a reasonable way.
As for why it would output training data and not random nonsense? That’s a weak point in my understanding and I can only say “luck,” which is, of course, a way of saying I have no clue.
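For anyone curious what “dynamic, depending on your query” means concretely, here’s a minimal sketch of scaled dot-product attention in NumPy. The 2-d “embeddings” for bat/cave/swing are made up purely for illustration; real models learn high-dimensional versions of these during training.

```python
import numpy as np

def attention(Q, K, V):
    """Weight each value by how well its key matches the query (softmax of scaled dot products)."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax -> attention weights
    return weights @ V                                # context-weighted blend of values

# Toy 2-d embeddings: "bat" is ambiguous; "cave" and "swing" are disambiguating context.
bat   = np.array([[1.0, 1.0]])
cave  = np.array([[1.0, 0.0]])
swing = np.array([[0.0, 1.0]])

# Same query token ("bat"), different context -> different output representations.
animal_sense = attention(bat, np.vstack([bat, cave]),  np.vstack([bat, cave]))
stick_sense  = attention(bat, np.vstack([bat, swing]), np.vstack([bat, swing]))
print(animal_sense, stick_sense)  # the two "bat" vectors come out different
```

The point is that the output for the word “bat” isn’t fixed: it’s recomputed against whatever else is in the query, which is also why a query of one word repeated thousands of times is such a strange input for the mechanism.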
Not many. I prefer smaller trackers though. If you see a lot of popular torrents on larger trackers, you’ll have a bunch of concurrent active seeds.
If you permaseed, you don’t need to know individual trackers’ seeding requirements.
That’s why I’m here