Yoko, Shinobu ni, eto… (“Yoko, Shinobu, and, um…”) 🤔
The people of Israel live (עַם יִשְׂרָאֵל חַי). Glory to Ukraine (Slava Ukraini) 🇺🇦 ❤️ 🇮🇱
This would be a meme by itself:
lacks some cheese IMO
for the math homies, you could say that NaN is an absorbing element
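For anyone who wants to see that absorbing behaviour in action, here’s a minimal Python sketch (plain IEEE 754 floats, nothing library-specific):

```python
# Once NaN enters an arithmetic expression, it keeps propagating:
# it acts as an absorbing element for +, -, *, /.
import math

nan = float("nan")

print(nan + 1.0)              # nan
print(nan * 0.0)              # nan
print((nan - nan) / 2)        # nan
print(math.isnan(nan + 1e9))  # True
print(nan == nan)             # False -- NaN isn't even equal to itself
```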
Omae wa mou shindeiru (“You are already dead”)
Let them fight among themselves and prove time and time again that patents are idiotic and hinder innovation.
Yup, they already forced Google to announce that they’ll add such a choice screen for the search engine and web browser on Android: https://www.neowin.net/news/google-will-add-new-search-and-browser-choice-screens-for-android-phones-in-europe/
It’s only a matter of time before Microsoft does so too.
ollama should be much easier to set up!
ROCm is decent right now: I can do deep learning work and CUDA programming with it on an AMD APU. However, ollama doesn’t work out of the box with APUs yet, though users report that it works with dedicated AMD GPUs.
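If you want to sanity-check a ROCm setup, here’s a minimal sketch, assuming a ROCm build of PyTorch (which exposes AMD GPUs through the regular torch.cuda API):

```python
# Quick sanity check for a ROCm PyTorch install (assumption: a ROCm build of
# torch is installed; it reuses the torch.cuda namespace for AMD GPUs).
import torch

print(torch.cuda.is_available())          # True if ROCm can see the GPU/APU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # reports the AMD device name
    x = torch.randn(1024, 1024, device="cuda")
    print((x @ x).sum().item())           # the matmul actually runs on the GPU
```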
As for Mixtral 8x7B, I couldn’t run it on a system with 32GB of RAM and an RTX 2070S with 8GB of VRAM; I’ll probably try another system soon. [EDIT: I actually got the default version (mixtral:instruct) running with 32GB of RAM and 8GB of VRAM (RTX 2070S).] That same system also runs CodeLlama-34B fine.
So far I’m happy with Mistral 7B: it’s extremely fast on my RTX 2070S, and it’s not really slow when running in CPU mode on an AMD Ryzen 7. Its speed is okay-ish (~1 token/sec) when I try it in CPU mode on an old ThinkPad T480 with an 8th-gen i5 CPU.
PSA: give open-source LLMs a try, folks. If you’re on Linux or macOS, ollama makes it incredibly easy to try most of the popular open-source LLMs like Mistral 7B, Mixtral 8x7B, CodeLlama, etc. Obviously it’s faster if you have a CUDA/ROCm-capable GPU, but it works in CPU mode too (albeit slowly if the model is huge), provided you have enough RAM.
You can combine that with a UI like ollama-webui or a text-based UI like oterm.
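If you’d rather script it than use a UI, here’s a minimal sketch of calling a locally running ollama server from Python via its REST API (assumptions: ollama is serving on its default port 11434 and the mistral model has already been pulled):

```python
# Minimal sketch: ask a locally running ollama server for a completion.
# Assumptions: ollama listens on the default http://localhost:11434 and
# the "mistral" model was pulled beforehand (`ollama pull mistral`).
import json
import urllib.request

payload = {
    "model": "mistral",
    "prompt": "Why is the sky blue?",
    "stream": False,  # return one JSON object instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```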
Hmm, I don’t think it’s because of that feature, because it only runs when you explicitly ask it to translate a page for you. You should probably check your extensions and see if you have some redundant ones (a mistake people make is using multiple ad-blockers/anti-trackers, when just uBlock Origin + Firefox’s defaults are usually good enough).
Yup, Firefox has it: https://browser.mt/ (it’s now a native part of Firefox)
Microsoft really wants someone to remind it of those days:
RedReader gets barely a glance, in a single sentence. A single dev (with users contributing PRs) has maintained one of the best, and least known, apps for over a decade now.
RedReader is definitely a gem. Incredible app that still works despite the Reddit appocalypse.
But for a moment I was like wow, 100FPS in software rendering
Thank you, that exactly was my point.
Because the title is still vague, and yes, GPU and “graphics card” are often used interchangeably on the internet (examples: https://www.hp.com/gb-en/shop/tech-takes/integrated-vs-dedicated-graphics-cards and https://www.ubisoft.com/en-us/help/connectivity-and-performance/article/switching-to-your-pcs-dedicated-gpu/000081045 ).
“New CPU hits 132fps” could wrongly suggest software rendering, which is very different (see for example https://www.gamedeveloper.com/game-platforms/rad-launches-pixomatic----new-software-renderer ) and died more than a decade ago.
A bit misleading: what is meant is that no dedicated GPU is being used. The integrated GPU in the APU is still a GPU. But yes, AMD’s recent APUs are amazing for folks who don’t want to spend too much to get a reasonable gaming setup.
just nostalgia
Surely mostly nostalgia. But I do remember feeling a sense of accomplishment whenever I managed to run a game and get the sound working 😅
Yeah, it’s not Linux. It’s forked off MenuetOS (https://menuetos.net/ ), which is a hobby OS written entirely in assembly (FASM flavor, https://flatassembler.net/ ).
Not gonna lie, part of me wants to relive the SoundBlaster and DOS extenders era and watch stuff with QuickTime. Tinkering with config.sys and autoexec.bat was quite fun back then.
In the 2000s we had AdSense. So now we’re getting… AISense?