Bad article title. This is the “Textbooks Are All You Need” paper from a few days ago. It’s programming-focused and, I think, Python-only. For general-purpose LLM use, LLaMA is still better.
I hear good things about Traefik. Basically all I need is a reverse proxy that will handle rewriting URLs and WebSockets, and slap some SSL and auth on it. If something is easier for that, I’m all ears.
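For anyone finding this later, here’s roughly what that looks like with Traefik v2’s Docker provider. This is a rough sketch, not a tested config: the service name, domain, path prefix, and the `letsencrypt` resolver name are all placeholders, and it assumes a cert resolver is already defined in Traefik’s static config. WebSockets are proxied out of the box in v2, so nothing extra is needed for those.

```yaml
# Sketch: docker-compose labels for one service behind Traefik v2.
# Assumes a cert resolver named "letsencrypt" exists in Traefik's
# static config; service/domain/prefix names are placeholders.
services:
  myapp:
    image: myapp:latest
    labels:
      - "traefik.enable=true"
      # Route + TLS on the HTTPS entrypoint
      - "traefik.http.routers.myapp.rule=Host(`app.example.com`)"
      - "traefik.http.routers.myapp.entrypoints=websecure"
      - "traefik.http.routers.myapp.tls.certresolver=letsencrypt"
      # URL rewriting: strip /app before proxying to the container
      - "traefik.http.middlewares.myapp-strip.stripprefix.prefixes=/app"
      # Basic auth (htpasswd-style user:hash; $ escaped as $$ for compose)
      - "traefik.http.middlewares.myapp-auth.basicauth.users=admin:$$apr1$$example$$hash"
      - "traefik.http.routers.myapp.middlewares=myapp-strip,myapp-auth"
```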
Yep, I’m using an RTX 2070 for that right now. The LLMs are just executing on the CPU.
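If anyone’s curious what CPU-only inference looks like, here’s a minimal sketch with llama-cpp-python; the model path and thread count are placeholders, not my actual setup:

```python
# Minimal CPU-only LLM inference sketch with llama-cpp-python.
# model_path is a placeholder; n_gpu_layers=0 keeps everything on the CPU.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/wizardlm-7b.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,       # context window
    n_threads=8,      # tune to your physical core count
    n_gpu_layers=0,   # force CPU execution
)

out = llm("Q: What is a reverse proxy? A:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```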
Do you recommend this email provider? Lots of people are looking to get off Gmail lately.
Are you running your own mail server? I’ve only ever integrated SpamAssassin with Postfix.
Stable Diffusion (Stability AI version), text-generation-webui (WizardLM), a text embedder service with Spacy, Bert and a bunch of sentence-transformer models, PiHole, Octoprint, Elasticsearch/Kibana for my IoT stuff, Jellyfin, Sonarr, FTB Minecraft (customized pack), a few personal apps I wrote myself (todo lists), SMB file shares, qBittorrent and Transmission (one dedicated to Sonarr)… Probably a ton of other stuff I’m forgetting.
Yup, mostly running pretrained models for text embedding and some generative stuff. No real fine-tuning.
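In case it helps anyone: the pretrained-embedding part really is about this simple with sentence-transformers. The model name here is just a common default, not necessarily what I run:

```python
# Embedding text with a pretrained sentence-transformers model.
# Model name is a common default, used here purely as an illustration.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
texts = ["self-hosting is fun", "I run my own mail server"]
embeddings = model.encode(texts, normalize_embeddings=True)
print(embeddings.shape)  # (2, 384) for this model
```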
Yup, typically we get into it after upgrading an older PC or something: instead of selling the parts, we just turn them into a server. You can also find all sorts of cheap, good stuff on eBay from off-lease office machines.
Wow, a reply I made in another community ended up under this one. Yeah, I’m doing a lot of work on local models and text embedding models for vector search.
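Since it came up, the vector-search side is roughly this shape: embed the corpus once, embed the query, rank by cosine similarity. A brute-force numpy sketch follows; a real setup would use an ANN index like FAISS at scale, and the model name is again just an example:

```python
# Brute-force vector search sketch: cosine similarity over normalized
# embeddings. Fine for small corpora; swap in an ANN index (e.g. FAISS)
# at scale. Model name is an example, not necessarily what I use.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Traefik is a reverse proxy with automatic TLS",
    "Jellyfin streams media from your own server",
    "PiHole blocks ads at the DNS level",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

query_vec = model.encode(["how do I get https on my services"],
                         normalize_embeddings=True)[0]

# With unit vectors, cosine similarity is just a dot product.
scores = doc_vecs @ query_vec
for i in np.argsort(-scores):
    print(f"{scores[i]:.3f}  {docs[i]}")
```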
I hate these filthy neutrals…
I paid $1100 for a 3070 during the pandemic with a Newegg bundle deal (trash stuff they couldn’t sell). I already had a 2070, and it was a complete waste of money.
These are amazing. Dell, Lenovo, and I think HP made these tiny things, and they were so much easier to get than Pis during the shortage. Plus they’re incredibly fast in comparison.