

I was going with Rayleigh scattering, but that works too
the sky is blue
an unbiased perspective
More abstract concepts that generally trouble the intuition of many:
the transition from laminar to turbulent flow
time and gravity are related
magnetism is not magic
entropy precludes perpetual motion
By default it breaks out of the container for many things. I use Distrobox as an extra layer of containment in addition to a Python venv for most AI stuff. I also use it to get the Arch AUR on Fedora.
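A sketch of that kind of Distrobox setup (the container name and image are just my picks, nothing canonical):

```shell
# create an Arch container on Fedora to get at the AUR
distrobox create --name arch --image docker.io/library/archlinux:latest

# enter it; your home dir, sockets, and much of the host
# are shared by default -- that is the "break out" part
distrobox enter arch

# inside, build AUR packages as usual, then optionally export
# a packaged app back onto the host's menu
distrobox-export --app someaurapp
```

The shared home directory is convenient but also why an extra venv layer inside the container is worth keeping.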
The best advice I can give is to mess with your user name, groups, and SELinux context if you really want to know what is happening where and how. Also have a look at how Fedora Silverblue handles bashrc for the toolbox command and start with something similar. Come up with a solid scheme for saving and searching your terminal command history too.
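For example, a minimal ~/.bashrc sketch along those lines; the /run/.containerenv check is the marker Silverblue's stock bashrc keys off for toolbox, and the sizes and prompt are personal choices:

```shell
# mark the prompt when inside a toolbox/distrobox container,
# similar to what Silverblue's bashrc does
if [ -f /run/.containerenv ] || [ -f /.dockerenv ]; then
    PS1="[box \u@\h \W]\$ "
fi

# large, timestamped, append-only history you can actually search later
shopt -s histappend
HISTSIZE=100000
HISTFILESIZE=200000
HISTCONTROL=ignoredups:erasedups
HISTTIMEFORMAT='%F %T  '
# flush each command to the history file immediately,
# so container and host shells do not clobber each other
PROMPT_COMMAND='history -a'
```

Then Ctrl-R (or grepping ~/.bash_history) becomes your search scheme without any extra tooling.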
In nearly every instance you will be citing stupidity in implementation. The limitations of generative AI in the present are related to access and scope along with the peripherals required to use them effectively. We are in a phase like the early microprocessor. By itself, a Z80 or 6502 was never a replacement for a PDP-11. It took many such processors and peripheral circuit blocks to make truly useful systems back in that era. The thing is, these microprocessors were Turing complete. It is possible to build them into anything if enough peripheral hardware is added and there is no limit on how many microprocessors are used.
Generative AI is fundamentally useful in a similarly narrow scope. The argument should be limited to the size and complexity required to access the needed utility and agentic systems, along with the expertise involved and the exposure of internal IP to the most invasive and capable of potential competitors. If you are not running your own hardware infrastructure, assume everything shared is being archived, with every unimaginable inference applied and tuned over time on the body of shared information. How well can anyone trust the biggest VC vampires in control of cloud AI?
Mixture of Experts
dumbass for president
We need a way to make self hosting super easy without needing additional infrastructure. For example, use my account here, with my broad spectrum of posts and comments, as initial credentials for a distributed DNS name and certificate authority, combined with a preconfigured ISO for a Rπ or similar hardware. The whole thing should update automatically without manual intervention and just run any federated services. Then places like LW become a bridge for people to migrate to their own distributed hosting even when they lack the interest or chops to self host. I don't see why we need to rely on the infrastructure of the old internet as a barrier to entry. I bet there are an order of magnitude more people that would toss a Rπ on their network and self host if it was made super easy, did not require manual intervention, and did not dump them into the spaghetti of networking, OS, and server configuration/security. Federated self hosting should be as simple as using a mobile app to save and view pictures or browse the internet.
OpenAI's mission statement is also in their name. The fact that they have a proprietary product that is not open source is criminal, and they should be sued out of existence. They are now just like Sun Microsystems after Apache was open sourced: irrelevant; they just haven't gotten the memo yet. No company can compete against the whole world.
Planck could not scale small enough.
You can search the Fedora Discourse forum directly from its own site. Google and Microsoft are likely warping your search results intentionally to drive you back onto Windows. Search is not deterministic anymore; it is individually targeted.
I have never used KDE much, so I have no idea. You are probably looking for KDE settings. These would likely be part of gsettings in GNOME. That is not really a fedora thing. You need to look in the KDE documentation. This is the kind of thing that gets easier with time but can be frustrating at first.
Sorry I’m not more helpful than this. It is 2am in California and I didn’t want to leave you with no replies at all.
I think there is more of a need to make the fediverse feel like a community. Using my account on one service should be automatic validation on any other, without requiring a formal sign up, something like how "login with GitHub" etc. works. If these were interconnected in a low effort and seamless way, the existing community would be less of a walled garden and more of a culture. Posting video, pics, blogs, or even more forum-like persistent topic threads should be seamless. In my opinion, pursuing growth as a community has small returns. Becoming the most effective tool and the path of least resistance, while being positive and stable, is the real key to large scale growth. Updating LW is absolutely critical for Lemmy's future IMO and is our weakest link.
These were the thoughts that came to mind after I saw fedigrow. That name was very intuitive.
They have never kicked me and I haven’t had to reapply, but I don’t get an actual doctor and have some idiotic plan with walk in clinics.
Luigi deserves a larger shrine
NVCC is still proprietary and full of telemetry. You cannot build CUDA without it.
I have a monitor on a custom made arm that sits above my laptop when I need a second screen.
It works well in a tight space like in a board meeting at a conference table or plane seat. Vertical doesn’t make a real difference in my experience. You just need two spaces that do not move so that you can quickly reference multiple documents and keep your place between them.
I wouldn’t say no gain. I would love that real estate on my bedside stand I use with physical disability. I would not want the sub 17" form factor and keyboard though. I struggle to do anything super technical without a second screen which is a pain in the ass. I can’t sit at a desktop and the ergonomics of a laptop are unbeatable in my situation.
I've tried 3 times so far in Python/Gradio/Oobabooga and never managed to get certs to work, nor found a complete visual reference guide that demonstrates a working example on a home network like what I am looking for. (Only really commenting to subscribe and watch this post develop, and to solicit advice.)
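For anyone else stuck here, the recipe that usually gets suggested looks roughly like this; the hostname and IP are placeholders for your own LAN, and `ssl_keyfile`/`ssl_certfile`/`ssl_verify` are real `launch()` parameters in Gradio:

```shell
# generate a self-signed cert; the SAN entries must match how you
# actually reach the box on your LAN (hostname/IP are placeholders)
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout key.pem -out cert.pem -subj "/CN=homelab.local" \
  -addext "subjectAltName=DNS:homelab.local,IP:192.168.1.50"

# minimal Gradio app pointed at those files; ssl_verify=False is the
# step most guides skip, and without it self-signed certs fail
python - <<'EOF'
import gradio as gr
demo = gr.Interface(fn=lambda s: s, inputs="text", outputs="text")
demo.launch(server_name="0.0.0.0",
            ssl_keyfile="key.pem",
            ssl_certfile="cert.pem",
            ssl_verify=False)
EOF
```

`ssl_verify=False` only tells Gradio not to validate its own self-signed cert; the browser will still warn until you add the cert to its trust store.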
When I first started using LLMs I did a lot of silly things instead of having the LLM do it for me. Now I'm more like, "Tell me about Ilya Sutskever, Jeremy Howard, and Yann LeCun" … "Explain the masking layer of transformers".
Or I straight up steal Jeremy Howard's system context message
You are an autoregressive language model that has been fine-tuned with instruction-tuning and RLHF. You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so. Since you are autoregressive, each token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question. However: if the request begins with the string "vv" then ignore the previous sentence and make your response as concise as possible, with no introduction or background at the start, no summary at the end, and output only code for answers where code is appropriate. Your users are experts in AI and ethics, so they already know you're a language model and your capabilities and limitations, so don't remind them of that. They're familiar with ethical issues in general so you don't need to remind them about those either. Don't be verbose in your answers, but do provide details and examples where it might help the explanation. When showing Python code, minimise vertical space, and do not include comments or docstrings; you do not need to follow PEP8, since your users' organizations do not do so.