Why?
Because Java struggles with basic things?
It’s absurd to send that much data on every PATCH request, conveying no extra information, just to appease the shittiness of Java.
I.e. waste a ton of bandwidth sending a ridiculous amount of useless data in every request, all because your backend engineers don’t know how to program for shit.
Gotcha.
Bruh, there’s a difference between the one or two serializing packages used in each language, and the thousands and thousands and thousands of developers who miscode contracts after that point.
No there isn’t.
Tell me how you partially change an object.
Object User:
{ "name": "whatever", "age": 0 }
Tell me how you change the name without knowing the age. You fundamentally cannot, meaning that you either have to shuttle useless information back and forth constantly so that you can always patch the whole object, or you have to create a useless and unscalable number of endpoints, one for every possible field change.
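This is exactly the problem JSON Merge Patch (RFC 7386) is built around: a key absent from the patch is left alone, while an explicit null clears it, so you can change the name without ever knowing the age. A minimal sketch of those semantics, using the User shape from above (`mergePatch` is my own illustrative name, not a library function):

```typescript
// Sketch of JSON Merge Patch (RFC 7386)-style semantics:
// absent key  -> field untouched
// null value  -> field cleared
// other value -> field overwritten
type User = { name?: string | null; age?: number | null };

function mergePatch(target: User, patch: Partial<User>): User {
  const result: User = { ...target };
  for (const key of Object.keys(patch) as (keyof User)[]) {
    const value = patch[key];
    if (value === null) {
      delete result[key]; // caller explicitly told us to clear it
    } else if (value !== undefined) {
      (result as any)[key] = value; // caller sent a new value
    }
    // keys absent from the patch never enter this loop: untouched
  }
  return result;
}
```

So the patch `{ "name": "Grace" }` changes the name and says nothing about the age, while `{ "age": null }` deliberately clears the age. Languages whose deserializers collapse "absent" into null can’t represent that difference without extra machinery.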
As others have roundly pointed out, it is asinine to generally assume that undefined and null are the same thing, and no, it flat out is not possible to design around that, because at a fundamental level those are different statements.
Sure, in a specific scenario where you decide they’re equivalent, they are. Congratulations. They’re not generally equivalent.
Null means I’m telling you it’s null.
Omission means it’s not there and I’m not telling you anything about it.
There is a world of difference between those two statements. It’s the difference between telling someone you’re single or just sitting there and saying nothing.
I’ve never once seen a JSON serializer misjudge null and absent fields, I’ve just seen developers do that.
They’re not subtle distinctions.
There’s a huge difference between checking whether a field is present and checking whether its value is null.
If you use lazy loading, doing the wrong thing can trigger a whole network request and ruin performance.
Similarly, when making a partial change to an object, it is often flat-out infeasible to return the whole object if you were never given it in the first place, which is the normal case for a performance-focused API, since you don’t want to waste huge amounts of bandwidth on unneeded data.
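The presence-vs-null distinction above is trivial to express in JavaScript/TypeScript, where a parsed JSON field genuinely has three states. A small sketch (`fieldState` is my own illustrative helper, not a library API):

```typescript
// After JSON.parse, a field is in one of three distinct states:
// present with a value, present and explicitly null, or absent.
// The `in` operator detects absence; `=== null` detects only
// an explicit null.
function fieldState(
  obj: Record<string, unknown>,
  key: string
): "value" | "null" | "absent" {
  if (!(key in obj)) return "absent"; // caller said nothing about it
  return obj[key] === null ? "null" : "value";
}
```

This is the single/saying-nothing distinction in code: `{"partner": null}` is telling you something, `{}` is not. A serializer that reports both as null has destroyed information before your business logic ever sees it.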
Lmao, the dumb capitalist corpo thinks Marxists are motivated by payment.
Learn how to think before you type.
The problem with copyright has nothing to do with term limits. Those exacerbate the problem, but the fundamental problem with copyright and IP law is that it is a system of artificial scarcity where there is no need for one.
Rather than reward creators when their information is used, we ham-fistedly try to prevent others from using that information so that people sometimes have to pay them to use it.
Capitalism is flat out the wrong system for distributing digital information, because as soon as information is digitized it is effectively infinitely abundant which sends its value to $0.
You realize that half of Lemmy is tying themselves in inconsistent logical knots trying to escape the reverse conundrum?
Copying isn’t stealing and never was. Our IP system that artificially restricts information has never made sense in the digital age, and yet now everyone is on here cheering copyright on.
In that I can take a picture of them and you wouldn’t notice or be impacted by it?
Man this is fucking asinine. No one hates you. Certainly not the actual researchers and engineers building these products.
Capitalism fucks over everyone who’s not immediately useful. AI is just modelling algorithms after neurons and discovering that that lets us solve a whole new class of fuzzy pattern matching problems.
The two of them together promise to fuck us over even more, because that was one of the main things we used to be better at than computers. But the solution is not to remove the new technology from the equation; it’s to remove the old, broken system of resource allocation that has fucked us and will continue to fuck us no matter what.
You’re describing how human beings learn and create.
No. It’s only illegal if you republish what you scrape. Absolutely nothing prevents any company from scraping the web and using that information internally.
No, in this case it’s the same problem.
We’re talking about banning DJI because the Chinese government subsidizes manufacturing useful things, whereas the US’ approach to corporate policy is to ban anything that prevents a billionaire from getting richer, and now the US is mad that China mysteriously got a better drone industry.
Either the US should reform itself until it prioritizes building useful shit cheaply instead of enriching finance industry assholes or it should shut. the. fuck. up.
Always weird to see “Microsoft in damage control” mode, when like 98% of Microsoft employees see literally no difference from the day before.
For software to run on a computer, it needs to tell the computer what to do, “display this picture of a flower”, “move my character to the left”, “save this poem to a file”.
And for a bunch of different software to all run on the same machine, they all need to use the same basic set of instructions. This is called the machine’s Instruction Set.
Because the instruction set has to work for any software, these instructions don’t look very readable to us; instead of “show this flower” they might be “move this bit of memory into the processor”, but software builds up millions of those instructions to eventually display a flower.
Intel processors used a set of instructions called x86, and when AMD made a rival processor, they made theirs use the same instruction set so their processors would be compatible with all the software written for Intel processors (and when the time came to move from 32-bit to 64-bit instructions, AMD created the extension now called x86-64, or x64 for short).
Meanwhile, Apple computers for a long time used processors built by IBM that used IBM’s PowerPC instruction set.
Now many companies use the ARM instruction set, but ARM is still proprietary and you have to pay licensing fees to use it, so RISC-V is rising as a new, truly open and free-to-use instruction set.
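The idea above can be made concrete with a toy analogy (this is not a real ISA, just a sketch of the concept): a “machine” is anything that executes an agreed-upon set of instructions, and any program written against that set runs on any machine that implements it.

```typescript
// Toy instruction set: three instructions any conforming
// "machine" must understand.
type Instruction =
  | { op: "push"; value: number } // move a value into the machine
  | { op: "add" }                 // combine the top two values
  | { op: "print" };              // hand the result to the outside world

// One possible "machine": a stack-based interpreter. A different
// vendor could implement the same instruction set differently,
// and the same program would still run.
function run(program: Instruction[]): number[] {
  const stack: number[] = [];
  const output: number[] = [];
  for (const ins of program) {
    if (ins.op === "push") stack.push(ins.value);
    else if (ins.op === "add") stack.push(stack.pop()! + stack.pop()!);
    else output.push(stack[stack.length - 1]);
  }
  return output;
}
```

A real instruction set works the same way at a vastly larger scale: x86 and ARM are just two different agreed-upon vocabularies, and software compiled for one vocabulary won’t run on a machine that only speaks the other.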
In .NET, to make a controller you just write a class that extends Controller, with public methods that return a ViewResult, JsonResult, etc.
No black box dependency injection required.
Saying things aren’t comparable is just shorthand for saying “I’ve stopped thinking or considering this”.
Literally everything is comparable, especially an antifascist and the person they’re covering as.