• 0 Posts
  • 38 Comments
Joined 11 months ago
Cake day: August 2nd, 2023

  • Yup, this is the real-world take IME. Code should be self-documenting; really the only exception is the “why,” because the code itself already explains the “how,” as you said.
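To illustrate the “why, not how” rule, here’s a minimal Python sketch (the rate-limiting scenario is hypothetical): the comment records a reason the code can’t express, instead of restating what the code does.

```python
import time

def fetch_with_backoff(fetch, base_delay=0.1, max_retries=3):
    """Call `fetch`, retrying on transient connection errors."""
    for attempt in range(max_retries):
        try:
            return fetch()
        except ConnectionError:
            # Why, not how: the upstream API rate-limits bursts, so we back
            # off exponentially instead of hammering it with instant retries.
            time.sleep(base_delay * 2 ** attempt)
    return fetch()  # final attempt; let any error propagate to the caller
```

A comment like “sleep for base_delay * 2^attempt seconds” would just restate the code; the “why” comment carries information the code can’t.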

    Now, there are sometimes less-than-ideal environments. At my last job we were doing Scala development, and that language is expressive enough to let you write truly self-documenting code. Python can’t match this, so you need comments at times (in earlier versions of Python, type annotations were specially formatted literal comments; now they’re glorified comments because they look like real annotations but do nothing at runtime).
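A quick Python sketch of that point: the annotations are kept as metadata but never checked or enforced at runtime.

```python
def add(x: int, y: int) -> int:
    return x + y

# The hints are stored as metadata...
print(add.__annotations__)  # {'x': <class 'int'>, 'y': <class 'int'>, 'return': <class 'int'>}

# ...but nothing enforces them at runtime: this call "succeeds" anyway.
print(add("a", "b"))  # ab
```

(External checkers like mypy read these hints; the interpreter itself ignores them.)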





  • If you didn’t have an agenda/preconceived conclusion you wanted proven, you’d understand that no credible scientist has ever claimed a single study proves anything, ever.

    Only people who don’t understand how data works will say a single study from a single university proves anything, let alone anything about a model with billions of parameters applied to a field as broad as “programming”.

    I could feed GPT “programming” tasks that I know it would fail 100% of the time, and I could feed it “programming” tasks I know it would succeed on 100% of the time. If you think LLMs have nothing to offer programmers, you have no idea how to use them. I’ve been successfully using GPT4T for months now, and it’s been very good. It’s better in static environments where it can be fed compiler errors to fix itself continually (if you ever looked at more than a headline about GPT performance, you’d know there’s a substantial difference between zero-shot and 3-shot performance).

    Bugs exist, but code heavily written by LLMs has not been shown to be any more or less buggy than code heavily written by junior devs. Our internal metrics have them within any reasonable margin of error (senior+GPT recently beat out senior+junior, but it’s been flipping back and forth), and senior+GPT tickets get done much faster. The downside is that GPT doesn’t grow into a senior, where a junior does with years of training. Still, 2 years ago LLMs were at a 5th-grade coding level on average, and going from 5th grade to surpassing college level and matching junior output is a massive feat, even if some luddites like yourself refuse to accept it.


  • In my line of work (programming) they absolutely do not have a 52% failure rate by any reasonable definition of the word “failure”. More than 9 times out of 10 they’ll produce code of at least junior quality. It won’t be the best code, and sometimes it’ll have trivial mistakes in it, but junior developers do the same thing.

    The main issue is confidence: it’s essentially like having a junior developer who is way overconfident, for 1/1000th of the cost. This is extremely manageable, and June 2024 is not the end-all be-all of LLMs. Even if LLMs only got worse from here, and this is the literal peak, they will still reshape entire industries. Junior developers cannot find jobs, and with the massive reduction in junior devs we’ll see a massive reduction in senior devs down the line.

    In the short term the same quality of work will be done with far, far fewer programmers required. In 10-20 years’ time, if we get literally no progress in the field of LLMs or other model architectures, then yeah, it’s going to be fucked. If there is advancement to the degree of replacing senior developers, then humans won’t be required anyway, and we’re still fucked (assuming we still live in a capitalist society). In a proper society less work would actually be a positive for humanity, but under capitalism less work is an existential threat to our existence.


  • Any chance you have an Nvidia card? Nvidia has long been in a worse spot on Linux than AMD, which interestingly is the inverse of the situation on Windows. A lot of AMD users complain of driver issues on Windows and swap to Nvidia as a result, and the exact opposite happens on Linux.

    Nvidia is getting much better on Linux though, and Wayland + explicit sync is coming down the pipeline. With NVK, in a couple of years it’s quite possible the Nvidia and AMD Linux experiences will be very similar.


  • 20 year olds are not generally getting night terrors from watching disturbing content on TikTok. They’re not losing sleep or coming away with genuine psychological scarring. We don’t need government regulations to control media content for the sake of literal adults. And children, in theory, should already have their content moderated to the correct degree by parents, not the government.

    It’s just content I find dumb

    If you watch anything on YouTube that you don’t think is dumb, there is stuff on TikTok you also wouldn’t find dumb. I don’t use TikTok either, but I think you genuinely underestimate how much content there is, and overestimate how uniform that content is.

    Considering the country that runs it (…)

    ByteDance already stores U.S user data within the U.S, allows third-party firms to scrutinize its data privacy policies far more than any other U.S media group, and has come back with a clean bill of health from groups like Citizen Lab (a Canadian research lab). No U.S user data goes to the Chinese government.

    Government officials know this, they’re just putting on a show. Leaked phone calls have made this clear, the actual issue is the lack of policing around the kinds of content served. ByteDance is not aligned with U.S foreign policy interests like Meta/Google are. They are more than happy to showcase the horrors of the apartheid, genocidal state of Israel, and that’s having a real impact on the literal more than half of Americans that use TikTok.

    It’s clearly against the YouTube T.O.S

    Videos of the October 7th attacks that are clearly against YouTube’s T.O.S have been on the platform since October of last year. YouTube is much more strict about removing videos showcasing the much larger-in-scale violent acts done by Israel than anything done by Hamas. TikTok isn’t. This isn’t a coincidence, and the U.S needs TikTok to fall in line here.

    If they don’t, young people will continue to hold extreme views, like thinking that bombing tens of thousands of children in an open-air prison that has been violating GCIV since 2007 is somehow problematic. They need the American public to have the understanding that Palestinians are simply human animals; they’re savages that need to be put down. Not unlike Native Americans.

    Towards the end of the culling, when enough of the population has died to no longer pose a threat, they’ll give them small territories like the U.S did with Native Americans and feign sympathy. Imperialism hasn’t changed.


  • When we say younger, we might just be talking about different age groups. I imagine 16-30, and in that age range you’re not likely to come away with severe psychological scarring, but you will be deeply upset and that’s a good thing (we shouldn’t ignore genocide, we should be upset by it). Being upset leads to change.

    If you’re talking about like 10 year olds watching it, sure I can agree. They can’t really do anything about it. They can’t go out and protest, or advocate for change, or vote, etc. Plus they’re much more likely to have genuine scarring. Issues sleeping, night terrors, trouble concentrating, etc.

    As for “that content is dumb”, I assume you’re talking about tiktok in general. And again, for some people it’s definitely not dumb. People get served different things. Tiktok isn’t a platform trying to do good in the world, like any other social media platform it’s trying to drive engagement. However, it’s one of the few social media platforms outside of the U.S media interest groups, and that’s why the U.S is either banning them or forcing them to sell.

    The end goal is to censor all of that raw footage of genocide, because it changes views. When you can hide behind rhetoric and not show how horrific the mass bombings are, you get a lot more leeway. That’s good for Israel, and why AIPAC and other Israel lobbies are the main forces behind this push in the U.S. In the end, the ban is bad for humanity (will allow the genocide to escalate without public backlash), but will be good for Israel and U.S elites.




  • I don’t use tiktok, but some people have unusually based tiktok feeds. They can get direct footage from the genocide happening in Gaza, for example. I never get that recommended on YouTube, despite my very obvious socialist leanings, watching pro-Palestine content, etc.

    This is the actual reason TikTok is being banned (if they don’t sell) after the election. AIPAC, one of the largest lobbying groups in America, operating in probably the most well-funded policy category (pro-Israel policy), backs most of Congress. They’ve determined TikTok has far too much influence on American youth, and has made the Israel/Palestine divide a young/old divide more so than a left/right divide.

    There’s already a strong correlation between political leaning and age, which is problematic for the future of the fascist movement in America, but this issue falls outside the norm. You’ll find a lot of young conservatives calling for an end to the needless killing of civilians. They won’t call it a genocide because admitting Israel is a genocidal apartheid state is too far for them, but they can at least admit killing tens of thousands of children is not the right path here.

    That kind of extremism (e.g not greenlighting any amount of culling of “human animals” Israel feels it needs to do) is unacceptable to the pro-Israel lobby, and they’re not used to getting this kind of pushback from the American public.



  • Nevoic@lemm.ee to Technology@lemmy.world · Hello GPT-4o
    2 months ago

    “They can’t learn anything” is too reductive. Try feeding GPT4 the specification for a language that didn’t exist at the time of its training, then tell it to program in that language using a library you also provide.

    It won’t do well, but neither would a junior developer in raw vim/nano without compiler/linter feedback. It will roughly construct something that looks like the new language you fed it, despite never having been trained on it. This is something LLMs can in theory do well, so GPT5/6/etc. will do better, perhaps as well as any professional human programmer.

    Their context windows have increased many times over. We’re no longer operating in the 4/8k range, but instead the 128k-1024k range. That’s enough context to, from the perspective of an observer, learn an entirely new language and framework, and then write something almost usable in them. And 2024 isn’t the end for context window size.

    With the right tools (e.g. input compiler errors and have the LLM reflect on how to fix them), you’d get even more reliability with just modern-day LLMs. Get something more reliable, and effectively it’ll do what we do by learning.
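A minimal sketch of that compile-and-reflect loop, with `suggest_fix` as a hypothetical stand-in for the LLM call:

```python
def compile_errors(source: str) -> str:
    """Return the syntax error for `source`, or '' if it compiles."""
    try:
        compile(source, "<candidate>", "exec")
        return ""
    except SyntaxError as e:
        return f"line {e.lineno}: {e.msg}"

def repair_loop(source: str, suggest_fix, max_rounds: int = 3) -> str:
    # Feed compiler errors back to the model (`suggest_fix` stands in for an
    # LLM call) until the candidate compiles or we run out of rounds.
    for _ in range(max_rounds):
        errors = compile_errors(source)
        if not errors:
            return source
        source = suggest_fix(source, errors)
    return source
```

This is roughly the zero-shot vs. multi-shot difference: each round gives the model concrete feedback to react to instead of a single blind attempt.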

    So much work in programming isn’t novel. You’re not making something really new, but instead piecing together work other people did. Even when you make an entirely new library, it’s using a language someone else wrote, libraries other people wrote, in an editor someone else wrote, on an O.S someone else wrote. We’re all standing on the shoulders of giants.


  • Nevoic@lemm.ee to Technology@lemmy.world · Hello GPT-4o
    2 months ago

    18 months ago, ChatGPT didn’t exist and GPT3.5 wasn’t publicly available.

    At that same point 18 months ago, iPhone 14 was available. Now we have the iPhone 15.

    People have gotten used to LLMs/AI developing much faster than other tech, but you really have to keep in perspective how different this tech was 18 months ago. Comparing LLM and smartphone plateaus is just silly at the moment.

    Yes, they’ve been refining the GPT4 model for about a year now, but we’ve also got major competitors in the space that didn’t exist 12 months ago. We got multimodality that didn’t exist 12 months ago. Sora is mind-bogglingly realistic; it didn’t exist 12 months ago.

    GPT5 is just a few months away. If 4->5 is anything like 3->4, my career as a programmer will be over in the next 5 years. GPT4 already consistently outperforms college students that I help, and can often match junior developers in terms of reliability (though with far more confidence, which is problematic obviously). I don’t think people realize how big of a deal that is.






  • Nevoic@lemm.ee to Technology@lemmy.world · Tesla scraps its plan for a $25,000 Model 2 EV
    3 months ago

    Depends on what you’re looking for. I had a high-paying tech job (layoffs op), and I wanted a fun car that accelerates fast but is also a good daily driver. I was in the ~60k price range, so I was looking at things like the Corvette Stingray, but that car has too many compromises for daily driving.

    The Model 3 accelerates faster from 0-30, and matches it from 0-60. Off the line it feels way snappier and more responsive because it’s electric, and the battery gives it a lower center of gravity, so it’s remarkably good at cornering for a sedan, closer to a sports car than a typical sedan in cornering capability.

    Those aren’t normally considerations for people trying to find a good value commuter car, so you would literally just ignore all those advantages. Yet people don’t criticize Corvette owners for not choosing a Hyundai lol

    On the daily-driving front, Tesla wins out massively over other high-performance cars in that price range: charging at home, never going to a gas station, best-in-class driving automation/assistance software, a simple interior with good control-panel software, and one-pedal driving with regen braking.

    If you’re in the 40k price range for a daily commuter, your criteria will be totally different, and I’m not well versed enough in the normal considerations of that price tier and category to speak confidently to what’s the best value. Tesla does, however, at the very least have a niche in the high-performance sedan market.