
Perhaps nothing has defined higher education over the past two decades more than the rise of computer science and STEM. Since 2016, enrollment in undergraduate computer-science programs has increased nearly 49 percent. Meanwhile, humanities enrollments across the United States have withered rapidly, in some cases shrinking entire departments out of existence.

But that was before the age of generative AI. ChatGPT and other chatbots can do more than compose full essays in an instant; they can also write code in any number of programming languages. You can’t just type “make me a video game” into ChatGPT and get something that’s playable on the other end, but many programmers have now developed rudimentary smartphone apps coded by AI. In the ultimate irony, software engineers helped create AI, and now they are the American workers who think it will have the biggest impact on their livelihoods, according to a new survey from Pew Research Center. So much for learning to code.

Fiddling with the computer-science curriculum still might not be enough to maintain coding’s spot at the top of the higher-education hierarchy. “Prompt engineering,” the practice of crafting the text fed to large language models to steer their responses, has already surfaced as a lucrative job option, and one perhaps better suited to English majors than computer-science grads.

The potential decline of “learn to code” doesn’t mean that the technologists are doomed to become the authors of their own obsolescence, nor that the English majors were right all along (I wish). Rather, the turmoil presented by AI could signal that exactly what students decide to major in is less important than an ability to think conceptually about the various problems that technology could help us solve.

  • colonial@lemmy.world · 31 points · 9 months ago

    > After all, the discipline has always been about more than just learning the ropes of Python and C++. Identifying patterns and piecing them together is its essence.

    Ironic, considering LLMs can’t fucking do that. All they do is hallucinate the statistically likely answer to your prompt, with some noise thrown in. That works… okay at small scales (but even then, I’ve seen it produce some hideously unsound C functions) and completely falls apart once you increase the scope.
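
    For instance, here’s a deliberately broken function in the style of what I’ve seen it produce. (A made-up but representative example, not verbatim model output.)

    ```c
    #include <stdio.h>

    /* Unsound: returns a pointer to a stack-allocated buffer that dies the
     * moment the function returns. Any later use of the result is undefined
     * behavior -- yet it often "works" in a quick smoke test, which is
     * exactly how this kind of bug slips through review. */
    char *make_greeting(const char *name) {
        char buf[64];
        snprintf(buf, sizeof buf, "Hello, %s!", name);
        return buf; /* dangling pointer into a dead stack frame */
    }

    int main(void) {
        printf("%s\n", make_greeting("world")); /* UB: reads a dead frame */
        return 0;
    }
    ```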

    Short of true AGI, automatically generating huge chunks of your code will never end well. (See this video for a non-AI example. I give it two years tops before we see it happen with GPT.)

    Also… not hating on English majors, but the author has no idea what they’re talking about and is just regurgitating AI boosterism claims.

    • loobkoob@kbin.social · 7 points · 9 months ago

      > All they do is hallucinate

      I read an article a couple of months ago about AI usage in geolocation (link because it’s interesting, even though it’s not necessarily relevant). In it, they brought up a quote from a computer scientist / AI specialist who said he preferred the word “confabulate” to describe what happens with AI, rather than “hallucinate.”

      Confabulation: a type of memory error in which gaps in a person’s memory are unconsciously filled with fabricated, misinterpreted, or distorted information.

      I agree with the guy that it’s a slightly better term for it, but I also just think it’s such a fun word that it’s too good not to share!

    • varsock@programming.dev · 5 points · 9 months ago

      I agree with you.

      For discussion’s sake, I’ll add that using AI has made me very fast at creating “units of code” or restructuring. I ask it to solve a narrowly scoped problem with explicit constraints (which condition variable to use, which parameters, the initial conditions), and it does. I still have the experience to validate the output by reading it and to piece the units of code together, but my productivity has nearly tripled. (Rough sketch of what I mean below.)

      I don’t write comments anymore. I write what I need, ask the AI to comment the function, and maybe add something project-specific myself.

      And getting started with new technologies is easier, as long as, like you said, you keep the scope small.
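
      To make that concrete, here’s a rough sketch of the kind of narrowly scoped unit I mean. The names (gate_t and friends) are mine, and the prompt is essentially the constraints: “using a condition variable ready_cv guarded by lock, block until ready becomes true; ready starts out false.” Then I review what comes back against exactly that spec:

      ```c
      #include <pthread.h>
      #include <stdbool.h>

      /* The "unit": a one-shot gate. Initial condition: closed (ready = false). */
      typedef struct {
          pthread_mutex_t lock;
          pthread_cond_t  ready_cv;
          bool            ready;
      } gate_t;

      void gate_init(gate_t *g) {
          pthread_mutex_init(&g->lock, NULL);
          pthread_cond_init(&g->ready_cv, NULL);
          g->ready = false;
      }

      /* Block the caller until another thread opens the gate. */
      void gate_wait(gate_t *g) {
          pthread_mutex_lock(&g->lock);
          while (!g->ready)                /* loop guards against spurious wakeups */
              pthread_cond_wait(&g->ready_cv, &g->lock);
          pthread_mutex_unlock(&g->lock);
      }

      /* Open the gate and wake every waiter. */
      void gate_open(gate_t *g) {
          pthread_mutex_lock(&g->lock);
          g->ready = true;
          pthread_cond_broadcast(&g->ready_cv);
          pthread_mutex_unlock(&g->lock);
      }
      ```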

      AI will not replace programmers. Programmers that use AI will replace programmers who don’t.

      • psivchaz@reddthat.com · 2 points · 9 months ago

        I think this is generally true, probably for the rest of my career. I don’t think it is true forever. Asking “what happens when this stops being a career?” or at least “what happens when there are fewer jobs to go around?” is important, and something I would rather we all sort out long before I need the answer.

        • varsock@programming.dev · 1 point · 9 months ago

          Valid point. Again, for the sake of discussion: technology evolves quickly, and new tools are made out of the shortcomings of others. If Docker evolves and a new tool, “Kocker,” is born, AI will need training data drawn from best practices, which will have to be generated by people.

          This could unfold in many ways. For one, there could be a small group of people pushing the technology forward. But people will still need to be around to create requirements, and that takes experience.

          More likely, the majority of engineers will just move up to a higher level of abstraction and let new tools handle the lower layers, while innovation at the lower levels of abstraction is done by a small group of people with niche skills (take CPUs, for example). That’s the trend we’ve seen historically: assembly -> compilers -> low-level languages -> interpreted languages -> scaling bare-metal systems -> distributed systems -> virtual machines -> automation -> microservices, and so on.