• Cratermaker@discuss.tchncs.de · 24 points · 8 months ago

    I don’t speak C, but isn’t this an extreme simplification of the issue? I thought memory could be abused in an almost infinite number of subtle ways outside of allocating it wrong. For example, improperly sanitized string inputs. I feel like if it were this easy, it would have been done decades ago.
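
    For illustration (a hedged sketch, not from the article): both functions below compile without warnings, and the difference only shows up when an over-long input arrives at runtime.

```c
#include <stdio.h>
#include <string.h>

/* Unsafe: strcpy() copies until the NUL byte, ignoring the size of
 * dst. Any source longer than dst overflows the buffer (CWE-121). */
void copy_unchecked(char *dst, const char *src) {
    strcpy(dst, src);   /* no bounds check at all */
}

/* Safer: bound the copy to the destination size. snprintf() always
 * NUL-terminates and silently truncates instead of overflowing. */
void copy_bounded(char *dst, size_t dstsz, const char *src) {
    snprintf(dst, dstsz, "%s", src);
}
```

    Nothing in the language itself distinguishes the two; that is exactly why the problem goes beyond "allocating it wrong".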

    • WolfLink@lemmy.ml · 8 points · 8 months ago

      Buffer overflows are far from the only way for improperly sanitized inputs to be a problem
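
      For example, the classic format-string bug needs no overflow at all (an illustrative sketch; the function name is made up):

```c
#include <stdio.h>
#include <string.h>

/* Vulnerable pattern (never do this with untrusted input): passing
 * user data as the *format* string. "%x" specifiers in the input can
 * leak stack memory and "%n" can even write to it (CWE-134), with no
 * buffer overflow involved:
 *
 *     printf(user_input);   // user input interpreted as a format
 *
 * Safe pattern: the format is a constant; user input is only data. */
int render_safe(char *out, size_t outsz, const char *user_input) {
    return snprintf(out, outsz, "%s", user_input);
}
```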

    • lysdexic@programming.devOP · +1 / -8 · 8 months ago

      I think this can be explained by underlining the differences between could, would, and should.

      The blog points out that at least some C compilers already offer the necessary and sufficient tools that characterize “memory-safe” languages, and illustrates this with examples. This isn’t new. However, just as “memory-safe” languages enforce narrow coding styles through a happy path that is expected to prevent the introduction of some classes of vulnerabilities, leveraging these compiler features in C projects requires the same kind of approach.
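
      For a concrete (illustrative, not taken from the blog) example of what opting in looks like with today’s GCC/Clang:

```c
/* Illustrative build line -- these flags are real in recent
 * GCC/Clang, but none are enabled by default; a project must opt in:
 *
 *   cc -O2 -Wall -Wextra -D_FORTIFY_SOURCE=2 \
 *      -fstack-protector-strong -fsanitize=address,undefined app.c
 */
#include <stdlib.h>

/* Returns the last element of arr. If a caller ever passes an n one
 * past the real size, -fsanitize=address aborts at runtime with a
 * buffer-overflow report; without it, the bad read is silent UB. */
int read_last(const int *arr, size_t n) {
    return arr[n - 1];
}
```

      The checks exist either way; the catch is that a codebase has to adopt the flags and fix everything they report.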

      This isn’t new or unheard of. Some C++ frameworks are also known for supporting their own memory management and object ownership strategies, but you need to voluntarily adhere to them.

  • ExperimentalGuy@programming.dev · +13 / -1 · 8 months ago

    The core principle of computer-science is that we need to live with legacy, not abandon it.

    The problem isn’t a principle of computer science, but simply one of safety. Also, who said this is a principle of computer science?

    • lysdexic@programming.devOP · +2 / -6 · edited · 8 months ago

      The problem isn’t a principle of computer science, but simply one of safety.

      I think you missed the point entirely.

      You can focus all you want on artificial ivory-tower scenarios, such as a hypothetical ability to rewrite everything from scratch with the latest and greatest tech stacks. Back in the real world, that is a practical impossibility in virtually all scenarios, and a notorious project killer.

      In addition, the point stressed in the article is that you can add memory safety features even to C programs.

      Also, who said this is a principle of computer science?

      Anyone who devotes any resources to learning software engineering.

      Here’s a somewhat popular essay on the subject:

      https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/

  • snowe@programming.devM · +9 / -1 · 8 months ago

    Forking is a foolish idea. The core principle of computer-science is that we need to live with legacy, not abandon it.

    what a crazy thing to say. The core principle of computer-science is to continue moving forward with tech, and to leave behind the stuff that doesn’t work. You don’t see people still using Fortran by choice; you see them living with it because they’re completely unable to move off of it. If you’re able to abandon bad tech, then the proper decision is to do so. OP keeps linking Joel, but Joel doesn’t say never to rewrite stuff; he says not to rewrite large-scale commercial applications that currently work. C clearly isn’t working for a lot of applications that need memory safety. The logic doesn’t apply there. It also clearly doesn’t apply when you can write stuff in a memory-safe language alongside existing C code without rewriting any C code at all.

    And there’s no need. Modern C compilers already have the ability to be memory-safe, we just need to make minor – and compatible – changes to turn it on. Instead of a hard-fork that abandons legacy system, this would be a soft-fork that enables memory-safety for new systems.

    this has nothing to do with the compiler, this has to do with writing ‘better’ code, which has proved impossible over and over again. The problem is the programmers and that’s never going to change. Using a language that doesn’t need this knowledge is the better choice 100% of the time.

    C devs have been claiming ‘the language can do this, we just need to implement it’ for decades now. At this point it’s literally easier to slowly port to a better language than it is to try and ‘fix’ C/C++.

    • suy@programming.dev · +3 / -1 · 8 months ago

      this has to do with writing ‘better’ code, which has proved impossible over and over again

      I can’t speak for C, as I don’t follow it that much, but for C++, this is just not fair. It has been proven repeatedly that it can be done better, and much better. Each iteration has made so many things simpler, more productive, and also safer. Now, there are two problems with what I just said:

      • That it has been done safer doesn’t mean that everyone makes good use of it.
      • That it has been done safer doesn’t mean that everything is fixable, or that it’s on the same level as other, newer languages.

      If that last part is what you mean, fine. But the way you phrased it (and that I quoted) is just not right.

      At this point it’s literally easier to slowly port to a better language than it is to try and ‘fix’ C/C++.

      Surely not for everything. Of course I see great value if I can stop depending on OpenSSL and move to a better library written in a better language. Seriously looking forward to the day when I see dynamic libraries written in Rust in my package manager. But I’d like to see the plan for moving a large stack of C and C++ code, like a Linux distribution, to some “better language”. I work every day on such a stack (e.g. KDE Neon in my case, but applicable to any other typical distro with KDE or GNOME), and deploy to customers on such a stack (on embedded Linux like Yocto). Will the D-Bus daemon be written in Rust? Perhaps. Systemd? Maybe. NetworkManager, Udisks, etc.? Who knows. All the plethora of C and C++ applications that we use every day? Doubtful.

      • snowe@programming.devM · 6 points · 8 months ago

        I can’t speak for C, as I don’t follow it that much, but for C++, this is just not fair. It has been proven repeatedly that it can be done better, and much better. Each iteration has made so many things simpler, more productive, and also safer. Now, there are two problems with what I just said:

        That comment was not talking about programming languages; it was talking about humans’ inability to write perfect code. Humans are unable to solve problems correctly 100% of the time. So if the language doesn’t do it for them, then it will not happen. See Java for a great example of this. Java has NullPointerExceptions absolutely everywhere. So a bunch of different groups created annotations that would give you warnings, and even fail the compile if something was mismatched or a null check was missed. But if you miss a single @NotNull annotation anywhere in the code, then suddenly you can get null errors again. It’s not enforced by the type system, and as a result humans can forget. Kotlin came along and ‘solved it’ at the type level, where types are nullable or non-nullable. But, hilariously enough, you can still get NPEs in Kotlin because it’s commonly used to interop with Java.

        My point is that C/C++ can’t solve this at a fundamental level, the same way Kotlin and Java cannot solve this. Programmers are the problem, so you have to have a system that was built from the ground up to solve the problem. That’s what we are getting in modern day languages. You can’t just tack the system on after the fact, unless it completely removes any need for the programmer to do literally anything, because the programmer is the problem.

        Surely not for everything. Of course I see great value if I can stop depending on OpenSSL and move to a better library written in a better language. Seriously looking forward to the day when I see dynamic libraries written in Rust in my package manager. But I’d like to see the plan for moving a large stack of C and C++ code, like a Linux distribution, to some “better language”. I work every day on such a stack (e.g. KDE Neon in my case, but applicable to any other typical distro with KDE or GNOME), and deploy to customers on such a stack (on embedded Linux like Yocto). Will the D-Bus daemon be written in Rust? Perhaps. Systemd? Maybe. NetworkManager, Udisks, etc.? Who knows. All the plethora of C and C++ applications that we use every day? Doubtful.

        I’m not talking about whole scale rewrites. I’m talking about what Linux is already doing with writing new code in Rust, or small portions of performance critical code in a memory safe language. I’m not talking about like what Fish Shell did and rewrote the whole codebase in one go, because that’s not realistic. But slowly converting an entire codebase over? That’s incredibly realistic. I’ve done so with several 250k+ line Java codebases, converting them to Kotlin. When languages are built to be easy to move to (Rust, Kotlin, etc), then migrating to them slowly over time where it matters is easily attainable.

    • lysdexic@programming.devOP · +2 / -7 · 8 months ago

      The core principle of computer-science is to continue moving forward with tech, and to leave behind the stuff that doesn’t work.

      I’m not sure you realize you’re proving OP’s point.

      Rewriting projects from scratch by definition represents a big step backwards: you’re wasting resources to deliver barely working projects that have a fraction of the features the legacy production code already delivered and had stabilized.

      And you have yet to present a single upside, if there even is one.

      At this point it’s literally easier to slowly port to a better language than it is to try and ‘fix’ C/C++.

      You are just showing the world you failed to read the article.

      Also, it’s telling that you opt to frame the problem as "a project is written in C instead of <insert pet language>" instead of actually securing and hardening existing projects.

      • tatterdemalion@programming.dev · 6 points · edited · 8 months ago

        And you have yet to present a single upside, if there even is one.

        This is a flippant statement, honestly, as it disregards the premise of the discussion. It’s memory safety. That’s the upside. The author even linked to memory safety bugs in OpenSSL. They might still exist elsewhere. (I realize there is a narrow class of memory bugs that C compilers understand, but it’s just that, a narrow class.) We have hardly any way of knowing whether or not they exist without a significant testing effort that is not likely to happen. And it would be fighting a losing battle anyway, because someone is writing new C code to maintain these legacy systems.
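
        Schematically (this is not OpenSSL’s actual code, just the shape of the Heartbleed class of bug), the whole vulnerability is one missing comparison:

```c
#include <string.h>

/* Schematic of a Heartbleed-class over-read (NOT OpenSSL's actual
 * code): 'claimed_len' comes from the peer; 'actual_len' is how many
 * bytes really arrived. */

/* Buggy: trusts the peer's length field, so a peer claiming more
 * than it sent gets adjacent memory echoed back. Compiles cleanly. */
size_t echo_buggy(char *out, const char *payload,
                  size_t claimed_len, size_t actual_len) {
    (void)actual_len;                   /* never consulted */
    memcpy(out, payload, claimed_len);  /* over-read when claimed > actual */
    return claimed_len;
}

/* Fixed: never copy more than was actually received. */
size_t echo_checked(char *out, const char *payload,
                    size_t claimed_len, size_t actual_len) {
    size_t n = claimed_len < actual_len ? claimed_len : actual_len;
    memcpy(out, payload, n);
    return n;
}
```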

        • lysdexic@programming.devOP · +1 / -8 · 8 months ago

          This is a flippant statement, honestly, as it disregards the premise of the discussion. It’s memory safety.

          You’re completely ignoring even the argument you’re supposedly commenting on, let alone the point it makes. You’re parroting cliches while purposely turning a blind eye to the point made in the blog: that yes, C can be memory safe. Likewise, Rust also has CVEs due to memory safety bugs. So, beyond these cliches, what exactly are you trying to argue?

      • snowe@programming.devM · +6 / -1 · 8 months ago

        Rewriting projects from scratch by definition represents a big step backwards: you’re wasting resources to deliver barely working projects that have a fraction of the features the legacy production code already delivered and had stabilized.

        Joel’s point was about commercial products, not programming languages. I’m not the one misunderstanding here. When people talk about using Rust, they’re not talking about rewriting every single thing ever written in C/C++. It’s about leaving C/C++ behind and moving on to something that doesn’t have the issues of the past. This is not about large-scale commercial rewrites. It’s about C’s inability to deal with these problems.

        You are just showing the world you failed to read the article.

        sure thing bud.

        Also, it’s telling that you opt to frame the problem as "a project is written in C instead of <insert pet language>" instead of actually securing and hardening existing projects.

        I didn’t say that and you know it. Also it’s quite telling (ooh, I can say the same things you can) that you think “better language” means “pet language”. Actually laughable.

  • tatterdemalion@programming.dev · +9 / -1 · 8 months ago

    C compilers can’t tell you if your code has data races. That is one of the major selling points of Rust. How can the author claim that these features already exist in C compilers when they simply don’t?
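
    For example (a minimal sketch using POSIX threads; compile with -pthread): GCC and Clang accept the racy function below without a single diagnostic, and even -fsanitize=thread only reports the race at runtime, if a test happens to trigger it, whereas rustc rejects the equivalent unsynchronized sharing at compile time.

```c
#include <pthread.h>

static long counter;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Data race: two threads run this concurrently with no
 * synchronization. C compilers accept it without any diagnostic. */
void *bump_racy(void *arg) {
    for (int i = 0; i < 100000; i++)
        counter++;              /* unsynchronized read-modify-write */
    return arg;
}

/* The fix: guard the shared counter with a mutex. */
void *bump_locked(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return arg;
}

/* Run the given worker on two threads and return the final count. */
long run_pair(void *(*fn)(void *)) {
    pthread_t a, b;
    counter = 0;
    pthread_create(&a, NULL, fn, NULL);
    pthread_create(&b, NULL, fn, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```

    And note that ThreadSanitizer is detection, not prevention: it only flags races on executions that actually hit them.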

    • Croquette@sh.itjust.works · 2 points · edited · 8 months ago

      I’m a C dev and I don’t care if C ever goes extinct, but the MCU toolchains are pretty much always C/C++.

      For a personal project, I wouldn’t mind setting up a Rust compiler, but in a work setting, the uncertainty of unofficial crates (edit: I don’t sleep enough) is a no-go for a product that needs to be maintained for years to come.

      And even when an official toolchain is made, it will take a while to spread.

      But I really like all the features newer languages have over C, so if I get the opportunity to change languages, I will take it.

      • Billegh@lemmy.world · 4 points · 8 months ago

        I’m merely commenting on the fact that every time this comes up, it’s just a chant of “skill issue” over and over again. The problem is that it’s hard to do it correctly, and there are so many more ways to do it incorrectly.

      • infamousta@sh.itjust.works · 2 points · edited · 8 months ago

        I know what you mean. I’ve been doing higher level development for a couple decades and only now really getting into embedded stuff the past year or two. I dislike a lot of what C makes necessary when dealing with memory and controlling interrupts to avoid data races.

        I see Rust officially supported on newer ARM Cortex processors, and that sounds like it would be an awesome environment. But I’m not about to stake these projects on a hobbyist library for the 8-bit AVR processors I’m actually having to use.

        Unfortunately I just have to suck it up and understand how the ECU works at the processor/instruction level and it’s fine until there are better tools (or I get to use better processors).

        ETA: I’ve also thought that most of the AVR headers are just register definitions and simple macros; maybe it wouldn’t be so bad to convert them to Rust myself? But then it’s my library that’s probably broken lol

  • porgamrer@programming.dev · +9 / -1 · 8 months ago

    Humans can be immortal

    Don’t believe me? I went to my local old folks home and found some who are older than 50!!