Most of the article is well-trodden ground if you’ve been following OpenAI at all, but I thought this part was noteworthy:

Some members of the OpenAI board had found Altman an unnervingly slippery operator. For example, earlier this fall he’d confronted one member, Helen Toner, a director at the Center for Security and Emerging Technology, at Georgetown University, for co-writing a paper that seemingly criticized OpenAI for “stoking the flames of AI hype.” Toner had defended herself (though she later apologized to the board for not anticipating how the paper might be perceived). Altman began approaching other board members, individually, about replacing her. When these members compared notes about the conversations, some felt that Altman had misrepresented them as supporting Toner’s removal. “He’d play them off against each other by lying about what other people thought,” the person familiar with the board’s discussions told me. “Things like that had been happening for years.”

  • self@awful.systemsM

    The Copilots let users pose questions to software as easily as they might to a colleague—“Tell me the pros and cons of each plan described on that video call,” or “What’s the most profitable product in these twenty spreadsheets?”—and get instant answers, in fluid English.

    it’s very funny to me that all the copilot examples this article breathlessly relates are things LLMs are absolutely fucking terrible at

    • self@awful.systemsM

      “Perhaps to a greater extent than any technological revolution preceding it, A.I. could be used to revitalize the American Dream,” Scott has written. He felt that a childhood friend running a nursing home in Virginia could use A.I. to handle her interactions with Medicare and Medicaid, allowing the facility to concentrate on daily care. Another friend, who worked at a shop making precision plastic parts for theme parks, could use A.I. to help him manufacture components. Artificial intelligence, Scott told me, could change society for the better by turning “zero-sum tradeoffs where we have winners and losers into non-zero-sum progress.”

      Nadella read the memo and, as Scott put it, “said, ‘Yeah, that sounds good.’ ” A week later, Scott was named Microsoft’s chief technology officer.

      christ, I got bored and tapped out too soon. it’s fucking unsettling how hard this article tries to dodge around how insane all of this is — how much it normalizes these bad ideas wrapped in worse nationalism on Scott and Microsoft’s part, and how it tries to excuse OpenAI being run and staffed by cultists as them being problematically enthusiastic or whatever

      • raktheundead@fedia.io

I agree; the article is way too credulous about the people working with and associated with OpenAI, and it doesn’t delve early or deeply enough into the dangerous weirdness of the organisation or the EA/rationalist crowd that have been leading it.

  • swlabr@awful.systems

    Read the whole damn thing. Near the end:

    One of the last times I spoke to Scott, before the Turkey-Shoot Clusterfuck began, his mother had been in the hospital half a dozen times in recent weeks. She is in her seventies and has a thyroid condition, but on a recent visit to the E.R. she waited nearly seven hours, and left without being seen by a doctor. “The right Copilot could have diagnosed the whole thing, and written her a prescription within minutes,” he said. But that is something for the future. Scott understands that these kinds of delays and frustrations are currently the price of considered progress—of long-term optimism that honestly contends with the worries of skeptics.

Either Scott is swimming in an Olympic-sized pool of AI kool-aid, constantly thinking about how else AI can invade aspects of his personal life, or he’s just a normal exec willing to cynically spin every aspect of his personal life in service of the grift. It’s probably just the latter.

    • self@awful.systemsM

      there’s so much wrong with that story — wealthy folks’ families (that don’t get disowned) have doctors on call and don’t go to the ER for issues that can be solved with medication. also, all the thyroid patients I know are fairly close with their endocrinologist and are utterly paranoid about their meds. maybe this is one of those stories where Scott’s actual mom waiting 30 minutes to see her endo turned, out of convenience for the point he wanted to make, into someone’s theoretical, more relatable mom who can’t afford medical care waiting 7 hours and not getting it

      • swlabr@awful.systems

        Aw dawg, that’s on me for supposing for a moment that his story had truth to it. I did think it strange that this rich dude’s mum had to go to the ER for something, but decided to apply good faith.

So now it’s even more cynical: Scott, who lives a life of convenience, doesn’t have any stories of his own to parlay into AI hype, so he has to wear someone else’s suffering as stolen valor.

  • VubDapple@lemmy.world

    It doesn’t take a psychologist to appreciate this guy’s narcissism and sociopathy. That’s a feature in a CEO apparently, not a bug.