• zogwarg@awful.systems · 2 points · 2 days ago

    We have:

    No more sycophancy—now the AI tells you what it believes. […] We get common knowledge, which recently seems like an endangered species.

    Followed by:

    We could also have different versions of articles optimized for different audiences. The question is, how many audiences, but I think that for most articles, two good options would be “for a 12 years old child” and “standard encyclopedia article”. Maybe further split the adult audience to “layman” and “expert”?

    You have got to love the consistency.

    And the accidentally (or not so accidentally?) imperialistic:

    The first idea is translation to languages other than English. Those languages often have fewer speakers, and consequently fewer Wikipedia volunteers. But for AI encyclopedia, volunteers are not a bottleneck. The easiest thing it could do is a 1:1 translation from the English version. But it could also add sources written in the other language, optimize the article for a different audience, etc.

    And also a deep misunderstanding of translation: there is no such thing as a 1:1 translation; it always requires re-interpretation.

  • blakestacey@awful.systems (mod) · 16 points · 3 days ago

    And we don’t want to introduce all the complexities of solving disagreements on Wikipedia.

    wait for it

    There should also be some kind of support for multiple AIs disagreeing with each other.

    • scruiser@awful.systems · 6 points · 2 days ago

      And we don’t want to introduce all the complexities of solving disagreements on Wikipedia.

      What they actually mean is that they don’t want them to be solved in favor of the dgerad type of people… like (reviewing the exposé on LessWrong)… demanding quality sources that aren’t HBD pseudoscience journals or right-wing rags.

  • ignirtoq@fedia.io · 19 points · 3 days ago

    Now, I don’t intend this to be some kind of “computers vs humans” competition; of course that wouldn’t be fair, considering that the computers can read and copy the human Wikipedia.

    I like how he thinks a “computers vs humans” competition in generating an encyclopedia of knowledge (which necessitates true information) is unfair because AI has the advantage. They truly don’t understand that chatbots don’t have a concept of “fact” as we define it, so this task is impossible for LLMs.

  • blakestacey@awful.systems (mod) · 14 points · 3 days ago

    From the comments:

    I wonder if you could do something similar with all peer-reviewed scientific publications, summarizing all findings into an encyclopedia of all scientific knowledge.

    True believers are fucked in the head.

    • CinnasVerses@awful.systems · 5 points · 3 days ago

      The owner of Birdsite tweeted the same idea “make chatbots write a universal encyclopedia to free us from human experts” a year or so ago.

  • jackr@lemmy.dbzer0.com (OP) · 15 points · 3 days ago

    also lmao @ one of the comments:

    Maybe future versions of AI chatbots could use something like this as a shared persistent memory that all chatbot instances could reference as a common ground truth. The only trick would be getting the system to use sound epistemology and reliably report uncertainty instead of hallucinations.

    This will fix all problems with AI if only we fix the fundamental flaw in the architecture, guys!

    • scruiser@awful.systems · 6 points · 2 days ago

      I keep seeing this sort of thinking on /r/singularity: people who are sure LLMs will be great once they have memory, ground-truth factual knowledge, or some other feature that the promptfarmers have in fact already tried (and failed) to add via fancier prompting (i.e. RAG) or fine-tuning, and that would require a massive reinvention of the entire paradigm to actually fix. That, or they describe what basically amounts to a reinvention of expert systems like Cyc.
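      For what it’s worth, RAG in its basic form really is just fancier prompting: retrieve a few documents, paste them into the prompt, and hope the model sticks to them. A minimal sketch, with a hypothetical toy corpus and keyword-overlap scoring standing in for the embedding search real systems use:

      ```python
      import re

      # Hypothetical toy knowledge base; real RAG retrieves from a large corpus.
      CORPUS = [
          "Wikipedia is written and maintained by volunteer editors.",
          "LLMs generate text by predicting likely next tokens.",
          "Cyc is a long-running symbolic-AI expert-system project.",
      ]

      def tokens(text: str) -> set[str]:
          """Lowercased word set, punctuation stripped."""
          return set(re.findall(r"\w+", text.lower()))

      def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
          """Rank documents by word overlap with the query (embedding similarity in real systems)."""
          q = tokens(query)
          return sorted(corpus, key=lambda d: -len(q & tokens(d)))[:k]

      def build_prompt(query: str, corpus: list[str]) -> str:
          """The entire 'augmentation' step: retrieved text is just prepended to the question."""
          context = "\n".join(retrieve(query, corpus))
          return f"Use only this context:\n{context}\n\nQuestion: {query}"

      prompt = build_prompt("Who maintains Wikipedia?", CORPUS)
      ```

      Nothing in this pipeline gives the model a concept of ground truth; it only biases generation toward whatever text the retriever happened to surface, which is the commenter’s point.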

  • zbyte64@awful.systems · 8 points · 3 days ago

    Normally when you design products you think of your users’ needs, but the author is too brilliant for that and designs products by writing a fanfic “will it blend?” episode.

  • Soyweiser@awful.systems · 8 points · edited · 3 days ago

    I have some thoughts on this plan. Say I make a new video game and use AI to write the articles for it. Where would the AI get the data from? And how do you prevent the AI from using slop as a source? (Just listing it as “a problem” will not fix it; it undermines the fundamentals of your whole project.)

    Also, “sounds kinda cool” — really? Cool? Have you seen the current crop of LLMs? Did you ask an LLM if this was a good idea first or something?

  • A Wild Mimic appears!@lemmy.dbzer0.com · 5 points · edited · 3 days ago

    This… is a genuine thought, but that’s all it has going for it. Not every thought is worth an essay.

    But I actually read it halfway, just to humor the author.