21 comments

  • crazygringo 22 hours ago
    > That means the article contained a plausible-sounding sentence, cited to a real, relevant-sounding source. But when you read the source it’s cited to, the information on Wikipedia does not exist in that specific source. When a claim fails verification, it’s impossible to tell whether the information is true or not.

    This has always been a rampant problem on Wikipedia. I can't find any indication that it has increased recently, because they're only investigating articles flagged as potentially AI. So what's the control baseline rate here?

    Applying correct citations is actually really hard work, even when you know the material thoroughly. I assume people write what they know from their field, then add the minimum number of plausible citations after the fact; most readers never check them, and everyone seems to accept that it's better than nothing. But it also depends on how niche the page is and which field it's in.

    • crabmusket 21 hours ago
      There was a fun example of this that happened live during a recent episode of the Changelog[1]. The hosts noted that they were incorrectly described as being "from GitHub" with a link to an episode of their podcast which didn't substantiate that claim. Their guest fixed the citation as they recorded[2].

      [1]: https://changelog.com/podcast/668#transcript-265

      [2]: https://en.wikipedia.org/w/index.php?title=Eugen_Rochko&diff...

    • gonzobonzo 20 hours ago
      The problems I've run into are twofold: people giving fake citations (the citation doesn't actually justify the claim being made in the article), and people giving real citations where, if you dig into the source, you realize it's coming from a crank.

      It's a big blind spot among the editors as well. When this problem was brought up here in the past, with people saying that claims on Wikipedia shouldn't be believed unless people verify the sources themselves, several Wikipedia editors came in and said this wasn't a problem and Wikipedia was trustworthy.

      It's hard to see it getting fixed when so many don't see it as an issue. And framing it as a non-issue misleads users about the accuracy of the site.

      • mikkupikku 8 hours ago
        A common source of error is plot summaries in articles for movies. The summaries are very often written by people who didn't watch the movie but are trying to reassemble the plot like a jigsaw puzzle from little bits they glean from written reviews, or worse, just writing down whatever they assume the plot to be. Very often it seems like the fuck-ups came from people who either weren't watching the movie carefully, were just listening to the dialogue while not watching the screen, or simply lacked media literacy.

        Example [SPOILERS]: the page for the movie Sorcerer claims that rough terrain caused a tire to pop. The movie never says that; it shows the tire popping (which results in the truck's cargo detonating). The next scene reveals the cause, but only to those paying attention: the bloody corpse of a bandito lying next to a submachine gun is shown in the rubble beside the road, and more banditos are there, very upset and quite nervous, to hijack the second truck. The obvious inference is that the first truck's tire was shot by the banditos to hijack/rob the truck. The tire didn't pop from rough terrain, and the movie never says it did; that's just a conclusion you could reach by not paying attention to the movie.

        • shmeeed 6 hours ago
          To me that sounds a bit like summaries made on the basis of written movie scripts. A long time ago, I read a few scripts for movies I had never watched, and that's exactly the outcome: you get a rough idea of what it's about and even recognise some memorable quotes, but there's little cohesion to it, for lack of all the important visual aspects and clues that tie it all together.
      • Aurornis 5 hours ago
        > The problems I've run into are twofold: people giving fake citations (the citation doesn't actually justify the claim being made in the article), and people giving real citations where, if you dig into the source, you realize it's coming from a crank.

        Citations have become heavily weaponized across a lot of spaces on the internet. There was a period when we all learned that citations correlated with higher-quality arguments, and Wikipedia's [Citation Needed] even became a meme.

        But the quacks and the agenda pushers realized that during casual internet browsing, readers won't actually read, let alone scrutinize, the citation links, so it didn't matter what you linked to. As long as the domain and title looked relevant, it would be assumed correct. Anyone who did read the links might take so much time that the comment section would be saturated with competing comments by the time they could respond with a real critique.

        This has become a real problem on HN, too. Often when I see a comment with a dozen footnoted citations from PubMed, they're either misunderstanding what the study says, or sometimes they even say the opposite of what the commenter claims.

        The strategy is to just quickly search PubMed or other sources for keywords and then copy those into the post with the HN footnote citation format, knowing that most people won’t read or question it.

      • 6510 13 hours ago
        > but if you dig into the source you realize it's coming from a crank.

        It is a dark Sunday afternoon. Bob Park is sitting on his sofa as usual, drunk as usual, when suddenly the TV reveals to him that there is something called the Paranormal (Twilight Zone music)... instantly Bob knows there are no such things and adds a note to the incomprehensible mess of notes that will one day become his book. He downs one more Budweiser. In the distance lightning strikes a tree; Bob shouts "You don't scare me!" and shakes his fist. After a few more beers a miracle of inspiration descends and, as if channeling, in the span of 10 minutes he writes notes about Cold Fusion, Alternative Medicine, Faith Healing, Telepathy, Homeopathy, Parapsychology, Zener cards, the tooth fairy and Father Xmas. With much confidence he writes that none of them are real. It's been a really productive afternoon. It reminds him of times long gone, back when he actually published many serious papers. He counts the remaining beers in his cooler and says to himself: in the next book I will need to take on God himself. The world needs to know, God is not real. I too will be the authority on that subject.

        https://en.wikipedia.org/w/index.php?title=Special:WhatLinks...

        • CPLX 6 hours ago
          I'm curious what point you're making here. I don't know anything at all about Bob Park or whether he is a crank. But if you make your career doing the admirable work of debunking pseudo-science and nonsense theories, you would necessarily be linked to in discussions of those theories very, very frequently.

          So maybe that's not a good description of him. But the link you posted is hardly dispositive.

    • chr15m 19 hours ago
      LLMs can add unsubstantiated conclusions at a far higher rate than humans working without LLMs.
      • mikkupikku 8 hours ago
        True, but humans got a 20-year head start, and I am willing to wager the overwhelming majority of extant flagrant errors are due to humans making shit up and no other human noticing and correcting it.

        My go-to example was the SDI page saying that Brilliant Pebbles interceptors were to be made out of tungsten (completely illogical hogwash that doesn't even pass a basic sniff test). This claim was added to the page in February of 2012 by a new Wikipedia user, with no edit note accompanying the change nor any change to the sources and references. It stayed in the article until October 29th, 2025. And of course this misinformation was copied by other people, and you can still find it being quoted, uncited, in other online publications. With an established track record of fact checking this poor, I honestly think LLMs are just pissing into the ocean.

        • asadotzler 7 hours ago
          If LLMs 10X it, as the advocates keep insisting, that means it would only take 2 years to do as much or more damage as humans alone have done in 20.
          • mikkupikku 7 hours ago
            Perhaps so. On the other hand, there's probably a lot of low hanging fruit they can pick just by reading the article, reading the cited sources, and making corrections. Humans can do this, but rarely do because it's so tedious.

            I don't know how it will turn out. I don't have very high hopes, but I'm not certain it will all get worse either.

            • SiempreViernes 1 hour ago
              The entire point of the article is that LLMs cannot produce accurate text, so ironically, your claim that LLMs can illustrates your point about human reliability perfectly.

              I guess the conclusion is that there simply are no avenues to gain knowledge.

      • EA-3167 19 hours ago
        At some point you're forced to either believe that people have never heard of the concept of a force multiplier, or to return to Upton Sinclair's observation about getting people to believe in things that hurt their bottom line.
        • DrewADesign 18 hours ago
          I don’t see why people keep blaming cars for road safety problems; people got into buggy crashes for centuries before automobiles even existed
          • nullsanity 18 hours ago
            Because a difference in scale can become a difference in category. A handful of buggy crashes can be chalked up to operator error, but as the car becomes widely adopted and analysis matures, it becomes clear that the design of the machine and its available use cases have fundamental flaws that cause a higher rate of operator error than desired. Therefore, cars are redesigned to be safer, laws and regulations are put in place, licensing systems are introduced, and traffic calming and road design are considered.

            Hope that helps you understand.

            • DrewADesign 17 hours ago
              Is the sarcasm really that opaque? Who would unironically equate buggy accidents and automobile accidents?
              • obidee2 16 hours ago
                I’d like to introduce you to the internet.

                There’s a reason /s was a big thing, one persons obvious sarcasm is (almost tautologically) another persons true statement of opinion.

              • forgetfreeman 11 hours ago
                How much time have you spent around developers?
    • Wowfunhappy 4 hours ago
      > This has always been a rampant problem on Wikipedia. I can't find any indication that it has increased recently, because they're only investigating articles flagged as potentially AI. So what's the control baseline rate here?

      ...y'know, I don't want to be that guy, but this actually seems like something AI could check for, and then flag for human review.
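
      Even a crude pipeline would go a long way here. A minimal sketch, assuming the openai Python client and a plain HTTP fetch of the cited source; the model name and prompt wording are illustrative, not anything Wikipedia actually runs:

        import requests
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        def check_citation(claim: str, source_url: str) -> str:
            # Fetch the cited source; a real pipeline would need HTML cleanup,
            # paywall handling, and archive fallbacks.
            source_text = requests.get(source_url, timeout=30).text[:20000]
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model name
                messages=[{
                    "role": "user",
                    "content": "Does the source below support this claim? "
                               "Answer SUPPORTED, UNSUPPORTED, or UNCLEAR.\n\n"
                               f"Claim: {claim}\n\nSource:\n{source_text}",
                }],
            )
            return resp.choices[0].message.content.strip()

      Anything that comes back UNSUPPORTED or UNCLEAR would go into a human review queue rather than being auto-reverted.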

    • shevy-java 7 hours ago
      > Applying correct citations is actually really hard work

      Not disagreeing - many existing articles on Wikipedia have barely any references or citations at all, and in some cases the wrong citation or the wrong conclusion. Like when a cited source says water molecules behave oddly and the Wikipedia article concludes that they behave normally.

    • jacquesm 9 hours ago
      Link rot is one problem, and edited source pages are another. You can cite all you want, but if the underlying resource changes, your foundation has just melted away.
      • jayflux 8 hours ago
        Pretty much every citation added to Wikipedia is passed on to the Internet Archive now, either by the editor or automatically later on.

        For news articles especially the recommendation now is to use the archive snapshot and not the url of the page.

        It’s not a perfect solution, but it tries to solve the link rot issue.

    • mmooss 21 hours ago
      When I've checked Wikipedia citations I've found so much brazen deception - citations that obviously don't support the claim - that I don't have confidence in Wikipedia.

      > Applying correct citations is actually really hard work, even when you know the material thoroughly.

      Why do you find it hard? Scholarly references can be sources for fundamental claims, and review articles are a big help too.

      Also, I tend to add things to Wikipedia or other wikis when I come across something valuable rather than writing something and then trying to find a source (which also is problematic for other reasons). A good thing about crowd-sourcing is that you don't have to write the article all yourself or all at once; it can be very iterative and therefore efficient.

      • crazygringo 20 hours ago
        It's not that I personally find it hard.

        It's more like, a lot of stuff in Wikipedia articles is somewhat "general" knowledge in a given field, where it's not always exactly obvious how to cite it, because it's not something any specific person gets credit for "inventing". Like, if there's a particular theorem then sure you cite who came up with it, or the main graduate-level textbook it's taught in. But often it's just a particular technique or fact that just kind of "exists" in tons of places but there's no obvious single place to cite it from.

        So it actually takes some work to find a good reference. Like you say, review articles can be a good source, as can survey articles or books. But it can take a surprising amount of effort to track down a place that actually says the exact thing. Just last week I was helping a professor (a leader in their field!) find a citation, during peer review of their paper, for an "obvious fact" stated in their introduction. It was actually really challenging, like trying to produce a citation for "the sky is blue".

        I remember, years ago, creating a Wikipedia article for a particular type of food in a particular country. You can buy it at literally every supermarket there. How the heck do you cite the food and facts about it? It just... is. Like... websites for manufacturers of the food aren't really citations. But nobody's describing the food in academic survey articles either. You're not going to link to Allrecipes. What do you do? It's not always obvious.

        • Jepacor 6 hours ago
          If you can buy the food at a supermarket, can't you cite a product page? Presumably that would include a description of the product. Or is that not good enough of a citation?
          • crazygringo 1 hour ago
            Retail product listing URLs change constantly. They're not great.

            And then you usually want to describe how the food is used. E.g. suppose it's a dessert that's mainly popular at children's birthday parties. Everybody in the country knows that. But where are you going to find something written that says that? Something that's not just a random personal blog, but an actual published valid source?

            Ideally you can find some kind of travel guide or book for expats or something with a food section that happens to list it, but if it's not a "top" food highly visible to tourists, then good luck.

      • efilife 1 hour ago
        I found several that contradicted the claim they were supposed to support (in popular articles). I will never regain faith in Wikipedia. Being an editor, or just verifying information from Wikipedia, makes you hate it.
      • FranklinJabar 19 hours ago
        [dead]
  • julienchastang 3 hours ago
    "Never copy and paste the output from generative AI chatbots" is mentioned in the article three times. This has been my experience as well. Initial AI output can be stunning until you quickly realize that it is mostly BS, filler and pap. However, I do find LLMs to be really useful for brainstorming, ideation, sounding boards etc.
  • ColinWright 1 day ago
    The title I've chosen here is carefully selected to highlight one of the main points. It comes (lightly edited for length) from this paragraph:

    Far more insidious, however, was something else we discovered:

    More than two-thirds of these articles failed verification.

    That means the article contained a plausible-sounding sentence, cited to a real, relevant-sounding source. But when you read the source it’s cited to, the information on Wikipedia does not exist in that specific source. When a claim fails verification, it’s impossible to tell whether the information is true or not. For most of the articles Pangram flagged as written by GenAI, nearly every cited sentence in the article failed verification.

    • the_fall 22 hours ago
      FWIW, this is a fairly common problem on Wikipedia in political articles, predating AI. I encourage you to give it a try and verify some citations. A lot of them turn out to be more or less bogus.

      I'm not saying that AI isn't making it worse, but bad-faith editing is commonplace when it comes to hot-button topics.

      • mjburgess 21 hours ago
        Any articles where newspapers are the main source are basically just propaganda. An encyclopaedia should not be in the business of laundering yellow journalism into what is supposed to be a tertiary resource. If they banned this practice, that would immediately deal with this issue.
        • the_fall 21 hours ago
          That's not what I'm saying. I mean citations that aren't citations: a "source" that doesn't discuss the topic at all or makes a different claim.
        • mmooss 21 hours ago
          A blanket dismissal is a simple way to avoid dealing with complexity, here both in understanding the problem and in forming solutions. Obviously not all newspapers are propaganda, and at the same time not all can be trusted; not everything in the same newspaper or any other news source is of the same accuracy; nothing is completely trustworthy or completely untrustworthy.

          I think accepting that gets us to the starting line. Then we need to apply a lot of critical thought to sometimes difficult judgments.

          IMHO quality newspapers do an excellent job - generally better than any other category of source on current affairs, but far from perfect. I remember a recent article for which they interviewed over 100 people, got ahold of secret documents, read thousands of pages, consulted experts... That's not a blog post or Twitter take, or even an HN comment :), but we still need to examine it critically to find the value and the flaws.

          • abacadaba 20 hours ago
            > Obviously not all newspapers are propaganda

            citation needed

            • tbossanova 19 hours ago
              There is literally no source without bias. You just need to consider whether you think a source's biases are reasonable or not.
            • troyvit 16 hours ago
              See, you should work for a newspaper. You have the gumption.
        • snigsnog 20 hours ago
          That is probably 95% of wikipedia articles. Their goal is to create a record of what journalists consider to be true.
    • dang 22 hours ago
      Submitted title was "For most flagged articles, nearly every cited sentence failed verification".

      I agree, that's interesting, and you've aptly expressed it in your comment here.

    • chr15m 19 hours ago
      People here are claiming that this is true of humans as well. Apart from the fact that bad content can be generated much faster with LLMs, what's your feeling about that criticism? Is there any measure of how often submissions made unsubstantiated claims before LLMs?

      Thank you for publishing this work. Very useful reminder to verify sources ourselves!

  • wry_durian 22 hours ago
    Note that this article is only about edits made through the Wiki Edu program, which partners with universities and academics to have students edit Wikipedia on course-related topics. It's not about Wikipedia writ large!
    • Jepacor 5 hours ago
      Ah, so when you force students to edit Wikipedia for their courses, you get worse results than from someone editing voluntarily because they're passionate about it. That's... hardly surprising.

      So it's more about how generative AI is a problem in college right now because lazy students are using it to do the work than about Wikipedia itself, I think.

    • ketzu 12 hours ago
      That's interesting, as my first thought reading the comments was "this problem seems very similar to many students writing papers and just finding citations that sound correct".

      Sometimes it's really sad to read what (even PhD-level) students on social media say about their paper-writing practices.

    • tovej 13 hours ago
      I've found Wiki Edu-edited pages with pages of creative-writing exercises. When I read their sources, they were clumsily paraphrasing and misunderstanding the source.

      LLMs definitely fit the use-case of Wiki Edu students, who are just looking to pass a grade, not to look into a topic because of their interest.

  • fernly 16 hours ago
    Set aside the effect within Wikipedia and consider the larger picture: millions of people generating text with LLMs, and at least some of that text being accepted as correct by millions of readers.

    The WikiEdu article clearly demonstrates what everyone should have known already: an LLM has no commitment to the truth. An LLM's only commitment is to correct syntax.

    • Jepacor 5 hours ago
      An LLM's only commitment isn't to correct syntax either. Its only commitment is to popular syntax.

      It happens that what is popular is correct often enough for the whole thing to somewhat work, but I think it's always gonna be brittle.

  • chrisjj 23 hours ago
    So, a small proportion of articles were detected as bot-written, and a large proportion of those failed validation.

    What if in fact a large proportion of articles were bot-written, but only the unverifiable ones were bad enough to be detected?

    • EdwardDiego 22 hours ago
      Human editors, I suspect, would pick up the "tells" of generated text, although as we know, there are a lot of false positives in that space.

      But it looks like Pangram is a text-classifying NN trained using a technique where they get a human to write a body of text on a subject, and then get various LLMs to write bodies of text on the same subject, which strikes me as a good way to approach the problem. Not that I'm in any way qualified to properly understand ML.

      More details here: https://arxiv.org/pdf/2402.14873
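
      In spirit, that mirrored-data setup is just binary classification on paired examples. A toy sketch of the shape of the task, with TF-IDF and logistic regression purely as stand-ins (Pangram's actual model is a neural classifier, per the paper):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Mirrored pairs: for each subject, one human-written text and one
        # LLM-written text on the same topic.
        texts = ["human essay on topic A ...", "LLM essay on topic A ...",
                 "human essay on topic B ...", "LLM essay on topic B ..."]
        labels = [0, 1, 0, 1]  # 0 = human, 1 = LLM

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                            LogisticRegression())
        clf.fit(texts, labels)
        print(clf.predict_proba(["some new article text"])[0][1])  # P(LLM)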

  • candiddevmike 22 hours ago
    I feel like this is such a tragedy of the commons for the LLM providers. Wikipedia probably makes up a huge bulk of their training data, so why taint it? It would be interesting if some kind of "you shall not use our platform on Wikipedia" stance were adopted.
    • ohyoutravel 22 hours ago
      I don’t think it’s the providers doing this, it’s the awful users. They’re doing the same thing on GitHub. It’s maddening.
      • asadotzler 7 hours ago
        I don't think Lockheed Martin or Raytheon are doing this; it's the awful pilots and intercept operators launching missiles into Palestinian homes. I don't think Rostec Corporation is doing this; it's only the grunts on the ground pressing the button, sending heavy munitions into crowds of Ukrainian civilians.

        These mega corporations are entirely free from blame and you're gonna see to it none of us question their role, right?

        • ohyoutravel 5 hours ago
          It’s a bad analogy. In this case Lockheed isn’t building a killer drone and then finding a market for it, nation states are sending requirements to Lockheed based on what they want to do. Hence the label “defense contractor.”

          I think your analogy would hold if slop creators were creating requirements and contracting OpenAI to build the thing that lets them slop edit Wikipedia and GitHub issues. But since they aren’t, this is breaking the analogy.

          You are still within the edit window to change up your analogy (but unfortunately not to completely delete your post), so you have a little time to make it coherent.

          • tehjoker 5 hours ago
            I suspect that this is a very simplistic view of how R&D and the revolving door work.

            For example, previous Secretary of Defense Lloyd Austin was on the board of Raytheon.

            https://www.opensecrets.org/revolving-door/lloyd-austin/summ...

            • ohyoutravel 4 hours ago
              You’re not wrong at all, and I agree, but it was an analogy for someone who was comparing defense contracting companies with a regular saas company. In any case, for these purposes I don’t think we need to address all edge cases.
        • parineum 5 hours ago
          > I don't think Lockheed Martin or Raytheon are doing this, it's the awful pilots and intercept operators launching missiles into Palestinian homes.

          Missiles have a lot of legitimate and good uses. They're sold to the only entity that can buy them, the government, and redistributed from there.

          Missiles will be created because there is financial incentive to do so. If you really want to make the point you're trying to make, at least blame the people who create the financial incentive or the people giving orders. You've omitted the obvious most responsible party.

    • kingstnap 20 hours ago
      Wikipedia having incorrect citations is way older than LLMs. As many other people have pointed out in this thread, if you start pulling at the threads, a lot of what people write starts falling apart.

      It's not even unique to Wikipedia. It's really not difficult to find very misleading statements backed by a citation that doesn't even support the claim when you check the original.

      • acdha 20 hours ago
        This is like saying handing out machine guns is no big change because people have been shooting arrows for a long time. At some point volume becomes the story once it overwhelms the community’s ability to correct errors.
        • parineum 5 hours ago
          > once it overwhelms the community’s ability to correct errors.

          I think the point is that it already has.

    • MattGaiser 22 hours ago
      It would be random individuals.
  • theendisney 16 hours ago
    What would be a truly epic application would be their own chatbot for asking how to apply the edit guidelines. After reading almost all of the guidelines, the talk-page debates, even among experienced editors, looked waaaay off. The pattern of revert first, make up excuses later seems like the worst possible newbie deterrent, when it should be fine to make mistakes. Many such excuses would get debunked by a bot immediately. It simply won't do any favors. If established editors don't like it, they can edit the guidelines.
  • shevy-java 5 hours ago
    Another issue, somewhat indirectly, is Grokipedia. As we now have more and more information, the AI used there deliberately engineers Grokipedia to contain, shall we say, "alternative facts". If you look at Grokipedia, it actually looks visually better than Wikipedia, at least on a smartphone. At the same time it tries to undermine an objective resource, i.e. Wikipedia trying to show accurate information without any "spin". I don't believe that the way AI is used by, e.g., Elon or mega-corporations has purity and truth at heart. We may have to look carefully at what happens to Wikipedia - it almost seems as if the attacks against Wikipedia by AI may not be merely "accidental". (Since it stores a lot of data, AI bots will of course leech off it regularly, but I am talking here about deliberate action by organisations who may dislike democracy, for instance.)
  • shevy-java 7 hours ago
    So, AI spam can degrade quality.

    But ... isn't this with regards to Wikipedia a much more general problem?

    Usually revisions are approved manually by real people. That can already be a negative: it takes a lot of time, and there's no guarantee that new information is true, though old information can be wrong too. To me it seems the problem has much more to do with Wikipedia's own quality-control problems. Yes, AI spam adds to the fatigue here, but if the quality-control steps are bad, then AI spam only makes things worse, and AI spam going away would not mean the quality-control steps had gotten any better. These two issues should be kept separate: Wikipedia needs better quality-control mechanisms in general.

    That also includes existing articles - some are written by people who are experts in the field but who don't really explain anything at all, so those articles appear good yet are virtually useless for 98% of people. I am not saying one should dumb down Wikipedia, but you need to focus primarily on the average person - not stupid, but not a godlike expert either. Explain it to, say, someone around age 18, or perhaps a bit younger.

  • thorum 14 hours ago
    I’m honestly surprised LLMs are still screwing up citations. It does not feel like a harder task than building software or generating novel math proofs. In both those cases, of course, there is a verifier, but self-verification with “Does this text support this claim?” seems like it ought to be within the capabilities of a good reasoning model.

    But as I understand the situation, even the major Deep Research systems still have this issue.
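
    A generation-time version of that check is easy to sketch. Here support_verdict and rewrite_from_source are hypothetical helpers (an NLI model or a second LLM call); nothing below reflects how the Deep Research systems actually work:

      def emit_cited_sentence(sentence: str, source_text: str,
                              max_tries: int = 3):
          # Only publish a cited sentence if a separate "judge" pass agrees
          # the source supports it; otherwise rewrite from the source and retry.
          for _ in range(max_tries):
              if support_verdict(sentence, source_text) == "SUPPORTED":  # hypothetical judge
                  return sentence
              sentence = rewrite_from_source(sentence, source_text)  # hypothetical rewriter
          return None  # drop the claim rather than publish it unverified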

    • 12_throw_away 55 minutes ago
      > LLMs [...] reasoning model

      Found your problem right there

  • arjie 20 hours ago
    > That means the article contained a plausible-sounding sentence, cited to a real, relevant-sounding source. But when you read the source it’s cited to, the information on Wikipedia does not exist in that specific source.

    This happens a lot on Wikipedia. I'm not sure why, but it does and you can see its traces through the Internet as people post the mistaken information around.

    One that took me a little work to fix was pointed out by someone on Twitter: https://x.com/Almost_Sure/status/1901112689138536903

    When I found the source, the twitter poster was correct! Someone had decided to translate "A hundred years ago, people would have considered this an outrage. But now..." as "this function is an outrage" which honestly is ironically an outrageous translation. What the hell dude.

    But it takes a lot of work to clean up stuff like that! https://en.wikipedia.org/w/index.php?title=Weierstrass_funct...

    I had to go find the actual source (not the other 'sources' that repeated off Wikipedia or each other) and then make sure it was correct before dealing with it. A lie can travel halfway around the world...

  • throwaway5465 20 hours ago
    There seems much defensiveness in the comments here along the lines of "not a new thing" and "not unique to LLM/AI".

    It seems to deflect from, even gaslight, TFA.

    > For most of the articles Pangram flagged as written by GenAI, nearly every cited sentence in the article failed verification.

    So why deflect that into other, more convenient pedantry (surely not under the guise tech forums so often adopt)?

    So why the discomfort among part of HN at the assertion that AI is being used for nefarious purposes and the creation of alternate 'truths'?

    • malfist 18 hours ago
      There sure are a lot of green names on this post pushing that agenda. Makes you wonder if it's astroturfing. And why it's necessary - is AI so fragile it can't let any criticism stand unchallenged?
    • emp17344 20 hours ago
      Astroturfing or marketing, I’d guess. I’ve noticed you’re no longer allowed to say negative things about AI here without significant pushback, and I’d bet this isn’t an organic shift in perception.
      • shmeeed 5 hours ago
        Well, I guess these days there's just a sizeable chunk of users on HN that earn their living with AI, one way or another. It's really only natural that some of them get thin-skinned if you shit on their lawn.

        I'm not a fan of certain trends either, but I wouldn't say it's inorganic. It's just a shift in the industry, and humans being human.

      • malfist 18 hours ago
        I've found that people generally reserve downvotes for posts that don't add to the conversation, just like we're supposed to do. It's always been downvote city if you happen to criticize political positions that benefit libertarian technologists. But lately anything critical of AI tends to get a lot of downvotes, even on older posts that you can't find on the front page anymore... It feels inorganic.
        • oblio 12 hours ago
          > It's always been downvote city if you happen to criticize political positions that benefit libertarian technologists.

          This varies wildly by timezone. Usually I get upvoted during European timezones and then brace for the Americans to wake up.

  • simianwords 22 hours ago
    I find it very interesting that the main competitor to Wikipedia, Grokipedia, is taking a 180-degree approach by being AI-first.
    • ktzar 22 hours ago
      I didn't know about Grokipedia. I just opened an article in it about Spain, scrolled to a random paragraph, and the information in it is plain wrong:

      From https://grokipedia.com/page/Spain#terrain-and-landforms > Spain's peninsular terrain is dominated by the Meseta Central, a vast interior plateau covering about two-thirds of the country's land area, with elevations ranging from 610 to 760 meters and averaging around 660 meters

      Segovia is at 1,000 meters, and so is most of the top half of the "Meseta". https://en-gb.topographic-map.com/map-763q/Spain/?center=41....

      I still stand by not trusting anything AI spits out, be it code or text. It usually takes me longer to check that everything is OK than to do it myself, but my brain is enticed by the "effort shortcut" that AI promised.

      • nl 18 hours ago
        I'm not an expert on the geography of Spain, and it's rare that I'd defend Grokipedia, but in this case I think it is correct.

        Meseta Central means "central tableland". Segovia is on the edge of the mountain range that surrounds that tableland, but it is often referred to as part of it. This is fuzzy, though.

        Wikipedia says: The Meseta Central (lit. 'central tableland', sometimes referred to in English as Inner Plateau) is one of the basic geographical units of the Iberian Peninsula. It consists of a plateau covering a large part of the latter's interior.[1]

        Looking at the map you linked, the flat part is between 610 and 760 meters.

        Finally, when speaking about the Iberian Peninsula Wikipedia itself includes this:

        > "About three quarters of that rough octagon is the Meseta Central, a vast plateau ranging from 610 to 760 m in altitude."[2]

        [1] https://en.wikipedia.org/wiki/Meseta_Central

        [2] https://en.wikipedia.org/wiki/Iberian_Peninsula

        • anthk 12 hours ago
          Spaniard here. Spain is tricky: it's both 'flat', with the meseta, and the 2nd most mountainous country in Europe. I am not kidding; look at a height map. It has a plateau... surrounded by mountains, and with a bigass sierra at mid-north (Picos de Europa).
      • charcircuit 21 hours ago
        Grok does cite that claim as being from https://countrystudies.us/spain/30.htm a page in Eric Solsten and Sandra W. Meditz, editors. Spain: A Country Study. Washington: GPO for the Library of Congress, 1988.

        The nice thing about Grokipedia is that if you have counterexamples like that, you can provide them as evidence to change it, and it will rewrite the article to be clearer.

        • malfist 18 hours ago
          You know what other site you can provide evidence to and change to be more correct?
          • charcircuit 11 hours ago
            Not Wikipedia, as Wikipedia doesn't care about evidence. Those people care about reputable secondary sources and will ignore you when you point out evidence that contradicts such sources.
          • homebrewer 17 hours ago
            I don't ever edit English Wikipedia because my English is not nearly up to the standard, and my suggestions for improvement (worthwhile, IMO) are usually ignored. Grok at least won't ignore you. (I tend to post suggestions to unpopular pages with sparse edit histories, which is probably the reason they go unnoticed.)
            • 6510 12 hours ago
              I used to frequent IRC channels and forums where no such thing as an old question existed. Someone would ask an interesting question on IRC and days or weeks later a response would happen. On forums the response could be "delayed" by more than a year. Gradually things shifted to newer new new news that couldn't possibly be new enough. Then debates happen where people sometimes link to the vastly superior olds. Wikipedia finally caught up, and questions are no longer ignored. Instead they are archived long before an ignored status could be earned.
    • bawolff 20 hours ago
      > I find it very interesting that the main competitor to Wikipedia which is Grokipedia

      Encyclopedia Britannica (the website, not the printed book) is the main competitor to Wikipedia and gets an order of magnitude more traffic than Grokipedia. Right now Grokipedia is the new kid on the block. It remains to be seen whether it's just a novelty or has staying power, but either way it still has a ways to go before it's Wikipedia's primary competitor.

    • Sharlin 20 hours ago
      Main competitor? I’m pretty sure that Uncyclopedia is a more relevant competitor to Wikipedia than Grokipedia. Likely more accurate, too.
      • simianwords 10 hours ago
        In some time it will become a serious alternative.
        • bawolff 3 hours ago
          Maybe, maybe not. There are a ton of failed attempts at competing with Wikipedia out there. Whether Grokipedia is one remains to be seen.
        • Sharlin 10 hours ago
          Serious alternative if what you're after is false information, certainly.
    • oblio 12 hours ago
      That thing is "the main competitor to Wikipedia" in the same way I'm the main competitor for the Olympic 100m race. I mean, both I and the winner have legs so it's going to be a close race, right?
      • simianwords 10 hours ago
        It’s on its way to becoming more popular and a clear competitor to it. Just a matter of time.
        • LightBug1 8 hours ago
          Is that "more popular" in the sense of McDonald's popular?
          • alt227 1 hour ago
            The reason why doesn't really matter, does it?

            If more people use it, then it is more popular. Simple metric.

            • LightBug1 35 minutes ago
              Well, kinda ...

              Approximately 3.2m Americans drink Hard Kombucha.

              Approximately 5m Americans use Cocaine.

              Simple metric.

          • shmeeed 5 hours ago
            No, it's more the "a thousand flies can't be wrong" type. SCNR
          • simianwords 7 hours ago
            Yeah so?
    • LightBug1 8 hours ago
      I wouldn't touch that Grokipedia POS with your bargepole... let alone mine.
  • Lapsa 9 hours ago
    Wikipedia is great, but I can't get over this - https://www.wikifunctions.org/view/en/Z16393
    • alt227 1 hour ago
      Have you tried going under it? /s
  • PlatoIsADisease 9 hours ago
    ITT: people saying what I got downvoted for in the last Wikipedia HN thread.

    I don't care if AI is used. I care about citations.

    I don't know what happened between that thread and this one; maybe the narrative really changes how people respond.

  • genie3io 13 hours ago
    [dead]
  • huflungdung 9 hours ago
    [dead]
  • ks2048 20 hours ago
    [flagged]
    • ragesoss 18 hours ago
      lol. would have written something shorter for HN, but the main expected audience for it was Wikipedians.
  • asyncadventure 20 hours ago
    [dead]
    • HPsquared 20 hours ago
      This goes much further than Wikipedia, it's just particularly visible there.
    • gwern 19 hours ago
      Thanks for the LLM comment, but that's dumb. If the problem really was as bad with humans (it obviously is not), then OP wouldn't've happened:

      > For most of the articles Pangram flagged as written by GenAI, nearly every cited sentence in the article failed verification.

      • chr15m 18 hours ago
        Agree. I'm curious about the human contribution baseline.
  • vibeprofessor 16 hours ago
    I trust Grokipedia way more, even though it's AI-generated. Wikipedia on any current topic is dominated by various edit gangs trying to push an agenda
    • coffeebeqn 6 hours ago
      Ah yes, Elon Musk, the man with no agenda.
    • kmeisthax 3 hours ago
      Grokipedia is a pile of propaganda written by an AI that moonlights as a CSAM generator, built to serve as a weapon in a culture war being waged by a bunch of billionaires trying to normalize pedophilia by selling it to neo-Nazis.

      For now, I think I'll take the Wikipedia edit gangs.