Sam Altman may control our future – can he be trusted?

(newyorker.com)

1076 points | by adrianhon 18 hours ago

107 comments

  • ronanfarrow 15 hours ago
    Ronan Farrow here. Andrew Marantz and I spent 18 months on this investigation. Happy to answer questions about the reporting.
    • cs702 15 hours ago
      Thank you for coming on HN and offering to answer questions.[a]

      This is a fantastic piece, very timely, evidently well-researched, and also well-written. Judging by the little that I know, it's accurate. Thank you for doing the work and sharing it with the world.

      OpenAI may be in a more tenuous competitive position than many people realize. Recent anecdotal evidence suggests the company has lost its lead in the AI race to Anthropic.[b]

      Many people here on HN who develop software prefer Claude because they think it's a better product.[c]

      Is your understanding of OpenAI's current competitive position similar?

      ---

      [a] You may want to provide proof online that you are who you say you are: https://en.wikipedia.org/wiki/On_the_Internet%2C_nobody_know...

      [b] https://www.latimes.com/business/story/2026-04-01/openais-sh...

      [c] For example, there are 2x more stories mentioning Claude than ChatGPT on HN over the past year. Compare https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru... to https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...
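
      For anyone who wants to reproduce the count in [c], here is a minimal sketch against the public Algolia HN Search API (the one-year window, the exact query terms, and the use of raw story counts as a proxy for preference are all assumptions on my part; nbHits is simply the total number of matching stories):

          # Count HN stories matching a query over a trailing window,
          # using the public Algolia HN Search API.
          import time
          import requests

          def hn_story_count(query: str, days: int = 365) -> int:
              cutoff = int(time.time()) - days * 86400
              resp = requests.get(
                  "https://hn.algolia.com/api/v1/search",
                  params={
                      "query": query,
                      "tags": "story",
                      "numericFilters": f"created_at_i>{cutoff}",
                      "hitsPerPage": 0,  # we only need the count, not the hits
                  },
                  timeout=10,
              )
              resp.raise_for_status()
              return resp.json()["nbHits"]

          for term in ("Claude", "ChatGPT"):
              print(term, hn_story_count(term))

      Raw counts say nothing about karma or story quality, so treat the ratio as a rough signal at best.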

      • ronanfarrow 11 hours ago
        Thank you for this, very much appreciate the thoughtful response.

        The piece captures some of the anxieties within OpenAI right now about their competitive position. This obviously ebbs and flows but of late there has been much focus on Anthropic's relative position. We of course mention the allegations of "circular deals" and concerns about partners taking on debt.

        • cs702 9 hours ago
          Thank you. Yes, I saw that. The company's always been surrounded by endless talk about insane hype, speculative bubbles, and financial engineering. I wasn't asking so much about that.

          I was asking more about your informed view on how OpenAI's technology, products, and roadmap are perceived, particularly by customers and partners, in comparison to those of competitors.

          If you have an opinion about that, everyone here would love to hear about it.

          • Ericson2314 4 hours ago
            Ronan Farrow's expertise is investigations into elite amorality, not evaluating technical products. Why are you asking this question?
            • cs702 3 hours ago
              I didn't ask him to evaluate them. I asked him how customers and partners perceive them.

              He's had so many conversations that he likely has a sense of how perceptions of the company and its offerings have changed.

              I'm curious.

            • bloppe 29 minutes ago
              Much of the article and the general palace intrigue is predicated on the idea that OpenAI has a singularly revolutionary product. If it later turns out to be a commodity, or OpenAI is simply outcompeted nonetheless, then the idea that Sam Altman's personal shortcomings are something to stress about would seem quaint. Just another hubristic tech billionaire acting in bad faith doesn't really command attention the way someone "controlling your future" does.
          • irishcoffee 4 hours ago
            My guess is that the answer to your question (fantastic question, by the way) is that nobody knows. I remember having the same thoughts when Covid was first “arriving,” if you will: we wanted people in the know to throw us a nugget of information, and they just didn’t know.

            As it turns out, and what I’m kind of going with for this LLM shit, is that it’ll play out exactly how you think it will. The companies are all too big to fail, with billionaire backers who would rather commit fraud than lose money.

      • unsupp0rted 7 hours ago
        Many of us prefer OpenAI's Codex, because we think it's a better product.

        No comment on the CEO: I just find the product superior in everything but UI/UX and conversation. It produces better-quality code.

        • mliker 7 hours ago
          Who is “us”? It does seem that some scientists prefer Codex for its math capabilities, but when it comes to general frontend and backend construction, Claude Code is just as good, and possibly made better by its extensive Skills library.

          Both Codex and Claude Code fail when it comes to extremely sophisticated programming for distributed systems.

          • keldaris 3 hours ago
            As a scientist (computational physicist, so plenty of math, but also plenty of code, from Python PoCs to explicit SIMD and GPU code, mostly various subsets of C/C++), I can confirm - Codex is qualitatively better for my use cases than Claude. I keep retesting them (not on benchmarks; I simply use both in parallel for my work and see what happens) after every version update, and ever since 5.2 Codex seems further and further ahead. The token limits are also far more generous (and it matters - I found it fairly easy to hit the 5h limit on max-tier Claude), but mostly it's about quality - the probability that the model will give me something useful I can iterate on, as opposed to discard immediately, is much higher with Codex.

            For the few times I've used both models side by side on more typical tasks (not so much web stuff, which I don't do much of, but more conventional Python scripts, CLI utilities in C, some OpenGL), they seem much more evenly matched. I haven't found a case where Claude would be markedly superior since Codex 5.2 came out, but I'm sure there are plenty. In my view, benchmarks are completely irrelevant at this point, just use models side by side on representative bits of your real work and stick with what works best for you. My software engineer friends often react with disbelief when I say I much prefer Codex, but in my experience it is not a close comparison.

            • ricksunny 1 hour ago
              >As a scientist (computational physicist,

              Is there one that you prefer for, I dunno, physics?

          • the__alchemist 3 hours ago
            Claude Code, Codex, and Cursor are old news. If you're having problems, it's because you're not using the latest hotness: Cludge. Everyone is using it now - don't get left behind.
            • outside1234 1 hour ago
              Cludge has been left behind by Clanker, that’s the new hotness. 45B valuation!
          • zeroxfe 6 hours ago
            I'm in that camp -- I have the max-tier subscription to pretty much all the services, and for now Codex seems to win. Primarily because 1) long-horizon development tasks are much more reliable with Codex, and 2) OpenAI is far more generous with the token limits.

            Gemini seems to be the worst of the three, and some open-weight models are not too bad (like Kimi k2.5). Cursor is still pretty good, and Copilot just really, really sucks.

          • unsupp0rted 6 hours ago
            Us = me and, say, /r/codex or wherever Codex users are. I've tried both, liked both, but in my projects one clearly produces better results and more maintainable code, and does a better job of debugging and refactoring.
            • sampullman 6 hours ago
              That's interesting, I actively use both and usually find it to be a toss up which one performs better at a given task. I generally find Claude to be better with complex tool calls and Codex to be better at reviewing code, but otherwise don't see a significant difference.
              • SOLAR_FIELDS 4 hours ago
                If you want to find an advocate for Codex that can give a pretty good answer as to why they think it's better, go ask Eric Provencher. He develops https://repoprompt.com/. He spends a lot of time thinking in this space and prefers Codex over Claude, though I haven't checked recently to see if he still has that opinion. He's pretty reachable on Discord if you poke around a bit.
              • aswanson 6 hours ago
                Any difference in performance on mobile development?
                • sampullman 4 hours ago
                  For that I'm not so sure. I tried both in early 2025 and was disappointed in their ability to deal with a TCA-based app (iOS) and Jetpack Compose stuff on Android, but I assume Opus 4.6 and GPT 5.4 are much better.
            • rocketpastsix 4 hours ago
              Yeah, I'm not in this "us" you speak of.
          • zem 5 hours ago
            I've found Claude startlingly good at debugging race conditions and other multithreading issues, though.
            • josephg 4 hours ago
              My rule of thumb is that it's good for anything "broad", and weaker for anything "deep". Broad tasks are tasks which require working knowledge of lots of random stuff. It's bad at deep work - like implementing a complex, novel algorithm.

              LLMs aren't able to achieve 100% correctness on every line of code. But luckily, 100% correctness is not required for debugging. So it's better at that sort of thing. It's also (comparatively) good at reading lots and lots of code. Better than I am - I get bogged down in details and tire quickly.

              An example of broad work is something like: "Compile this C# code to webassembly, then run it from this go program. Write a set of benchmarks of the result, and compare it to the C# code running natively, and to this python implementation. Make a chart of the data and add it to this LaTeX code." Each of the steps is simple if you have expertise in the languages and tools, but a lot of work otherwise. For me to do that, I'd need to figure out C# webassembly compilation and go wasm libraries. I'd need to find a good charting library. And so on.

              I think it's decent at debugging because debugging requires reading a lot of code. And there are lots of weird tools and approaches you can use to debug something. And it's not mission-critical that every approach works. Debugging plays to the strengths of LLMs.

          • 7thpower 5 hours ago
            Not a scientist, and I use Codex for anything complex.

            I enjoy using CC more and use it primarily for non-coding tasks, but for anything complex (honestly, most of what I do is not that complex), I feel like I am trading future toil for a dopamine hit.

        • bko 4 hours ago
          I also find Codex much more generous in terms of what you get with a Pro ($20/mo) subscription. I use it pretty much non-stop and I have yet to hit a limit. Weekly reset is much better as well.
        • enraged_camel 5 hours ago
          Yeah, there are dozens of you. Dozens!
      • brightbeige 14 hours ago
        He’s replying in this Twitter thread - perhaps someone with an account can ask there and link his comment here?

        https://xcancel.com/RonanFarrow/status/2041127882429206532#m

        • jamiequint 10 hours ago
          Here is the actual link, not a link to some weird third-party site that can't be trusted.

          https://x.com/RonanFarrow/status/2041127882429206532

          • rounce 5 hours ago
            FYI xcancel is just a mirror that allows reading replies without needing an account.
          • SwellJoe 8 hours ago
            Whereas X can be trusted?
            • jamiequint 4 hours ago
              Yes? It's the data source, not a third-party. How is this even a question?
              • minimaxir 1 hour ago
                There's pedantic, and then there's needlessly pedantic.

                xcancel is a valid workaround for X links on Hacker News and is sufficient for original attribution.

              • SwellJoe 1 hour ago
                X restricts what you can view without logging in. Many folks don't want to log in to X, for obvious reasons. Posting an xcancel link is kinda like folks posting various `archive` URLs to bypass paywalls, work around overloaded servers, etc. That's an extremely common practice here that usually goes without comment.
      • ed 5 hours ago
        It's worth noting Codex has 2x more stories than Claude https://hn.algolia.com/?query=codex
        • cloverich 54 minutes ago
          But by page 5, those stories have around 50-60 karma, while Claude's page five is still 500+.

          (I found your comment surprising based on my daily HN reading recollection - I mostly read the top N daily and feel I only occasionally see Codex stories.)

      • ATMLOTTOBEER 2 hours ago
        Yeah, we moved to Claude a few months ago, mostly because the devs kept using it anyway. The Altman stuff is interesting, but at the end of the day you just go with whatever tool works.
      • georgemcbay 12 hours ago
        > You may want to provide proof online that you are who you say you are

        Unfortunately it probably doesn't even matter here on HN considering how brigaded down this story is predictably getting.

        But yeah, it was a fantastic piece.

    • jzymbaluk 5 hours ago
      Hi Ronan, thanks for the article and for answering questions.

      My question is, how do you know when an enormous project like this, conducted over an 18-month time span is "done"? I assume you get a lot of leeway from editors and publishers on this matter. How do you make the decision to finally pull the trigger on publishing?

    • taurath 12 hours ago
      The statements around the sexual abuse allegations seemed the most puzzling to me: his sister’s allegations, and the claims of underage partners based on his tendency to hook up with younger partners. It does seem like this piece gives him a pretty clean bill of health on that matter. Would you be able to talk about how you investigated?

      Did you do any extra investigation into Annie’s allegations? It feels to me like the unstated conclusion is that recovered memory can’t be trusted, which is a popular understanding but a very wrong one, put out by the now-defunct and discredited False Memory Syndrome Foundation. That organization was founded by the parents of the psychologist who coined DARVO, directly in reaction to her accusing them of abuse.

      Dissociation is real (I have a dissociative disorder, and abuse I “recovered” but did not remember for much of my adolescence and early adulthood has been corroborated by third parties) and many CSA survivors have severe memory problems that often don’t come to a head until adulthood. I know you didn’t dismiss her claim, but the way the public tends to think about recovered memories is shaped primarily by that awful organization.

      • ronanfarrow 11 hours ago
        All fair points on trauma and memory.

        As noted in the piece, we spent months talking to Altman's partners and what we found and didn't is as described.

        • taurath 9 hours ago
          Thanks for the response! Cheers! I just fully reread the piece and appreciate your reporting.
        • girvo 6 hours ago
          It's super neat to see you here on HN taking questions, kudos :)
      • fontain 2 hours ago
        I am very sympathetic to the situation you describe. I certainly think it is possible that Annie is describing something that happened. I think the author did a fair job of representing the allegations, finding the right balance between disclosing that they were unable to corroborate the allegations without dismissing them.

        That said, "recovering" memories as a therapy does not pass any sort of sniff test and it doesn't take a concerted effort to discredit the concept. Human memory is very malleable. Patients with mental health issues (which could predate abuse, or could be caused by abuse) are often in search of answers and that makes them very vulnerable.

        Could a memory be buried deep in our subconscious, forgotten, only to return to the surface later? Sure, we all forget things and then remember them when triggered by something, whether that's a smell or sound or something else entirely. But can we engineer that process, with any degree of reliability? How can we even begin to reliably reverse engineer the triggers?

        I think it is also important to keep in mind that Annie is rich, and the health care available to rich people can be very predatory. There are endless examples of nonsense therapies for all kinds of health issues, from ear seeds to treatments for "chronic Lyme".

        Memories that return organically due to a trigger are a world apart from "recovered" memories, we shouldn't conflate them. If Annie's memories were triggered in adulthood, sure, that's really no different than remembering something... but "recovered"? That is something else entirely.

        Correct me where I'm wrong; I'd like to learn your perspective. Maybe there's a missing piece.

      • gowld 3 hours ago
        That's not a fair assessment. "False memory syndrome" and "repressed/recovered memory" are both outside scientific mainstream consensus.
      • hello_humans 6 hours ago
        [flagged]
    • cm2012 3 hours ago
      I just spent a while reading the article. I really appreciate you writing it. In my case, it made me like Sam Altman a lot more. But I was only able to conclude this because of all the evidence you took the time to put together. It paints the picture of someone trying to do something very difficult in a rapidly changing environment and under a lot of pressure, but still making the important choices and not shirking them.
      • ronanfarrow 3 hours ago
        Interesting to hear! While this hasn’t been a commonplace reaction, I think if I do my job right it should allow people to read the facts as they will, exactly like this. It’s strenuously designed to be fair and, where appropriate, even generous.
    • sebmellen 4 hours ago
      Ronan Farrow on Hacker News. Now I’ve seen everything.
      • ronanfarrow 3 hours ago
        I’ve really appreciated how substantive and polite the discourse here is, overall!
        • dang 2 hours ago
          I'm a mod here and wanted to let you know 2 things: (1) I've marked your account with a beta feature that displays a colored line to the left of new comments (since you last viewed the page). It might help you keep track of this rather large thread.*

          (2) I'm sorry the post was downranked off the frontpage for a while this afternoon. A software penalty kicks in when the discussion seems overheated ("flamewar detector") but I turned this off as soon as I became aware of it. We make a point of moderating HN less when a story is YC-related (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...) but as this goes against standard internet axioms, people usually assume the opposite.

          (* And yes, any reader who wants this is welcome to email hn@ycombinator.com to ask - I haven't turned it on for everyone because I'm worried it would slow the site down. Also, it's a bit buggy and not only have I not had time to fix it, I've forgotten what the bugs are.)

        • tootie 1 hour ago
          Not a question but just wanted to make sure you saw this:

          https://theonion.com/anyone-else-have-those-weird-dreams-whe...

    • aragonite 3 hours ago
      I had a question about reporting conventions. In the paragraph where Altman is said to have told Murati that his allies were "going all out" to damage her reputation, the claim is attributed to "someone with knowledge of the conversation", but the attribution is tucked inconspicuously into the middle of the sentence (rather than, say, leading upfront: "According to someone with knowledge of the conversation, Altman..."), and Altman's non-recollection appears only parenthetically.

      As a reader, am I supposed to infer anything about evidentiary weight from these stylistic choices? When a single anonymous source's testimony is presented in a "declarative" narrative style like here (with the attribution in a less prominent position), should we read that as reflecting high confidence on your end (perhaps from additional corroboration not fully spelled out)? And does the fact that Altman’s non-recollection appears in parentheses carry any epistemic signal (e.g. that you assign it less evidentiary weight)? Or is that mostly a matter of (say) prose rhythm?

    • fblp 6 hours ago
      Hi Ronan appreciate you being here. what would help you and others continue to do journalism like this? (including commenting on HN?)
      • ronanfarrow 3 hours ago
        This is a vast and tricky question. The business model has basically fallen out from under journalism, and especially this kind of labor-intensive investigative reporting. The media landscape is increasingly dominated by moneyed individuals and companies essentially buying up the discourse.

        I would really suggest subscribing to and finding ways to amplify independent outlets and journalists, and encouraging others to do so.

        • ricksunny 56 minutes ago
          Treating quality investigative reporting as the scarce resource it is: as one of its most well-known practitioners, can you shed any light on why Reuters would devote resources to commissioning investigative reporters to unmask Banksy (in a world where all-things-Epstein represents an unending source of investigative opportunities in the public interest)?
        • fblp 2 hours ago
          Got it! Any recommendations on who to subscribe to? Any personal links for you?

          In developer communities often you can support individual developers or groups through a monthly subscription / donation on their github page or similar.

          • mplanchard 2 hours ago
            Well, this piece was in The New Yorker, which is reasonably priced and regularly includes excellent investigative journalism. I get the physical copies, which can be too much to keep up with if you try to read everything, but it’s easy enough if you skim and just read the things that stick out as being of particular interest.
    • euio757 2 hours ago
      Nice biography from Loopt to OpenAI. Why no mention of the Worldcoin cryptocurrency https://x.com/sama/status/1451203161029427208 in this piece? Was there nothing interesting to report in that area?
    • philip1209 3 hours ago
      We talk about Sam Altman a lot. At this point he has a Hollywood movie in post-production, a book ("The Optimist"), and a seemingly endless stream of profiles. It feels intellectually lazy to keep researching the same guy when the industry is moving beyond him.

      All evidence today suggests Anthropic is passing OpenAI in relative and absolute growth. So where's the critical reporting? The DOD coverage was framed around the Pentagon's decisions, not Anthropic's. And nobody seems interested in examining whether the company that branded itself as the ethical AI lab actually is one. That seems like a story worth writing.

      • solenoid0937 3 hours ago
        > whether the company that branded itself as the ethical AI lab actually is one

        FWIW I have two(!!) close friends working for Anthropic, one for nearly two years and one for about 4 months.

        Both of them tell me that this is not just marketing - that the company actually is ethical and safety-conscious everywhere - and that this was the most surprising part about joining Anthropic for them. They insist the culture is genuine, which is practically unicorn-level rarity in corporate America.

        We have both worked for FAANG companies, so I know where they're coming from; this got me to drop my cynicism for once, and I plan on interviewing with them soon. Hopefully I can answer this question for myself.

        • root_axis 2 hours ago
          Yeah, every engineer in the bay area has a way of framing the business they work for as a benign force for good... Until they find themselves working somewhere else, then suddenly they have a lot to say about the unacceptable things going on there.

          From the outside, I find Anthropic's hyperbolic marketing to be an indication that they are basically the same as every other bay area tech startup - more or less nice folks who are primarily concerned with money and status. That's not a condemnation, but I reject all the "do no evil" fanfare as conveniently self-serving.

          • JumpCrisscross 1 hour ago
            > every engineer in the bay area has a way of framing the business they work for as a benign force for good

            This isn't remotely true in my experience. The senior folks I know at Meta, for example, pretty much concede they're ersatz drug dealers.

          • solenoid0937 1 hour ago
            TBH I have worked at multiple FAANG companies and I don't know anyone, other than maybe new grads, who actually drank the Kool-Aid.

            Certainly most of us know we are just in it for the money, and the soul-grinding profit machine will continue to grind souls for profit regardless of what we want.

            So that's why it is surprising to me when my (fairly senior) grizzled ex-FAANG friends, who share the same view, start waxing poetic about Anthropic being different and genuine. I think "maybe it is" and decide to interview. IDK, I guess some part of me wants to believe that nice things can exist.

        • Bolwin 10 minutes ago
          I find it bizarre that even the public image of Anthropic is seen as ethical after the Department of War debacle, in which they themselves admitted they had basically no qualms about their tech being used for war and slaughter, with only two very, very thin lines: mass surveillance of American citizens and fully automated weaponry with their current models.

          It only showed they were marginally more ethical than OpenAI and XAI which isn't saying much.

        • hypersoar 1 hour ago
          I can believe that such an atmosphere exists there. I can't believe that it will stay. It will be squeezed out by the drive for profit in time.
        • foolswisdom 2 hours ago
          I think cynicism is deserved just from observing Dario's remarks.
      • giwook 3 hours ago
        There may be a reason why Altman is talked about a lot. This article in particular surfaces real information and new perspectives we've not heard in this level of detail before, on some pretty significant topics that will impact you, me, and pretty much everyone we know, not only today but well into the future.

        You have a point in that Anthropic deserves some coverage too and that there are interesting perspectives that we've not heard of on that front either.

        But just because that's true doesn't mean this article isn't very much relevant and needed.

        Because it is.

        • freely0085 3 hours ago
          The New Yorker has given Anthropic plenty of coverage in past issues earlier this year.
      • ronanfarrow 3 hours ago
        For what it’s worth, the story, while focused on OpenAI, is not uncritical of Anthropic. It explores whether there is a wider race to the bottom in terms of safety, and erosion of even some of Anthropic’s commitments.
      • k1m 2 hours ago
        After the US launched its attack on Iran, the ethical AI lab's CEO wrote: "Anthropic has much more in common with the Department of War than we have differences." - https://www.anthropic.com/news/where-stand-department-war
        • mptest 1 hour ago
          "how easy it is, for those of us who play no part in public affairs, to sneer at the compromises required of those who do" - robert harris

          Not making any value judgements, but I can see how one might value their interpretability research more highly than what the CEO says, in a time when the corrupt, criminal executive branch is muscling into everything from what's written on currency to journalistic sources. I generally blame fascists before I blame those unable or unwilling to resist them. Though obviously, ideally, we'd all lock arms and, together through friendship, crush authoritarians and fascists.

          • whattheheckheck 1 hour ago
            Seriously, blame anyone other than the fucking abuser. These people...
      • basisword 3 hours ago
        OP says they’ve been working on this for 18 months. Most of what you’ve said wasn’t the case until much more recently.
      • Nevermark 3 hours ago
        We should stop talking about potential problems or perpetrators, when we have talked about them “enough”?

        That would be irrational.

        We should give air time to other problems?

        I think everyone agrees with that.

        You have managed to distill a surprisingly pure vintage of false dichotomy from a near-Platonic varietal of whataboutism.

      • xvector 3 hours ago
        Normies don't know what an "Anthropic" is. They use ChatGPT. Particularly sharp normies might know that ChatGPT is made by OpenAI, and the sharpest might know that Sam Altman is the CEO.

        Now, they may have heard the word "Anthropic" due to recent media coverage. But they don't know what it is and don't remember what it makes. The fact that all businesses use "Anthropic" is about as relevant to them as knowing the overseas shipping company for all the shit they buy off Amazon.

        So articles about OAI will always produce more revenue for the media, because it's related to what normies actually use day to day.

      • _HMCB_ 3 hours ago
        [flagged]
      • easterncalculus 3 hours ago
        [flagged]
    • f154hfds 1 hour ago
      > in 2014, [Graham] had recruited Altman to be his successor as president.

      > [Graham's] judgment was based not on Altman’s track record, which was modest, but on his will to prevail, which Graham considered almost ungovernable.

      One thing I don't understand is why Paul Graham offered YC to Altman if he knew how slippery he was.

      • sonofhans 36 minutes ago
        Perhaps your question answers itself.
    • egonschiele 4 hours ago
      Just wanted to say what an incredible person you are! Catch and Kill and the related reporting was awesome too!
      • ronanfarrow 3 hours ago
        This is so appreciated, thank you! These stories can honestly take a lot out of me so thoughtful reactions mean a lot.
    • tbagman 3 hours ago
      Wonderful work and writing, Ronan -- I'm appreciative of your careful balance between objective fact-finding and synthesis.

      For me, a big worry about AI is in its potential to further ease distorting or fabricating truth, while simultaneously reducing people's "load-bearing" intellectual skills in assessing what is true or trustworthy or good. You must be in the middle of this storm, given your profession and the investigations like this that you pursue.

      Do you see a path through this?

    • Uhhrrr 41 minutes ago
      The last couple sentences tie things up really nicely.
    • cmiles8 15 hours ago
      Great reporting.

      Altman describes his shifting views as genuine good faith evolution of thinking. Do you believe he has a clear North Star behind all this that’s not centered on himself?

      • ronanfarrow 11 hours ago
        The piece is an interrogation of this very question, at great length and with some nuance. I think what it does most usefully is scrutinize an array of different answers to the question.

        My own impression after many hours of conversation is that he is identifying something of a true north star when he frames this around "winning." There are people in the story who talk about him emphasizing a desire for power (as opposed to, say, wealth). I think he probably also believes, to some extent, the story he tells that equates winning, and his gaining power, with a superabundant utopian future for all.

        However, I think critics correctly highlight a tension between his statements about centering humanity writ large and his tilt into relentless accelerationism.

      • i7l 15 hours ago
        (Other people's) money.
    • mplanchard 2 hours ago
      Hi Ronan, absolutely wild to see you here in the belly of the beast.

      I have not read the article yet, because I get the physical magazine and look forward to reading it analog. I therefore only have an inconsequential question.

      I love the New Yorker’s house style and editorial “voice,” and I have always been curious about the editing process. I enjoyed the recent exhibit at the NYPL, which had some marked up drafts with editor feedback and author comments.

      Did you find that your editors made significant changes to the voice of the piece, and/or do you find any aspects of their editing process particularly notable or unusual?

      Can’t wait to read this one, and hope the HN crowd treats you well.

    • bck102 2 hours ago
      Have you considered doing a piece on Aaron Swartz? Timnit Gebru? Michael O. Church?
    • tsunamifury 2 hours ago
      I know why the cantilevered pool statement is there and why you mentioned it.

      I’m sure you don’t know half of the totally fucked up things Sam did to get “revenge” for the slight of a leaking pool.

    • jharohit 2 hours ago
      What model was used to create the visual at the top of the article?
    • wyldfire 4 hours ago
      Dang, can you substantiate that this is actually Mr. Farrow, as he claims?

      Or, Mr. Farrow, can you post some evidence somewhere we can see?

    • _alternator_ 4 hours ago
      Do you think the recent conflict between Anthropic and the Department of War, and the apparent bootlicking by OpenAI, have fundamentally altered the public perception of OAI? Are they the baddies now in general public opinion?
    • felixgallo 5 hours ago
      This is brilliant work, guys. Did you get any pressure to soften or spike the story?
      • ronanfarrow 3 hours ago
        I won’t get into behind-the-scenes specifics here but I think you can imagine how pressurized this topic was and the amount of heat that tends to generate. I’m used to getting a lot of blowback and it’s never fun. I just hope the work is meticulous and fair enough, and that enough people see the benefits of that, that I get to continue to do it.
        • Balgair 35 minutes ago
          Hey, just want to say thanks for the piece and for all the hard work and effort you put in to get this out there. I've published a bit here and there, and the actual writing is only ~50% of the workload (for me at least). So thanks for going through all the effort and pain to get it out; I really appreciate all the work you do for me and the rest of Joe Public.
    • artursapek 1 hour ago
      hey I loved that Ricky Gervais joke about you at the globes
    • Stevvo 5 hours ago
      Love the visual. Fantastic.
    • xnx 13 hours ago
      In-depth reporting is great. This is a really tricky topic to cover over the course of 18 months. A year and a half ago OpenAI was ascendant; now it's, at best, stalling and, more likely, trending toward irrelevance.
    • Lerc 1 hour ago
      From time to time I have been accused of being an apologist for Sam Altman, but I have always tried to assess information based upon what it says instead of whether it matches an existing narrative. You list a number of distortions in your article which show the problem. If you are a good person, bad stories about you may be fake. If you are a bad person, bad stories about you may still be fake.

      My prima facie view of Altman has been that he presents as sincere. In interviews I have never seen him make a statement that I considered to be a deliberate untruth. I also recognise that the claims people make about him go in all directions, and that I am not in a position to evaluate most of those claims. About the only truly agreed-upon aspect is how persuasive he is.

      I can definitely see the possibility of people feeling like they have been lied to if they experienced a degree of persuasion that they are unaccustomed to. If you agree to something that you feel you wouldn't otherwise have agreed to, I can see concluding that you have been lied to rather than accepting that you have been intellectually beaten.

      In all such cases where an issue is contentious, you should ask yourself, what information would significantly change your views. If nothing could change your view, then it's a matter beyond reason.

      I think you will agree that there is no smoking gun in this article, and that it is just a laying out of the allegations. Evaluating allegations becomes tricky because I think it becomes a character judgement of those making the claims.

      I have not heard a single person in all of this criticise Ilya Sutskever's character. If he were to make a statement to say that this article is an accurate representation of what he has experienced, it would go a long way.

      I think Paul Graham should make a statement. The things he has publicly claimed are at odds with what the article says he has privately claimed. I have no opinion on whether one or the other is true, or whether they can be reconciled, but there seem to be contradictions that need to be addressed.

      While I do not have sources to hand (so I will not assert this as true, but just claim it is my memory), I recall Sam Altman himself saying that he did not think he should have control over our future, and that the board was supposed to protect against that, but that since the 'blip' it was evident that another mechanism is required. I also recall hearing an interview where Helen Toner suggested that they effectively ambushed Altman because if he had had time to respond to allegations he could have provided a reasonable explanation. It did not reflect well on her.

      I am a little put off by some of the language used in the article. Things like "Altman conveyed to Mira Murati" followed by "Altman does not recall the exchange". Why use a term such as 'conveyed', which might imply there was no exchange to recall? If a third party explained what they thought Altman thought, Mira Murati could reasonably feel that the information had been conveyed while at the same time Altman has no exchange to recall. Nevertheless, it leaves an impression of Altman being evasive. If the text had said "Altman told Mira Murati", no such ambiguity would exist.

      "Later, the board was alarmed to learn that its C.E.O. had essentially appointed his own shadow board" Is this still talking about Brockman and Sutskever? I just can't see this as anything other than a claim he took advice from people he trusted. I assume those board members who were alarmed were not the ones he was trusting, because presumably the others didn't need to find out. The people he disagreed with still had votes so any claim of a 'shadow board' with power is nonsense, and if it is a condemnable offence, is the same not true of the alignment of board members who removed him.

      Josh Kushner apparently made a veiled threat to Murati. The statement that "Altman claims he was unaware of the call" casts him as evasive by stacking denial upon denial, but absent some indication not disclosed in the article, it would have been more surprising if he did know of the call. I also didn't know of the call, because I am not those two people.

      The claim of sexual abuse says, via Karen Hao, "Annie suggested that memories of abuse were recovered during flashbacks in adulthood." To leave it at that, without some discussion of the scientific opinion on previously unremembered events being recalled during a flashback, seems journalistically irresponsible.

      • clapthewind 1 hour ago
        You make very good points. Signed up to point this out to others.
    • tstrimple 1 hour ago
      Hard hitting journalism here. Is the person who lied for years to promote himself trustworthy? More news at 11!
    • rhlannx 6 hours ago
      I have the feeling that if you write an article in that style, the subject of the story becomes the hero even if you insert a couple of negatives. In the same manner that Michael Corleone becomes the hero of The Godfather.

      I'm not pleased with the headline and the general framing that AI works. The plagiarism and IP theft aspects are entirely omitted. The widespread disillusionment with AI is omitted.

      On the positive side, the Kushner and Abu Dhabi involvements (and the threats from Kushner) deserve a wider audience.

      My personal opinion is that "who should control AI" is the wrong question. In its current state, it is an IP-laundering device, and I wonder why publications fall silent on this. For example, the NYT has abandoned their crown witness Suchir Balaji, who literally perished for his convictions (murder or not).

      • ronanfarrow 3 hours ago
        For what it’s worth, I don’t think the piece at all avoids key areas of disillusionment with the technology. Quite the contrary.
    • FloorEgg 7 hours ago
      Hi Ronan,

      I would love to read your piece and pay you and The New Yorker for it, but I am not interested in paying for a subscription. If I could press a button and pay a reasonable one-time fee such as $3 or $5 for just this article, or better yet a few cents per paragraph as they load in, I wouldn't hesitate.

      However I'm not going to pay for yet another subscription to access one article I'm interested in.

      I'm sure you can't do anything about this, but I just wanted you to know.

      You deserve to be compensated for great journalism. In this case, unfortunately, I won't read it and you won't earn income from me.

      • cloud_line 6 hours ago
        You could buy a physical copy (and this isn't meant to sound sarcastic).
      • jzymbaluk 5 hours ago
        You can walk down to a bookstore or anywhere that sells magazines and buy a physical copy
      • mattbee 7 hours ago
        Or just switch your browser to Reader Mode and it's free.
      • IrishTechie 7 hours ago
        I’ve often thought about a model like this and would love to see a few news outlets run it as a pilot and see how it stacks up.
        • mikeyouse 5 hours ago
          Many have tried it (as well as the oft-recommended micropayments idea) and it never justifies the added expense and overhead of the customization. Closest is probably the NYTimes’ gift article feature.
          • Dylan16807 4 hours ago
            I really doubt the implementation difficulty is the actual reason. It's not hard to have an extra table of specific article permissions.
      • caycep 7 hours ago
        You could hit up a public library...
        • eichin 6 hours ago
          Looking online it looks like the newsstand price of an issue is around $10 (which I'd assume is heavily ad subsidized, if anyone is still buying print ads?) which is an interesting data point for a pricing model. (Of course, I looked online because I have no idea where I'd find a newsstand around here - the nearest newsstand that show up on google maps has reviews that say "It's just snacks and scratch tickets." and "three newspapers and no magazines" - I may have to stop by just to see what three newspapers they have :-)
      • CookieTonsure 4 hours ago
        [dead]
    • sieabahlpark 5 hours ago
      [dead]
    • loloquwowndueo 15 hours ago
      [flagged]
      • LoganDark 15 hours ago
        Many browsers let you disable autoplay globally.
        • loloquwowndueo 15 hours ago
          Sure, there are a couple of buttons I can press to stop the video. Why do I have to? Find me one person who likes auto-playing videos. The page was created with a deliberately annoying choice that I have to go out of my way to override.
          • binarymax 14 hours ago
            Why do you think the author of this piece, to whom you originally replied, has any control over this?
          • LoganDark 15 hours ago
            I'm not talking about pausing the video after it starts playing. I'm talking about a global setting to prevent videos from playing before you manually unpause them. Safari has such a setting, for instance.
            • loloquwowndueo 12 hours ago
              Exactly what “I have to go out of my way to override” covers, from my comment.
    • mannyv 3 hours ago
      [flagged]
    • Uptrenda 4 hours ago
      Damn, just wanted to say reporters are scary... The amount of detail here is huge. You think of hackers as the ones good at doxing... Nah, it's reporters.
    • wileydragonfly 2 hours ago
      Stop pretending you aren’t Woody’s son. The contact lenses are beyond cringe.
    • giwook 5 hours ago
      Any plans to tackle any of the other folks who might be mentioned in the same sentence as Altman, like Dario Amodei?
      • mathisfun123 5 hours ago
        [flagged]
        • yakkomajuri 5 hours ago
          I think the comment was out of legitimate interest rather than weighing one against the other
        • giwook 3 hours ago
          Huh? It's a genuine question. The article is great and the writer did a fantastic job.

          Please try to give people the benefit of the doubt, though I know it's hard in today's society.

    • stavros 3 hours ago
      There's a very minor typo in the article:

      > “Investors are, like, I need to know you’re gonna stick with this when times get hard,”

      Should be:

      > “Investors are like, I need to know you’re gonna stick with this when times get hard,”

      • JumpCrisscross 1 hour ago
        I'm not seeing a typo. Just a stylistic difference.
        • SwellJoe 35 minutes ago
          Pretty sure the correction is wrong, not merely a stylistic choice.
  • andrewrn 1 hour ago
    “By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice”

    You can see subtle residue of this frustration in Dalton and Michael’s videos when Sam Altman comes up. It’s only thinly veiled that Sam was a snake while at YC.

  • jablongo 21 minutes ago
    For me, the attempted productization of Sora was conclusive proof that 1) OAI was overcapitalized and desperate for revenue, 2) safety didn't matter much to them, and 3) improving the world didn't matter much either.

    At one point you mentioned an interaction with OpenAI staff when you were looking to interview AI Safety researchers. You were rebuffed because "existential safety isn't a thing". Does this mean that you could find no evidence of an AI Safety team at OAI after Jan Leike left? If you look at job postings, it does seem like they have significant safety staff...

  • thrwaway55 1 hour ago
    We need only ask the dead. Aaron Swartz knew what Altman is. The answer to the question in the title is no.
  • arionhardison 6 hours ago
    Hi @ronanfarrow — I have only had one interaction with Sam Altman in person, and I was advised to keep it to myself. I know this crowd may not care, but Altman is absolutely terrified of Black people — not in any contextual sense, but in a visceral, instinctive way. For someone who, as you put it, "controls our future," this should matter.

    FYI: I am by far not the only one to have experienced this and it 100% impacts hiring and other decisions at OpenAI.

    • edbaskerville 6 hours ago
      Can you give more details?

      It wouldn't particularly surprise me if Sam Altman were racist, but I'm curious what the specific incident you observed was.

      • arionhardison 5 hours ago
        Yes, but first I want to be very clear on some things.

        1. I could have hidden my identity behind a throwaway. I did not feel that would be appropriate when making this claim.

        2. I am not looking for anything, literally at all. Any follow-ups for blogs, or anything that would benefit me, I will not answer.

        3. This is NOT a new account; I am very easy to find. I am 6'1", 140 lbs.

        I was working for a company called NationBuilder and I had the opportunity to go on a work trip. Outside of a talk he had just given, I was waiting for my ride and I looked over like... damn, that's the speaker. I wanted to say hi; he damn near flagged down the police. I apologized and just decided to move on.

        Note: It was in Reno, and no, I don't want to go into details; the others are not hard to find, because I happened upon them via blog posts, so I'm sure if someone with the acumen of RF wants to know, he will find them.

        I have heard similar stories from several people in the years since. I AM NOT CALLING THIS PERSON RACIST. I am saying: he is observably scared of Black people, and that is not someone I want making decisions about how the world moves forward.

        • pesus 12 minutes ago
          Thank you for sharing this. I 100% believe it, and it lines up with my experience with other people who came from backgrounds similar to Sam Altman's - i.e. white, rich, privileged, and attending elite universities.

          I will disagree with one part - I do believe it is racism. Most will never admit it publicly, but if they think you're one of them, it often comes out rather quickly, especially when alcohol is involved.

        • arionhardison 5 hours ago
          Note, to all the downvoters: I did this publicly and not anonymously for a reason. If you will do the same, I am more than willing to provide evidence for all of these claims, as long as it's done publicly and in the open.
          • arionhardison 4 hours ago
            PG said something along the lines of: "There should be no truth that is increasingly unpopular to speak."

            If you don't believe what I shared is true, address that directly. But seeing my post sitting at 1 point and [flagged] after 2 hours is not OK. Just as DJT can't flag away his issues, you shouldn't be able to do so on HN.

            One of the things I've loved most about HN is that it was real — grounded in observability, empirical evidence, not bias or feelings. I really hope that what happened to my post is not the beginning or a continuance of the end for that ethos.

            • tastyface 1 hour ago
              Unfortunately, HN is full of racists these days. They really came out of the woodwork after Trump came to power. Just try saying anything about Musk's virulently white supremacist Twitter posting: "it doesn't look like anything to me," insta-flagged.

              I appreciate your anecdote.

  • kmfrk 12 hours ago
    Gobsmacking details about Altman's time as Y Combinator president, in case anyone's wondering.

    Fantastic reporting.

    • ronanfarrow 11 hours ago
      As is always the case with incredibly precise and rigorously fact-checked reporting like this, where every word is chosen carefully (the initial closing meeting for this one was nearly eight hours long, with full deliberation about each sentence), there is more out there on that subject than is explicitly on the page.
      • kmfrk 10 hours ago
        One of the decidedly eerier parts of this story, as you keep reading, is all the gaps between what people are saying about Altman and what they clearly want to say about Altman but can't.
        • devmor 6 hours ago
          Throughout my life, what colleagues/friends are unwilling to remark plainly on has been the most telling factor of someone’s character to me.
          • dugidugout 5 hours ago
            This can be true, I suppose, but equally I have a few friends who practically play characters, as if they've resigned themselves to a role in a sitcom. For instance: one of my friends is late to just about everything and treats everyone as if we are on-call. We plainly note this repeatedly; the friend is, I hope, equally frustrated and embarrassed by it; and in spite of this, nothing changes. This is obviously a critical element of their broader character.

            Perhaps you mean to distinguish social groups without much intimacy? To which I'm sure we could provide some convincing cases, but this seems like a silly heuristic generally.

            • rincebrain 5 hours ago
              I have been in or next to a number of social circles with such missing stairs, where for various reasons people in the groups have decided to not directly acknowledge certain Facts that are known about some members, because it would involve them confronting their hypocrisy.

              Someone cheating regularly on their partner, flagrant substance use problems, controlling people who ostracize anyone who doesn't agree with their sometimes insane perspectives...

              People will go along with quite a lot to avoid friction, especially as they get older and picking up new social circles becomes higher cost.

              It's possibly the most telling thing, when you see what people say is a hard line versus how they actually respond to it.

            • satvikpendem 1 hour ago
              Maybe they have ADHD because the symptoms fit, if they really do acknowledge the problem yet cannot fix it.
      • xnx 2 hours ago
        > where every word is chosen carefully (the initial closing meeting for this one was nearly eight hours long

        For anyone unfamiliar with this process, the New Yorker documentary is well worth the watch: https://www.netflix.com/title/81770824

      • Teever 4 hours ago
        You mention many proxies of Musk who post negative content about Altman.

        In your investigation were you able to determine if Altman has similar proxies?

        How common would you say that this is? Do these kinds of people generally have teams of people who sling mud for them?

        Can you speculate on how that manifests on a site like Hacker News?

  • neonate 7 hours ago
    https://archive.ph/hOYMn
    • calebm 28 minutes ago
      This is pretty hilarious - when I asked ChatGPT to "summarize this article: https://archive.ph/hOYMn", it said it's about Jesus ("The article traces the development of early Christian Latin hymns, especially focusing on how themes about the Virgin Mary and Christ evolved from the 4th to later centuries..." - https://chatgpt.com/share/69d48476-9bf4-8327-8c19-709865a547...)
    • nafey 1 hour ago
      I hope Ronan Farrow doesn't mind his article being shared like this.
      • stevenwoo 1 hour ago
        It’s also available via public libraries in the USA through Libby, if your local library system pays for a subscription, so it’s a way to support the magazine indirectly, since your local taxes pay for your library. The downside for a weekly is that you have to read it that week; there's no archive access.
  • swingboy 6 hours ago
    It's really interesting reading about how these folks view LLMs. Yeah, they're transformative, but I don't know that we're going to be eating ramen in a Neo-Tokyo street bar anytime soon. So much "A.G.I." mentioned in the article.
    • satvikpendem 1 hour ago
      Well I'd hope they're transformative, they're using transformers after all. We just need to pay attention to them, that's all they need.
      • kfarr 5 minutes ago
        Do they need all our attention?
    • 0x3f 6 hours ago
      It's because they're really good at the kind of busywork the average white-collar job requires. Most people are out there writing documents and making presentations. Only when you use them for actual complexity does the shortfall become clear.
    • m4rtink 2 hours ago
      I find it interesting how a lot of cyberpunk does not really include AI, or does not present it in a transformative way. There is a lot of mind uploading, implants, corpo fun, and overall technology permeating all aspects of life, but often AI itself does not actually play a big role.
      • Terr_ 1 hour ago
        Counterexamples that come to mind are Neuromancer (AI driving the plot) and Blade Runner (AI antagonists).

        A compromise thesis might be that in cyberpunk media, AI is never powerful enough or motivated enough to fundamentally reform the worldwide crapsack economic system. They don't abolish corporations, although they might take them over.

        Of course, if there was a story about an AI taking over the world into a post-scarcity society, it probably wouldn't be filed under "cyberpunk" either...

      • ehnto 1 hour ago
        It is a pretty core part of Cyberpunk the "franchise", though - both the tabletop game and the more recent video game.

        I think as well that if you look closer, many cyberpunk worlds imply AI through robots, computers with personality, etc.

      • gilgoomesh 1 hour ago
        I think you can look at Star Trek as a fairly grounded example of where current LLMs could go: the ship's computer is not autonomous in any way but it does accept fairly vague instructions and you can apparently vibe-code the holodeck.
      • satvikpendem 1 hour ago
        I find that more realistic, then, because it appears that's the trajectory we are on with regard to AI: as a tool, not a panacea.
      • Trasmatta 2 hours ago
        Deus Ex is an outlier; AI is a core part of that plot.
        • staticman2 1 hour ago
          The first cyberpunk book, Neuromancer, has a plot which revolves around an A.I. recruiting human agents to forward its plans...
    • red369 3 hours ago
      I'm going to write a silly comment here: For a moment I thought you wrote "... LLMs. Yeah, they're transformative, but I don't know that they're going to be eating ramen in a Neo-Tokyo street bar anytime soon."

      I liked that mental image a lot! (I try to stay agnostic on whether Deckard was a replicant.)

  • krackers 5 hours ago
    [1] is also good to read as a follow-up, and compare the personalities

    https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai...

    • mplanchard 2 hours ago
      This was a great article, and absolutely savage in some of its characterizations.
  • stavros 4 hours ago
    I found it very interesting that Altman et al. were worried that AI would become supremely intelligent and China would make a supervirus or some AI drones or whatnot, but not a single person was worried about destroying all jobs because we wouldn't need humans any more.

    Or maybe they were not so much "worried" as "hopeful" that they'd amass literally all the wealth in the world.

    • RealityVoid 3 minutes ago
      I think, fundamentally, the concern is misplaced. The fact that you need to work for wealth is a convention of our constraints. A change in constraints would lead to other means of distribution. It's easy to see why someone who believes more productivity is good would not see making jobs obsolete as a real problem. They would see us adapting to the new conditions in a relatively short while.
    • red369 2 hours ago
      I also find that interesting.

      And not intending to defend the motives of anyone involved, but I'm hoping we don't need to worry about literally all jobs being destroyed, and AI companies amassing all the wealth in the world.

      Don't we need at least some humans working and earning to buy these AI services? Am I not being imaginative enough? Is it possible for the whole economy to consist just of AI selling services to each other?

      I realise that even if AI destroys most jobs, or even just a lot of jobs, and amasses most wealth, or a lot of wealth, it would still be a terrible thing for humans. The word "all" could have just been hyperbole, and it is still a valid point. I just want to know people's thoughts on whether entire replacement is possible.

  • ainch 5 hours ago
    Great piece. And a good excuse to read up on the use of the diaeresis in English (e.g., coördination, reëlection) to distinguish repeated vowels - I hadn't seen the New Yorker's usage before.
    • mplanchard 2 hours ago
      They also prefer some less common spellings. For instance, just noticed “vender” instead of “vendor” in an article this morning.
    • goodoldneon 4 hours ago
      It isn't for all repeated vowels; only for when the two vowels don't make a single sound. So "chicken coop" wouldn't have a diaeresis.
      • stavros 4 hours ago
        It would if the chickens formed a business structure that was owned and democratically controlled by its member-owners.
      • OJFord 4 hours ago
        Unless it was a chicken coöp... One of few cases it actually resolves an ambiguity!
  • morleytj 5 hours ago
    Wow, this is an incredibly detailed piece. Really in-depth reporting, and the kind of thorough investigation we need more of on important topics like this.

    > "Employees now call this moment “the Blip,” after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence."

    This is a very small detail, but an instinctive grimace crosses my face at the thought of this sort of Marvel reference, and I'm not entirely sure why.

    • ytoawwhra92 5 hours ago
      They're mass media cynically produced to extract maximum profit from lowest common denominator audiences, so the idea that people working in such influential positions find them appealing enough to reference suggests they are members of that lowest common denominator audience.

      The people shaping the future have no taste.

      • eutropia 4 hours ago
        There's a time and a place for everything, and rejecting popular media as "lowest common denominator" is the most uninspired form of cultural elitism.

        Is it cynical to want your <art project> to make a profit? Or for it to make enough profit to subsidize other projects?

        Is it cynical to make something accessible so more people who watch it are able to enjoy it?

        I agree that it's embarrassing and feels crass when movies both try to be broadly appealing and simultaneously fail to be entertaining or well executed ... but many of the Marvel movies clearly surpass that bar.

        No one wants to make a bad movie that does poorly with critics and paying customers - but it does happen because making a movie is expensive and complicated and requires a lot of skilled people working together towards the same goal.

        Regarding taste: do you think a Michelin-star chef swears off cheap food like hot dogs or fish and chips? Doubtful - because those foods have their place, and the chef is able to enjoy them for what they are rather than use them as an excuse to display a superiority complex.

        • ytoawwhra92 3 hours ago
          > There's a time and a place for everything

          Yeah, I'm saying professional communication isn't the place for Marvel references, and that those who choose to include references to those movies in their professional communications are revealing something about their media tastes.

          If I'm at a Michelin star restaurant I don't want to be served a ballpark hotdog.

          • ianbutler 3 hours ago
            That they relate to the common person and aren't overly snobby?
            • ytoawwhra92 2 hours ago
              Exactly. They share the cultural sensibilities of the average person on the street, and yet they're making decisions that will shape the world for future generations. I think that's bad. I want those decisions being made by people who have a more extensive cultural education. Snobs, if you want to call them that.
              • ianbutler 51 minutes ago
                Interestingly, the smartest people I know have the widest range of media consumption and understanding. To assume that because someone uses a marvel reference they might not have a deeper cultural education is rather...limited thinking.
              • satvikpendem 1 hour ago
                Of course they're average people; why do you think tech or AI company employees are somehow above or beyond the average person? I'm not sure why you'd willingly say you'd want snobs controlling the world. That is somehow even worse and reeks of aristocracy, which is why you see replies rejecting your thoughts; it is simply not a Western ideal, or one to strive towards.
              • abustamam 2 hours ago
                I'm confused as to what your point is. Employees refer to the incident as "the blip." I got no impression that there was a formal memo that went out to the company or the media at large that officially refers to the incident as the blip, merely that employees refer to it as a blip (likely to each other, not too dissimilar to a meme).

                And while I don't think someone's media tastes ought to preclude them from making important decisions, I also disagree with your point at large. I don't think the world should be shaped by snobs. The world is already being shaped by snobs in the other sense of the word, and I don't see any indication that it's any better than the alternative.

        • mvdtnz 54 minutes ago
          Marvel movies absolutely target the lowest common denominator of film watchers. To deny that is delusional.
      • Noumenon72 4 hours ago
        When things reach a certain level of popularity they constitute "mental real estate". Your audience has heard of Groundhog Day, so there is an opening for a movie with that title to make money -- your film will start out already having name recognition and some understanding of what the movie is about.

        Thus it is a writer's job not to make references they find appealing to reveal their good taste, but to know what references their audience will find appealing and use them to help communicate concepts. If this bothers you it's because they're insulting you by saying you might be part of the audience that watches Marvel, and you had hoped reading the New Yorker would signal that you aren't.

        • ytoawwhra92 4 hours ago
          The writers of this piece didn't make the reference.
          • halter73 2 hours ago
            No, but they chose to include it. Presumably there were plenty of less apt references they chose not to include.
      • red369 2 hours ago
        I agree that these movies are really being cranked out. I hadn't even realised quite the extent of this until I went to look. But I think some of these movies are good enough that it shouldn't be disturbing that people in influential positions find them appealing:

        I know a lot of people are critical of the Rotten Tomatoes score, but I find that when a high enough percentage of reviews are positive, it is likely I will enjoy the movie. Some of the Marvel movies have a very high proportion of positive reviews (admittedly, those reviews could be just positive, not very positive). And for most in this list with a very high score, I think it's deserved.

        https://en.wikipedia.org/wiki/List_of_Marvel_Cinematic_Unive...

        Arguably, one indication of the limitations of the Rotten Tomatoes score is the number of these Marvel movies with high scores :)

        Btw, I'm not trying to convince you that if you watch the movies you'll like them. Just that they may not all be as bad as you think.

      • abustamam 45 minutes ago
        I'm an MCU fan. And while I do agree quality has gone down, I think it's hard to ignore the fact that the MCU did something really novel. They made a franchise that spanned 20+ movies and tied it up in a way that was almost universally loved by nerds and normies alike.

        Are there a lot of plot holes and retcons? Yeah. And some bad writing. And the movies that came after have been pretty meh with some exceptions.

        But for someone to say that referring to one of the highest-grossing films and franchises of all time means their decisions should be questioned is quite the stretch.

  • adrianhon 15 hours ago
  • just_once 14 hours ago
    Amazing that this article, with an actual comment from Ronan Farrow, is this far down the list while... "Scientists Figured Out How Eels Reproduce" (2022) has six times the points.
    • dang 7 hours ago
      This thread set off a software penalty called the flamewar detector.* I turned that off as soon as I saw it.

      (* This was predictable from the title, because the question in it was inevitably going to trigger an avalanche of crap replies. Normally we'd change the title to something less baity, and indeed the article is so substantive that it deserves a considerably better one. But I'm not going to change it in this case, since the story has connections to YC - about that see https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....)

  • throw4847285 10 hours ago
    A new Ronan Farrow piece is a rare gift (and Marantz is no slouch). Can't wait to read this in the physical magazine when it arrives!
    • geokon 51 minutes ago
      I hadn't heard of him before. The wiki article is worth a look

      https://en.wikipedia.org/wiki/Ronan_Farrow

      It's got to be one of the most unusual biographies of a living person that I've ever come across. Nearly every sentence is a head-turner. If you made it up, no one would believe you.

  • pharos92 5 hours ago
    We focus these critiques far too much on the face rather than the underlying mechanics. Just like in politics, we critique the personality/politician, yet the underlying system architecture evades scrutiny.

    Sam Altman clearly has a long history of nefarious activity. But the underlying threat posed by AI to society, the economy, and human freedom persists with or without his presence.

    • xgulfie 4 hours ago
      It's because we only really know one economic system, but we've known many people.
    • j2kun 4 hours ago
      [dead]
  • 6Az4Mj4D 4 hours ago
    I am in my 40s and going to be made redundant this June. In the future, only people who can afford tools like Claude and OpenAI - and, most importantly, can create more value using them than others can - will be able to survive. Otherwise, the game is more or less over, and I question what's next for my own future while I learn to use Claude out of FOMO. I can't trust Sam or the others to have any interest in keeping this tech affordable for common people like me.
  • wk_end 6 hours ago
    This anecdote is so absurd it sounds like satire. This is the guy with the $23M mansion?

    > Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied.

    • simoncion 5 hours ago
      He's a liar and untrustworthy. Based on their public statements, that's a big part of why the board fired him.

      Of course, (despite the fact that Altman previously publicly stated that it was very important that the board can fire him) he got himself unfired very quickly.

  • ambicapter 5 hours ago
    I didn't have the mental energy to read the whole thing, but man, the final paragraph is some really good writing. Way to tie it all together.
    • krackers 4 hours ago
      The entire thing is a joy to read; you should really set aside some time to cleanse your palate in this age of LLM prose. I mean, just look at this juxtaposition:

      >Altman continued touting OpenAI’s commitment to safety, especially when potential recruits were within earshot. In late 2022, four computer scientists published a paper motivated in part by concerns about “deceptive alignment,” in which sufficiently advanced models might pretend to behave well during testing and then, once deployed, pursue their own goals.

      (plus it finally resolves the mystery of "what Ilya saw" that day)

      Also since it wasn't stated clearly

      >“the breach” in India. Altman, during many hours of briefing with the board, had neglected to mention that Microsoft had released an early version of ChatGPT in India

      That was Sydney, if I understand correctly.

  • avaer 41 minutes ago
    Who would you trust more: Sam Altman, or a council of 1000 representative AI models?
  • steve_adams_86 5 hours ago
    > Amodei, in one of his early notes, recalled pressing Brockman on his priorities and Brockman replying that he wanted “money and power.” Brockman disputes this. His diary entries from this time suggest conflicting instincts. One reads, “Happy to not become rich on this, so long as no one else is.” In another, he asks, “So what do I really want?” Among his answers is “Financially what will take me to $1B.”

    I can't imagine having such uninspired thoughts and actually writing them down while in a role with such diverse and worthwhile opportunities. I'd like to ask "how the hell do these people find themselves in these positions", but I think the answer is literally what he wrote in his diary. What a boring answer. We need to filter these people out at every turn, but instead they're elevated to the highest peaks of power.

    • ks2048 3 hours ago
      It's not surprising. I've made this comment on HN before, but if you follow him on Twitter, it's pretty remarkable - he's the CTO of one of the most important technology companies in the world, and he has never (that I've seen) posted anything with technical insight, or just anything interesting about technology. It's just boring truisms, clichés, empty statements, etc.
    • chromacity 4 hours ago
      Eh. It doesn't start or stop with people like Altman, Zuckerberg, or Nadella. I think it's a symptom of a broader problem in tech. Half the people on this site made a decision to work at companies that do shady things, and they did that to maximize personal wealth.

      The difference isn't that the average techie doesn't dream of making a billion by any means necessary; it's that most of us don't think we have a shot, so we stick to enabling lesser evils to retire with mere millions in the bank.

      • skybrian 3 hours ago
        I don't think it's all that hard to avoid working on anything shady. It's not as easy to avoid being associated with anything shady due to widespread cynicism and a tendency to treat tech companies with thousands of projects as a monolith.
      • ggregoire 3 hours ago
        > The difference isn't that the average techie doesn't dream of making a billion by any means necessary

        That's actually the difference: most people don't want a billion.

      • bluefirebrand 3 hours ago
        > The difference isn't that the average techie doesn't dream of making a billion by any means necessary

        I hope that's not true. If it is, we live in a bleak world indeed.

        I can confidently say I've never once dreamed of having billions. I've never wanted billions. Not even in a fanciful manner. What would I do with that money? Buy mansions and megayachts? That's loser stuff

        Most of what I want out of life cannot be bought. The pieces that come with a price tag, like a comfortable home, do not require billions

        I think only sociopaths want billions because they don't understand spending your life seeking things that actually matter, like family and human connection

    • kevinqi 4 hours ago
      It is disappointing, but is it shocking that the people most driven by gaining money/power are the ones most successful at achieving it?
      • steve_adams_86 4 hours ago
        What sticks out to me most is that humanity consistently fails to weed these creatures out and regulate society. It's a bug in our social software; we seem to like these broken people rather than recognize that they're a liability.
        • hackable_sand 3 hours ago
          Trust is not a bug.

          You need to accept that every generation, some people are going to try and fuck things up.

          Then you get to decide whether to stop them or help them.

        • basket_horse 3 hours ago
          This isn't a bug. It's the driving force of our capitalist society. We are not trying to weed them out. We are trying to encourage them. It's pretty simple: when they get rich, so do all their investors.
    • dolebirchwood 5 hours ago
      Sociopaths don't have much going for them in life other than winning status games.
      • buzzerbetrayed 4 hours ago
        "Sociopath" is the next word that people seem to want to entirely destroy the meaning of.
        • dolebirchwood 4 hours ago
          Struck a nerve?
          • JumpCrisscross 4 hours ago
            > Struck a nerve?

            No need to be petty. They have a point. We did this with the words racist and fascist. Overinclusion diluted the term and gave cover for the actual baddies to come in. I'm not sure debating who is and isn't a sociopath is as useful as, say, the degree to which Sam is a liar (versus visible).

            • greenchair 3 hours ago
              Speaking of overinclusion, 'wild' is my nominee for 2026 as I'm seeing it all over the place.
              • JumpCrisscross 1 hour ago
                > 'wild' is my nominee for 2026

                I don't know how to define the delineation I'm about to propose. But there is a difference between overinclusivity trashing a morally-loaded, potentially even technical, term, and slang evolving.

            • nixosbestos 9 minutes ago
              I would be curious to hear you expand on that. Walk me through it - maybe a small paragraph explaining what overinclusion happened with the word "fascist", which baddies you're vaguely referring to, and how those dots connect?
            • rexpop 3 hours ago
              I'm sorry, we did what with the word "racist"?
              • JumpCrisscross 2 hours ago
                > we did what with the word "racist"?

                “Overinclusion diluted the term and gave cover for the actual baddies to come in.” The next sentence.

      • kakacik 4 hours ago
        While true - and we can see them literally everywhere there is some money and/or power (even minuscule places like classic banks easily have 1/3 of the staff with clear sociopathic traits, and I have to deal with them daily... or the whole of politics) - that's just human nature, or part of it.

        It's up to the rest of society to keep them in check, since classic morals are highly optional and considered a nuisance blocking those games. And here we, the rest, fail pretty miserably, while having on paper the perfect tool: the majority vote.

      • lokar 5 hours ago
        Or, some fraction of otherwise good/normal people who “win” are turned into sociopaths by the power and sycophancy.
    • xorgun 4 hours ago
      [dead]
  • bootload 3 hours ago
    “By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice”

    This statement rings true.

    JL, PG has often mentioned, is his weapon for testing the "people"/integrity aspect of YC startups. It's not lost on me that Altman and Thiel, both associated with YC, were useful short-term only, highlighting how regular "character" evaluations are required at higher levels of responsibility.

    • jacquesm 2 hours ago
      I don't think they were useful at all. If anything, they pulled down YC's until-then stellar reputation.
      • argee 1 hour ago
        At least two of YC's early (mid-aughts) "huge" successes come down to PG unilaterally (or with some help from JL) making some kind of "weird" call. Airbnb and Reddit come to mind. Even Stripe can be traced to him, since he basically created the Auctomatic team (Patrick Collison's previous YC entry).

        In other words, PG had the "knack" for sometimes encouraging the right weird thing. I'm not sure it's been the same since he handed off the reins - like any other formerly-founder-led company. Nowadays it really gives off the vibe of bean-counting and hype-chasing.

        I don't think it's gotten quite as bad as this [0] article suggests, though.

        [0] https://stanfordreview.org/is-yc-for-cowards/

      • bootload 1 hour ago
        “Today’s news comes at an interesting time. Last week, Business Insider’s Jonathan Marino reported that YC is close to raising several billion dollars for a new fund, with the goal of possibly expanding its scope to later stage funding. It said it’s still in preliminary discussions for this new strategy, but if true, Thiel could definitely play a big role there.”

        My recollection was that Thiel was injecting cash - a money deal. [0] There was another, less advertised play: an established path for the Thiel "Boy Wonder Fellows". [1]

        “In addition to founding PayPal and Palantir and being the first investor in Facebook, Peter has been involved with many of the most important technology companies of the last 15 years, both personally and through Founders Fund, and the founders of those companies will generally tell you he has been their best source of strategic advice. He already works with a number of YC companies, and we’re very happy he’ll be working with more.”

        Guess who was involved in the Thiel / YC deal? [2] You are not the only one seeing this as a reputation hit for YC. [3] Even I, disconnected on the other side of the world, could see this was an issue.

        [0] https://www.inc.com/business-insider/peter-thiel-is-joining-...

        [1] https://boingboing.net/2016/08/25/peter-thiel-y-combinator-f...

        [2] https://www.ycombinator.com/blog/welcome-peter/

        [3] https://qz.com/810778/y-combinator-has-no-problem-with-partn...

        • jacquesm 19 minutes ago
          Having Thiel on the board of YC would probably turn off a lot of potentially successful founders. Or maybe it's a way to select for those with a lack of ethics. Having Musk and Thiel visibly associated is probably good from a monetary perspective, but it sends all kinds of bad signals.
  • b8 1 hour ago
    Sam failed upwards.
  • einrealist 5 hours ago
    I don't trust anyone who claims that LLMs today are superhumanly intelligent. All they do is perform compute-intensive brute-force attacks on the problem/solution space and call it 'reasoning', all while subsidising the real costs to capture the market. So much SciFi BS and extrapolation about a technology that is useful if adopted with care.

    This technology needs to become a commodity to destroy this aggregation of power among a few organizations with untrustworthy incentives and leadership.

    • shruggedatlas 4 hours ago
      Your brain is performing "compute-intensive brute-force attacks on the problem/solution space" as you read this very sentence. You trained patterns on English syntax, structure, and semantics since you were a child and it is supporting you now with inference (or interpretation). And, for compute efficiency, you probably have evolution to thank.
      • JohnMakin 3 hours ago
        People like to say this as if it's apples to apples, but this comparison isn't remotely how the brain actually works - and even if it were, the brain does it automatically, without direction, and at an infinitesimal percentage of the power required.

        And we're just talking about cognition - it completely ignores the automatic processes such as maintaining and regulating the body and its hormones, coordinating and maintaining muscles, and visual/spatial processing that takes in massive amounts of data at a very fine scale and informs the body what to do with it - I could go on.

        One of the more annoying things about this conversation is you don’t even need to make this argument to make the point you’re trying to make, but people love doing it anyway. It needlessly reduces how amazing the human brain is to a bunch of catchy sci fi sounding idioms.

        It can be simultaneously true that transformer based language models can be very smart and that the human brain is also very smart. It genuinely confuses me why people need to make it an either/or.

        • igggh 54 minutes ago
          Great post
      • stonyrubbish 3 hours ago
        Human cognition is nothing like AI "cognition." It really bothers me that people think AI is doing the same thing the human mind does. AI is more like a parrot which is trained to give a correct-looking response to any question. The parrot doesn't think, doesn't know what it's doing, etc.; it just does it because it gets a treat every time a "good" answer is prompted. This is why it can't do things like know how many parentheses are balanced here: ((((())))))  (you can test this - a quick sketch below). It doesn't have any kind of genuine cognition.
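
        For anyone who wants to run that test, here's a minimal sketch in Python (the helper name is just illustrative):

            # Track parenthesis depth left to right.
            # Returns 0 for a balanced string, a positive surplus of "(",
            # or None if a ")" ever appears without a matching "(".
            def paren_balance(s):
                depth = 0
                for ch in s:
                    if ch == "(":
                        depth += 1
                    elif ch == ")":
                        depth -= 1
                        if depth < 0:
                            return None  # unmatched ")"
                return depth

            print(paren_balance("((((())))))"))  # None: five "(" but six ")"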
        • saxonww 2 hours ago
          > Human cognition is nothing like AI "cognition."

          I've wondered about this. Do we really know enough about what the human brain is doing to make a statement like this? I feel like if we did, we would be able to model it faithfully and OpenAI, etc. would not be doing what they're doing with LLMs.

          What if human cognition turns out to be the biological equivalent of a really well-tuned prediction machine, and LLMs are just a more rudimentary and less-efficient version of this?

          • davebren 1 hour ago
            Yes, we do. Humans share the statistical-association ability that LLMs possess, but also have conscious meaning and understanding. This is a difference in kind, and it means we can generalize beyond the statistical pattern associations we've extracted from data, so we don't require trillions of examples to develop knowledge.

            Theoretically, a human could sit alone in a dark room, knowing nothing of mathematics, and come up with numbers, arithmetic, algebra, etc...

            They don't need to read every math textbook, paper, and online discussion in existence.

            • saxonww 1 minute ago
              The point I'm trying to make is that I don't think we know, so we can't say either way.

              In your example, would the human have ever had contact with other humans, or would it be placed in the room as a baby with no further input?

            • AstroBen 57 minutes ago
              Our DNA does contain our pre-training, though. It's not true that we're an entirely blank slate.
              • davebren 34 minutes ago
                Pre-training is not a good term if you are trying to compare it to LLM pre-training. Closer would be the model's architecture and learning algorithms, which have been designed through decades of PhD research, and my point on that is that the differences are still much greater than the similarities.
        • chpatrick 3 hours ago
          This is such a boring cliche by now. "Thinking" and "knowing what it's doing" are totally vague notions that we barely understand about the human mind, but in every comment section about AI, people definitively state that LLMs don't do them, whatever they are.
          • davebren 1 hour ago
            This is the epitome of learned helplessness, that you need a neuroscience paper to tell you what thinking and knowledge is when you experience it directly all the time, and can't tell that an LLM doesn't have it. Something is extremely evil about these ideologies that are teaching people that they are NPCs.
          • stonyrubbish 3 hours ago
            They aren't so vague that you would argue the parrot is thinking.
      • wil421 3 hours ago
        If you think this way, then why not talk to LLMs exclusively? Don't let the oxytocin cloud your ability to problem-solve.
      • slopinthebag 3 hours ago
        I get you're trying to do the whole "humans and LLMs are the same" bit, but it's just plainly false. Please stop.
    • stavros 4 hours ago
      > All they do is perform compute-intensive brute-force attacks on the problem/solution space and call it 'reasoning'

      If they discover the cure to cancer, I don't care how they did it. "I don't trust anyone who claims they're superhumanly intelligent" doesn't follow from "all they do is <how they work>".

      • bjacobel 2 hours ago
        Has generative AI made material progress on curing cancer? Has it produced any breakthroughs, at all?
        • igggh 59 minutes ago
          In b4:

          - it's the worst it'll ever be

          - big leaps happened the past few months, bro

          Etc.

          Personally I think LLMs can be very powerful in a narrow band. But the more substance a thing involves, the more a human needs to be involved.

      • stonyrubbish 3 hours ago
        > "I don't trust anyone who claims they're intelligent" doesn't follow from "all they do is <how they work>".

        It kind of does if how they work is nothing like genuine intelligence. You can (rightly) think AI is incredible and amazing and going to bring us amazing new medical technologies, without wrongly thinking its super amazing pattern recognition is the same thing as genuine intelligence. It should be worrying if people begin to believe the stochastic parrot is actually wise.

        • einrealist 1 hour ago
          I can slow down the compute by a factor of a thousand. It would not change the result. But it changes the economics. We only call it intelligent because we can do the backpropagation and the inference (and training) fast enough, and with enough memory, for it to appear this way.
        • stavros 3 hours ago
          If LLMs can come up with superhumanly intelligent solutions, then they're superhumanly intelligent, period. Whether they do this by magic or by stochastic whatever doesn't make any difference at all.
      • bigyabai 4 hours ago
        That's moonshot logic that reinforces the parent's point. You'd absolutely care if the AI's cure to cancer entailed full-body transplants or dismemberment.
        • JumpCrisscross 4 hours ago
          > You'd absolutely care if the AI's cure to cancer entailed full-body transplants or dismemberment

          That's not a cure. Like yes, I'd care if the AI says it cures cancer while nuking Chicago. But that isn't what OP said.

        • Noumenon72 4 hours ago
          "The cure for cancer" as a phrase doesn't include those solutions. If the headline was "Pope discovers the cure for cancer" and those were his solutions you would say "No he didn't." OP was referring to AI discovering the cure for cancer that cancer research is working towards.
    • crazylogger 3 hours ago
      If all they do is "just" brute-force problem solving, then they are already bound to take over R&D and other knowledge work and exponentially accelerate progress, i.e., the sci-fi "singularity" BS ends up happening all the same. Whether we classify it as true reasoning is just semantics.
    • Rover222 3 hours ago
      Yeah and everything is just atoms. If you reduce anything enough it’s not real.
    • semiinfinitely 3 hours ago
      calculator is superhumanly intelligent
  • HardwareLust 13 hours ago
    Of course he cannot be trusted. Anyone whose motivation is based on greed is by nature untrustworthy.
    • throwway120385 6 hours ago
      Even if your motivation is some utopian vision of the future, you should not be trusted. Utopia is a thought experiment in a philosophy of living taken too far, not something to be reached for earnestly.
    • davebren 1 hour ago
      Not just the greed. The whole "AI is so dangerous that we must be the ones to build it to save humanity" thing, and then gaslighting yourself and everyone around you into believing that your language model is AGI. This is some weird, detached-from-reality cult behavior.
      • kortex 19 minutes ago
        Complete hearsay, but I struck up a convo with someone who had spent a few hours drinking around a campfire with him and a few others at Burning Man, prior to GPT-3's popularity. Apparently he was utterly convinced of his pivotal role in shepherding in a new era with AI, to the point where it got really messianic and culty. He didn't recall much else other than just being really weirded out by the dude.
        • davebren 6 minutes ago
          The AI CEOs and most of their employees are in the same place as that guy. They're just in a more professional context and will be careful not to let their delusions of grandeur look too insane.

          I remember watching the fitness function improve while my neural net learned to recognize characters for a project I did in school, and there was something about it that felt powerful. I guess we've always had that with the machines we imbue with any sort of decision-making "intelligence", but mix that with taking psychedelics and you have an interesting cocktail.

    • hellojimbo 5 hours ago
      lol thats like 99% of planet earth, including the animals
  • BrenBarn 46 minutes ago
    Of course not. No one can be trusted to control our future.
  • slg 6 hours ago
    One thing that stands out when reading profiles like this is the number of positive and negative descriptions of the subject that agree. For example, there seems to be little dispute that Altman will happily say something that he knows/believes isn't true, there's just a lot of people who are willing to forgive any lies if the lies are in service of something they themselves agree with.
    • palata 6 hours ago
      > there's just a lot of people who are willing to forgive any lies if the lies are in service of something they themselves agree with.

      Or if the person lying is in a position of power?

  • ycui1986 4 hours ago
    He won't. If anything, OpenAI is falling behind recently, and the trend won't change easily. It's like Netscape back in the day.
  • dmitrygr 5 hours ago
    The number of "Altman doesn't remember this" or "Altman denies this" asides is hilarious.
    • jcgrillo 3 hours ago
      Life would be so much easier if I was that forgetful
  • innocenttop 10 hours ago
    Why is the story so downranked? Do folks at Hacker News have something to do with it?
    • dang 7 hours ago
      It set off the flamewar detector, a.k.a. the overheated discussion detector. I've turned that off now - this is obviously a serious article.
    • randycupertino 10 hours ago
      HN generally downvotes and/or flags anything that paints Y Combinator in a bad light. As Altman was president of YC from 2014 to 2019, that could be why this is getting downvoted.

      Articles critical of Airbnb, one of YC's biggest wins, also get flagged and taken down.

      • dang 6 hours ago
        I'm not sure whether you meant this about moderator interventions or not, but our actual practice is the opposite:

        https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

        As those comments explain, this has been the #1 rule of HN moderation from the beginning. See also https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....

        • lovich 6 hours ago
          I don't think the poster you responded to was claiming that moderators directly did this. The flagging system is open to bias from the community at large, and certain types of articles (e.g., anything critical of the current admin) get a bunch of real users organically flagging them.
          • dang 1 hour ago
            Yes, it's hard to tell sometimes but I've at least learned not to automatically take these personally. Well, partly learned.

            I don't think anyone familiar with this community would assume positive bias towards Sam, Airbnb, or even YC anymore - it's quite the contrary, from my perspective, but of course everyone notices different things and has their own view. Ditto for political slants.

            • lovich 1 hour ago
              I don't assume positive bias, but I do assume that most negative things that get people irked are removed as a result of the mechanics of the flagging system.

              Like, I don't really expect puff pieces for Y Combinator or the like to get artificially pushed to the top, but I do expect enough people who feel culturally or financially invested in Y Combinator to flag negative things into oblivion, especially as it's completely reasonable to think the population of users here has a much higher percentage of those folks than any random population sample.

  • saeranv 3 hours ago
    Greg Brockman honestly sounds like a psychopath:

    > In 2017, Amodei hired Page Hedley, a former public-interest lawyer, to be OpenAI’s policy and ethics adviser. In an early PowerPoint presentation to executives, Hedley outlined how OpenAI might avert a “catastrophic” arms race—perhaps by building a coalition of A.I. labs that would eventually coördinate with an international body akin to NATO, to insure that the technology was deployed safely. As Hedley recalled it, Brockman didn’t understand how this would help the company beat its competitors. “No matter what I said,” Hedley told us, “Greg kept going back to ‘So how do we raise more money? How do we win?’ ” According to several interviews and contemporaneous records, Brockman offered a counterproposal: OpenAI could enrich itself by playing world powers—including China and Russia—against one another, perhaps by starting a bidding war among them. According to Hedley, the thinking seemed to be, It worked for nuclear weapons, why not for A.I.?

  • 383toast 5 hours ago
    if you have to ask if someone can be trusted, they usually can't
  • slibhb 2 hours ago
    It is disconcerting how Altman has used "AI safety" as a marketing tool. The more people imagine the universe turned into paperclips, the more they invest. Obviously Altman doesn't care about safety (I don't either; I'm not an AI-doomer). But he truly does come across as someone incapable of telling the truth. Are you even a liar if honesty is not in the set of possible outcomes?

    Still, there's something oddly reassuring here: if you believe "AI safety" is essentially a buzzword (as I do), then this whole affair comes down to people squabbling over money and power. There really is nothing new under the sun.

  • ergocoder 5 hours ago
    I wonder if Sam might abandon ship soon. Other co-founders already did.

    The main reason is that he gets all the downsides without the upsides. I know $5B is a lot, but for a $700B company, it isn't. If OpenAI were a regular for-profit, he would have been worth >$100B already.

    This is probably one of the significant factors why other co-founders left too. It's just a lot of headaches with relatively low reward.

    • 0x3f 5 hours ago
      But nobody is going to just gift him the same valuation on the next company. It's not like his execution is OpenAI's moat right now. So where would he be going that's a better deal for him?
      • ergocoder 5 hours ago
        Founding his own company would be one alternative. Full control. No stigma from the non-profit part. He'd probably get the same paper money as he has now at OpenAI.
      • davebren 1 hour ago
        What is the value he adds anyway, as a delusional cult leader whom most people around him characterize as a sociopath? Is it just his ability to lie and create fear-hype?

        It's not like he had anything to do with the technical achievements, except convincing the engineers that they were doing something valuable, but the cat is out of the bag on that.

    • palata 4 hours ago
      IMHO, nobody is remotely worth $1B, period.

      The fact that some (usually toxic) individuals get there shows that the system is flawed.

      The fact that those individuals feel like they can do anything other than shut up, stay low and silently enjoy the fact that they got waaaay too much money shows that the system is very flawed.

      We shouldn't follow billionaires, we should redistribute their money.

      • simonh 4 hours ago
        If someone founds a company, grows it and owns $1bn of its stock, they don’t have $1bn in cash to distribute. They have a degree of control over the economic activity of that company. Should that control be taken away from them? Who should it be given to?

        I can see an argument when it comes to cashing out, but I’m not clear how that should work without creating really weird incentives. Some sort of special tax?

      • r14c 3 hours ago
        Big agree; at a certain point a company is big enough that its impact has to be managed democratically. I don't have an issue with effective leaders; the problem is that we reward a certain kind of success with transferable credits that don't necessarily align with people's actual talents or skills.

        I want skilled institutional investors who have a track record of making smart bets. I don't want a random person who happened to get lucky in business dictating investment policy for substantial parts of the economy. I want accountability for abuses and mismanagement.

        I know China gets a bad rep, but their bird cage market economy seems a lot more stable and predictable than this wild west pyramid scheme stuff we do in the US. Maybe there are advantages for some people in our model, but I really dislike the part where we consistently reward amoral grifters.

      • rafterydj 4 hours ago
        Well, redistributing their money is (in some cases disingenuously) exactly how they are able to pitch investors. "Sure, value my company at $10B and my shares make me $2B, but we're alllllll gonna make money when we hit AGI!!!" That kind of thing.
    • raincole 5 hours ago
      And OpenAI's influence is hugely exaggerated compared to, say, Google.
      • ergocoder 5 hours ago
        Yes, and it seems people hate him more than Google co-founders, for example.

        All the downsides without much upside...

        • georgemcbay 4 hours ago
          > Yes, and it seems people hate him more than Google co-founders, for example.

          Sergey Brin is trying to change that lately, but Altman still has a sizable head start.

  • the_arun 3 hours ago
    The main animated picture reminded me of Ravan, the evil ten-headed king from the Ramayan. Not sure whether it was intentionally done that way.
  • tines 2 hours ago
    Two "insure" typos?
    • Wyverald 2 hours ago
      In American English, "insure" can also mean "to make sure", as in "ensure", in addition to meaning "to take out insurance for".
    • mplanchard 2 hours ago
      The New Yorker prefers insure to ensure. They have a unique house style. I commented on another thread about alternative spellings like vender instead of vendor, too.
    • o0-0o 2 hours ago
      Dictation likely and not caught by editing.
  • pupppet 14 hours ago
    Ask Condé Nast if he can be trusted..

    https://www.reddit.com/r/AskReddit/s/VWJVBNzc2u

  • game_the0ry 8 hours ago
    For those curious about how sama got to where he got and stayed on top for so long, I recommend you read the book: The Sociopath Next Door by Martha Stout.

    I am fairly confident when I say this -- sama is a sociopath. I don't know how anyone with solid intuition could even come to any other conclusion than the guy is deeply weird and off-putting.

    Some concepts from the book:

    > Core trait: The defining characteristic is the absence of conscience, meaning they feel no guilt, shame, or remorse.

    > Identification: Sociopaths can be charming and appear normal, but they often lie, cheat, and manipulate to get what they want.

    > The Rule of Threes: One lie is a mistake, two is a concern, but three lies or broken promises is a pattern of a liar.

    > Trust your instincts over a person's social role (e.g., doctor, leader, parent)

    Check and check.

    OpenAI is too important to trust sama with. He needs to go. In fact, AI should be considered a public good, not a commodity pay-as-you-go intelligence service.

    • jcgrillo 3 hours ago
      I was with you right up until the final paragraph, but this made me do a double take:

      > OpenAI is too important to trust sama with.

      ...wat? They made a chat bot. How can that possibly be so existentially important? The concept of "importance" (and its cousin "danger") has no place in the realistic assessment of what OpenAI has accomplished. They haven't built anything dangerous, there is no "AI safety" problem, and nothing they've done so far is truly "important". They have built a chat bot which can do some neat tricks. Remains to be seen whether they'll improve it enough to stay solvent.

      The whole "super serious what-ifs" game is just marketing.

      • davebren 58 minutes ago
        Yeah the whole fearmongering is clearly just marketing at this point. Your LLM isn't going to suddenly gain sentience and destroy humanity if it has 10x more parameters or trains on 10x more reddit threads.

        I'm not even sure we're any closer to AGI than we were before LLMs. It's getting more funding and research, but none of the research seems very innovative. And now it's probably much more difficult to get funding for anything that's not a transformer model.

    • unsupp0rted 7 hours ago
      I suspect there's some other category, which isn't really a sociopath and isn't really a not-sociopath, which we don't have a good definition for.

      We only say a lot of CEOs are sociopaths because they're in that third category we haven't named, where they're very good at manipulating people but can also feel conscience, guilt, remorse, etc., perhaps just muted or easier to justify away.

      E.g. if you think you're doing something for the betterment of mankind, it doesn't really matter if you lie to some board members some year during the multi-decade pursuit.

      • xg15 5 hours ago
        That's not a third category, that's just a sociopath as seen by themself.
        • unsupp0rted 4 hours ago
          I doubt most sociopaths, when they’re honest, would agree they feel much guilt or remorse at all.

          Whereas the people in the category I’m describing might feel those things, but prioritize those feelings far below the benefits of achieving what they set out to achieve.

          • game_the0ry 3 hours ago
            > I doubt most sociopaths, when they’re honest, would agree they feel much guilt or remorse at all.

            Yes that is the core trait I highlighted in the 1st bullet.

      • game_the0ry 3 hours ago
        > I suspect there's some other category, which isn't really a sociopath and isn't really a not-sociopath, which we don't have a good definition for.

        There is -- I call it "corpo sociopath." The corpo sociopath really comes out in the workplace, less so in personal life.

  • tw04 3 hours ago
    I don't even need to read the article to know that he unequivocally can't be trusted. Every action he's taken to this point has shown he will say literally anything to get what he wants.
  • imagetic 32 minutes ago
    No.
  • panzi 4 hours ago
    No. Next question.
  • brandonpollack2 3 hours ago
    I haven't read it yet. The answer is no.
  • andrewstuart 13 minutes ago
    Meh. I'm no particular fan of Altman, but there's nothing in this article especially surprising or terrible.

    The whole AI safety thing has always seemed extreme to me and has turned out to be a storm in a teacup. All those prominent people who used to tell us how AI will end humanity seem to have stopped talking about it.

    I get the sense that Altman is not a particularly likeable person, but Bill Gates and Steve Jobs both seem to have scored a 10/10 on the "is this guy a jerk" rating; it's common for tech CEOs.

    So, the article and headline are dramatic, but there's not much really there.

    I think all the AI-safety-obsessed people turn out to have been the ones off course.

  • pdonis 5 hours ago
    Does the article ever actually answer the title question?
    • mohamedkoubaa 5 hours ago
      The answer is no, he can't be trusted
      • pdonis 4 hours ago
        Oh, I agree that's the correct answer. I just don't see the article actually ending up with that answer. I see it waffling. Basically, the article ends up saying that, well, we told you about all this dodgy stuff, but what he's doing is working.
        • mohamedkoubaa 3 hours ago
          Whether he can be trusted to increase shareholder value is also questionable.
        • Wyverald 2 hours ago
          God forbid an article presents all the evidence from all parties and asks you to reach a conclusion by yourself...

          Sorry for the snark. But I genuinely think the way they did this was perfect.

          • pdonis 1 hour ago
            > I genuinely think the way they did this was perfect.

            Evidently we disagree. I responded about that to another commenter downthread.

    • kubik369 4 hours ago
      I think you are misunderstanding the point of journalism. It can be debated whether the title should be such a question. Nevertheless, the article should just present information, ideally in a balanced way, without the author's bias, so that you can decide for yourself. You can see the attempts at balance in the article, where an allegation/statement about Altman is followed by a parenthetical saying that Altman recalls the exchange differently or does not remember.
      • pdonis 3 hours ago
        > the article should just present information, ideally in a balanced way, without author's bias, so that you can decide for yourself.

        I get that this is the claimed ideal of journalism, at least for straight reporting. The problem is that it's impossible.

        There isn't time or space to present all the information; the journalist has to filter. And filtering is never unbiased. Even the attempt to be "balanced" is a bias--see next item.

        "Balanced" always seems to mean "give equal time and space to each side". But what if the two sides really are unbalanced? What if there's a huge pile of information pointing one way, and a few items that might point the other way if you believe them--and then the journalist insists on only showing you a few items from the first pile, so that the presentation is "balanced"? You never actually get a real picture of the facts.

        There's a story that I first encountered in one of Douglas Hofstadter's books, about two kids fighting over a piece of cake: Kid A wants all of it for himself, Kid B wants to split it equally. An adult comes along and says, "Why don't you compromise? Kid A gets three-quarters and Kid B gets one-quarter." To me, the author of this article comes off like that adult.

        In any case, all that assumes that this article is supposed to be just straight reporting, no opinion. For which, see the next item.

        > It can be debated whether the title should be such a question.

        Yes, it certainly can. If this article is just supposed to be straight reporting--no editorializing--then that title is definitely out of place. That title is an editorial--and the article either needs to own that and state the conclusion it's trying to argue for, or it shouldn't have had that title in the first place.

  • lenerdenator 7 hours ago
    If you are asking if a single human can be trusted with such a responsibility, the answer is, by default, no.
  • Arubis 4 hours ago
    This is unfair to the original article, which is well-researched and worth a read. But the answer to this question is _always_ no. Nobody should have as much power as the oligarch class currently does, however inscrutable that power may be.
  • cm2012 5 hours ago
    I don't see anything bad about Altman in this article that can't be explained by the chaos of growing a billion-dollar company in a few years.
  • KellyCriterion 7 hours ago
    Na, it will be Dario instead of Sam, I'd say? :-))
  • jrflowers 2 hours ago
    I hope somebody just publishes The Ilya Memos. Sounds like a fun read
  • o0-0o 2 hours ago
    Hey, Ronan. Did the IPO come up at all in the research or interviews for this article? A yes or no will suffice, and color it if you want. ~_^
  • almostdeadguy 11 hours ago
    Seems this got buried from the front page very quickly
    • dang 7 hours ago
      It set off the flamewar detector. I've turned that off now.

      I only saw this thread by chance and almost didn't look, because the title made the piece sound like a flamebait blog post. Fortunately I saw newyorker.com beside the title and looked more closely.

    • ronanfarrow 11 hours ago
      There is dwindling space for sincere independent accountability reporting on big tech like this to a) be created, since it's incredibly resource-intensive and so many resources flow from Silicon Valley, and b) actually reach people, since more platforms are now owned or otherwise influenced by interested parties.

      Thank you for looking. Please do spread this kind of reporting in your communities, and subscribe to investigative outlets when you can.

      • walterbell 5 hours ago
        > OpenAI has closed many of its safety-focussed teams

        A paper with "ideas to keep people first" was (coincidentally?) published today:

          • Worker perspectives
          • AI-first entrepreneurs
          • Right to AI
          • Accelerate grid expansion
          • Accelerate scientific discovery and scale the benefits. 
        
          • Modernize the tax base
          • Public Wealth Fund
          • Efficiency dividends
          • Adaptive safety nets that work for everyone
          • Portable benefits
        
          • Pathways into human-centered work
        
        https://openai.com/index/industrial-policy-for-the-intellige...
      • almostdeadguy 8 hours ago
        This was an excellent piece with many new pieces of information in it. Thanks to you and your coauthor for getting it released.
      • big_toast 10 hours ago
        You can see the vote history here[1]. It's always hard to know exactly why something gets buried. I was a little sad to see the story down-ranked when I saw that you were here in the comments.

        But the discussion is generally pretty low-quality with these sorts of posts. People react without having read the story, or with whatever was already on their mind, or are insubstantive, or simply low-effort. I don't think you'll lose k-factor by not having a bigger post here.

        Sometimes if you talk to the mods, they'll let you know their perspective. I generally find they're correct that people are much better at contributing/disseminating new knowledge to the world on more technical topics here.

        [1]: https://news.social-protocols.org/stats?id=47659135

        • dang 6 hours ago
          Yes, I was surprised that it was downranked when I saw that too. Then I realized it had set off the flamewar detector and it was a simple matter to turn it off. I'm glad we got to this in time, because sometimes we don't, and this was an important case not to miss.
        • throw4847285 8 hours ago
          But isn't that circular? If the ranking algorithm used by the mods tends to devalue articles like this because they don't trust the user base to comment intelligently, doesn't that alter the culture of this site to make that more true?
          • dang 6 hours ago
            I'm not sure what big_toast meant, but we do trust the user base to comment intelligently (which sometimes works and sometimes not), and we don't devalue articles like this.

            We do tend to devalue titles like this, or more likely change them to something more substantive (preferably using a representative phrase from the article body), but I'm worried that if I did that here we would get howls of protest, since YC is part of the story.

            • throw4847285 4 hours ago
              I'm sure you're sick of comments about moderation, but I will say, this makes me more sympathetic to the position you're in.

              It's an interesting dilemma. Many very respected publications use provocative titles because of the attention economy. And I'm sure you have good data that provocative titles lead to drive-by comments and flame wars.

              But I don't think big_toast was entirely wrong that there is a side effect of sometimes burying articles that are by their nature provocative. And how do you distinguish a flame war over a title from a flame war over content? That's not a leading question. I don't know.

              • dang 1 hour ago
                For us the litmus test isn't the title, it's whether the article itself can support a substantive discussion on HN. If yes, then we'll rewrite the provocative title to something else, as I mentioned. Ironically this often gives the author more of a voice because (1) the headline was often written by somebody else, and (2) we're pretty diligent about searching in the article itself for a representative phrase that can serve as a good title.

                If, on the other hand, the title is provocative and the article does not seem like it can support a substantive discussion on HN, we downweight the submission. There are other reasons why we might do that too—for example, if HN had a recent thread about the same topic.

                How do we tell whether an article can support a substantive discussion on HN? We guess. Moderation is guesswork. We have a lot of experience so our guesses are pretty good, but we still get it wrong sometimes.

                In the current case, the title is baity while the article clearly passes the 'substantive' test, so the standard thing would have been to edit the title. I didn't do that because, when the story intersects with YC or a YC-funded startup, we make a point of moderating less than we normally do.

                I know I'm repeating myself but it's pretty random which readers see which comments, and redundancy defends against message loss!

  • mayhemducks 3 hours ago
    I would really appreciate it if someone in the know could explain to me how a Markov chain with some backpropagation can surpass human cognition. Because right now I call BS.
  • jesterson 13 hours ago
    Watch Altman's reaction in the Tucker Carlson interview to the question about the (alleged) murder of OpenAI researcher Suchir Balaji.

    The overall response, and particularly the body language, speaks volumes.

  • thewileyone 2 hours ago
    [flagged]
  • primer42 5 hours ago
    "Any headline that ends in a question mark can be answered by the word no."

    https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...

  • simoncion 6 hours ago
    Can Sam "The board can fire me, I think that's important." Altman be trusted?

    If for no other reason, given what happened when the board fired him... no. I'd say not.

  • jader201 6 hours ago
    Am I the only one who feels like Claude is clearly winning at code generation, and Gemini at general LLM use?

    I just don’t feel like OpenAI has a legitimate shot at winning any of the AI battles.

    Therefore, I feel like “Sam Altman may control our future” is quite a stretch.

    • guelo 6 hours ago
      Well I just canceled my Claude Pro subscription because of the mysterious limits that I don't experience with codex, even after paying for "extra usage". If Anthropic can't figure out their capacity problems they are in trouble.
      • chrisjj 5 hours ago
        I doubt Anthropic sees this as their capacity problem. They like "extra usage", and as for users who don't, well, it's their capacity problem.
    • dominotw 6 hours ago
      How is Gemini winning in general LLM? And what is a "general LLM"?
      • SwellJoe 6 hours ago
        General LLM is what Apple is paying Google for.
        • tartoran 3 hours ago
          I noticed that Apple's speech-to-text has gotten pretty good lately. Is that because they're paying Google? I'm not sure I use any other AI features from Apple, as I have Siri turned off.
    • gambiting 5 hours ago
      >>and Gemini in general LLM?

      You might be. Or at least I feel like Gemini is actually dumber than a house of bricks - I have multiple examples, just from last week, where following its advice would have led to damage to equipment and could have hurt someone. That's just from trying to work on an electronics project and asking Gemini for advice based on pictures and schematics - it just confidently states stuff that is 100000% bullshit, and I'm so glad that I have at least a basic understanding of how this stuff works, or I would have easily hurt myself.

      It's somewhat decent at putting together meal plans for me every week, but it just doesn't follow instructions and keeps repeating itself. It hardly feels worth any money right now - like it's some kind of giant joke that all these companies are playing on us, spending billions on these talking boxes that don't seem that intelligent.

      I also use Claude at work, and for C++ programming it behaves like someone who read a C++ book once and knows all the keywords but has never actually written anything in C++ - the code it produces is barely usable, and only in very, very small portions.

      Edit: I just remembered another one that made me incredibly angry. I've been reading Neuromancer on and off, and when I got back into it recently, I asked Gemini to summarise the plot only up to chapter 14, to remind myself where I was - and I specifically included the instruction that it should double-check it's not spoiling anything from the rest of the book. Lo and behold, it just printed out a summary of the ending and how the characters' actions up to chapter 14 relate to it. And that was on the "Pro" setting too. Absolute travesty. If a real-life person did that I'd stop being friends with them, but somehow I'm paying money for this. Maybe I'm the clown here.

      • staticman2 39 minutes ago
        I'm curious: did you give Gemini the entire text of Neuromancer or did you expect it to use search results for chapters 1 to 14?

        I would have just fed it the text of chapters 1 to 14 from a non-DRM copy.

  • wileydragonfly 2 hours ago
    No
  • davidmurdoch 2 hours ago
    "Good luck, have fun, don't die."
  • y1n0 3 hours ago
    Betteridge's law of headlines: no
  • lnenad 15 hours ago
    This whole situation goes to show that yesterday's conspiracy theorists are today's realists. What's happening to the USA's leadership as a country, and what's happening with its top companies, is really scary for the rest of us. If this trend continues, we're all definitely gonna end up in a kleptocracy.
  • zoklet-enjoyer 4 hours ago
    I believe Annie Altman.
    • zoklet-enjoyer 2 hours ago
      Annie Altman is more credible than a serial scammer
    • s5300 4 hours ago
      [dead]
  • ProAm 4 hours ago
    Nope, never trust this man. His history proves why you cannot. Pure greed.
  • therobots927 15 hours ago
    Excellent work. I’ll have to wait until we get the print version delivered to finish it, as I’m not signed into The New Yorker on my phone.

    I’ve always been a huge fan of Ronan Farrow’s journalism and willingness to speak truth to power. I think he’s pulling at exactly the right thread here, and it’s very important to counteract Altman’s reputation laundering given that we run a very real risk of him weaseling his way into the taxpayer’s wallet under the current administration.

  • smcg 3 hours ago
    Rule of Headlines says "no"
  • GlibMonkeyDeath 11 hours ago
    Disclaimer: I have no association with any AI company and have never met Altman or any of the other top AI scientists.

    The real question is: can anyone be trusted if the fever dreams of super-intelligence come true? Go ahead and replace Sam Altman with someone else - will it make a difference? Any other CEO is going to be under the same overwhelming pressure to make a profit somehow. I think the OpenAI story is messier because it was founded for supposedly altruistic reasons, and then changed.

    Methinks many of Altman's detractors protesteth too much. He's doing his job as it is defined (make OpenAI profitable.) Nothing of substance in this article seemed to make him exceptionally "sociopathic" compared to any other tech CEO. It goes with the territory.

    What depressed me most is that trillions of dollars are being raised for building what will undoubtedly be used as a weapon. My guess is the ROI on that money is going to be extremely bad for the most part (AI will make some people insanely rich, but it is hard to see how the big investors will get a return.) Could you imagine if the world shared the same vision for energy infrastructure (so we could also stop fighting wars over control of fossil fuels and spewing CO2?) A man can dream...

    • tim333 8 hours ago
      People do vary even if none are perfect. Demis Hassabis has a pretty good reputation amongst the AI leaders. Altman seems unusually shifty.
  • guzfip 9 hours ago
    > Lehane—whose reported motto, after Mike Tyson, is “Everyone has a game plan until you punch them in the mouth”

    lol, do you think these guys have ever been hit? Let alone in the face. They’d probably be less eager to mouth off if they had been.

  • selimthegrim 56 minutes ago
    Quite frankly, if he went and scrubbed (or had scrubbed) a Facebook thread where I got into an argument with him in 2018 (around the last time someone wrote an article about him), I can only imagine how obsessive he is about controlling his past and the information about it.
  • nickphx 5 hours ago
    Speak for yourself - he doesn't control my future.
    • vntok 4 hours ago
      Please don't leave us hanging; what makes you immune?
  • Aboutplants 15 hours ago
    Watching Sam Altman slowly come to the realization that he is in fact not as smart as others in this space has been fascinating. He used to speak with enthusiasm and confidence, and now he’s like a scared little boy who got in way too deep.

    The last person this happened to was Sam Bankman-Fried, as investors and regular folk finally realized he was completely full of shit and could only talk the game for so long before the truth emerged.

    • the_doctah 13 hours ago
      And they both peddle the same altruism smokescreen. Sociopath leader playbook.
    • therobots927 15 hours ago
      [flagged]
      • jjtheblunt 7 hours ago
        which of the two are you referring to as possibly angling for a pardon?
        • Findecanor 5 hours ago
          Bankman-Fried has already done it.
      • throwawayq3423 7 hours ago
        I have a feeling he might be angling for a pardon if he ends up bringing the whole global economy down.
  • jojobas 5 hours ago
    The guy gets called out as a sociopath by a multitude of Silicon Valley CEOs, of all people - sure, we can trust him with our future.
  • thm 16 hours ago
    Hubris.
  • seba_dos1 15 hours ago
    Looks like Betteridge's law of headlines applies here too.
  • josefritzishere 14 hours ago
    Betteridge's law of headlines is an adage that states: "Any headline that ends in a question mark can be answered by the word 'no.'"
  • ambicapter 6 hours ago
    > The day that Altman was fired, he flew back to his twenty-seven-million-dollar mansion in San Francisco, which has panoramic views of the bay and once featured a cantilevered infinity pool, and set up what he called a “sort of government-in-exile.” Conway, the Airbnb co-founder Brian Chesky, and the famously aggressive crisis-communications manager Chris Lehane joined, sometimes for hours a day, by video and phone. Some members of Altman’s executive team camped out in the hallways of the house. Lawyers set up in a home office next to his bedroom. During bouts of insomnia, Altman would wander by them in his pajamas. When we spoke with Altman recently, he described the aftermath of his firing as “just this weird fugue.”

    These sociopaths are so good at giving away nothing. He managed to engender sympathy instead of saying "I'm not gonna talk about anything that happened then".

    Also, it's very weird how many of these people are so deeply linked that they'll drop everything they're doing just to get this guy back in power. Terrifying cabal.

  • sumeno 15 hours ago
    Betteridge strikes again
  • drivingmenuts 15 hours ago
    Short answer: No. Long answer: Hell, no.
  • tylerchilds 5 hours ago
    [dead]
  • ihsw 5 hours ago
    [dead]
  • surcap526 10 hours ago
    [dead]
  • huflungdung 15 hours ago
    [dead]
  • giwook 5 hours ago
    tl;dr

    No, he cannot.

  • covercash 15 hours ago
    [flagged]
    • runevault 6 hours ago
      It is, at best, incredibly hard to accumulate that much wealth without doing shady things - Microsoft's monopolistic practices in the '90s, for example. The only person I can think of who ever cracked a billion without their money coming through dirty means is, funnily enough, J.K. Rowling, who has her own set of issues separate from the value she got out of Harry Potter.
      • balls187 6 hours ago
        John Lithgow had a take I agreed with: her opinions were heavily misconstrued, though she chose to double down, at her own peril.
    • i7l 15 hours ago
      I feel the "always have been" meme might be a suitable insert here.
    • aleph_minus_one 14 hours ago
      > Why are all billionaires (especially tech) such villains?

      Not all billionaires are villains. But it has long been known in organizational psychology that dark-triad [1] traits are very "helpful" if one wants to climb career ladders fast.

      [1] https://en.wikipedia.org/wiki/Dark_triad

    • seba_dos1 15 hours ago
      I'm not 100% sure it's strictly necessary to be a villain in order to become and remain a billionaire, but it seems like it could be - and even if it's not, it surely helps.
    • burnt-resistor 14 hours ago
      Money often changes people's attitudes in a fashion similar to chronic substance abuse. Plus, there's an insular and detached bubble effect that grows around them.

      Also, there are the psychopathic and narcissistic tendencies of greedier people, and the false "virtue" that "greed is good", which is contrary to the values espoused by Adam Smith.

      We need standard income tax brackets of 90% after $20M/y and 99% after $100M/y.

  • romeroej 6 hours ago
    Can anybody tho?
    • morleytj 6 hours ago
      Yeah, some people can more than others.
  • neya 15 hours ago
    [flagged]
  • FpUser 6 hours ago
    >"Sam Altman may control our future"

    TL;DR, but even the heading is already ugly. No single person, no matter how nice they are, should be able to control our future. Power corrupts - what fucking trust. We are supposed to be a democratic society (though, looking at what is going on around us, that is becoming laughable).

  • asK1ajsh 5 hours ago
    The New Yorker is owned by Conde Nast, as is Reddit. Conde Nast has a deal with OpenAI:

    https://www.reuters.com/technology/openai-signs-deal-with-co...

    This is a damage-control piece, and you can see that the most stinging comments here get downvoted.

    • cake_robot 4 hours ago
      What might feel like "damage control" is more likely the outcome of the even-handedness you get with serious, rigorous reporting - something the New Yorker is known for.
  • gchokov 15 hours ago
    He is cooked. Only a matter of time before the whole thing blows up. Once a scammer, always a scammer.
  • ahartmetz 15 hours ago
    Well, no, obviously not. Not one bit.
  • aduty 5 hours ago
    LOL, no.
  • nielsbot 6 hours ago
    No one person controls our future. Stop there.
    • _moof 5 hours ago
      Some people have far, far more power over our lives than others. More than they deserve, frankly.
    • mikkupikku 6 hours ago
      Yeah, but one person can fuck a lot of shit up.
  • killbot5000 6 hours ago
    No. Why is this a question?
  • Cheyana 16 hours ago
    Harvey Dent…
    • the_doctah 13 hours ago
      The brighter the picture, the darker the negative
  • LetsGetTechnicl 15 hours ago
    No
    • gonzo41 15 hours ago
      just like Zuck.
  • catigula 15 hours ago
    1. No.

    2. You cannot "control" superintelligent AI.

  • ekjhgkejhgk 15 hours ago
    No.
  • aksss 7 hours ago
    "could", "may", "might" - these words do so much heavy lifting in "journalism". Almost always it's an invitation to worry and be miserable.
  • drob518 2 hours ago
    [flagged]
  • bijowo1676 6 hours ago
    This article is just another typical New Yorker fluff piece that tries to look deep but misses the actual point.

    The biggest flaw is that it spends way too much time on high-school-level drama and "he-said, she-said" gossip about Sam Altman’s personal life instead of focusing on the actual technical and corporate capture of OpenAI.

    The author treats the "nonprofit mission" like some holy quest that was "betrayed", when anyone with a brain in tech saw the Microsoft deal as the moment the original vision died. Instead of a hard-hitting look at how compute monopolies are actually forming (MSFT, AMZN, and NVDA, plus the circular debt-dealing inflating an AI bubble that could crash the economy), we get 5,000 words of hand-wringing over whether Sam is a "nice guy" or a "liar".

    Who cares?

    The board failed because they had no real leverage against billions of dollars, not because they didn't write enough Slack messages. It's a long-winded way of saying "Silicon Valley has internal politics," which isn't news to anyone here.

  • ninjahawk1 6 hours ago
    OpenAI is like #3 or #4 among the AI companies right now in terms of power, and in last place in the court of public opinion.

    I’d be more concerned about Anthropic both being in the good graces of the public and having access to all of our computers indirectly with Claude Code.

    • 0x3f 6 hours ago
      OpenAI has ~30x the userbase of Anthropic.
      • aduffy 5 hours ago
        I'm not sure how much of that converts to revenue. If it's free plan users, that's just cost. You can say what you want about "creating a training data moat" but that doesn't seem like it's prevented the other labs from putting out excellent models.
        • 0x3f 5 hours ago
          Well we were talking about power and reputation and being well-known and all that. Being more ubiquitous is surely a big part of that. GP seems to think Anthropic is doing better because of the DoD thing. In my estimation, 90% of people do not care about that at all.
      • ninjahawk1 4 hours ago
        They’re all in the negative excluding subsidies; hardcore coders are more valuable than high schoolers cheating on homework.
      • hellojimbo 5 hours ago
        Around the same revenue, due to Anthropic's strong enterprise strategy.
        • 0x3f 4 hours ago
          Perhaps, but I'd venture the ear of the regime is even more valuable.
    • estearum 6 hours ago
      makes sense if you think the point of journalism is just to take everyone down a notch instead of... um... informing the public of bad actors

      "the local drug-dealing pimp is so passe, we need to investigate the most upstanding members of the community just to be sure" is a frankly insane strategy

  • quantified 5 hours ago
    A bit of a feeling of "so what" here. Maybe he's less trustworthy than some. We have people of X trustworthiness running the government, crypto exchanges, a certain space exploration and satellite company, social media companies, and so on. We know their trustworthiness. Isn't the real issue how to cope?
    • boc 4 hours ago
      What's the point of living in an advanced society if you just sit around watching it decay around you? Our ancestors fought for our indifference today, and with attitudes like yours we'll watch our children fight for it again tomorrow.
      • quantified 2 hours ago
        What's your proposal? We already knew he's about as trustworthy as the others, and it sounds like you agree. What are you doing about them? Legally or illegally?

        Mostly we don't need 3,000 words on how untrustworthy he is. We could use 3,000 words on how to remove his influence.

    • Boxxed 5 hours ago
      Your point is that it's ok he's untrustworthy because lots of people in power are?
      • JumpCrisscross 4 hours ago
        > Your point is that it's ok he's untrustworthy because lots of people in power are?

        It's...weirdly a valid question. If Sam fibs as much as the next guy, we don't have a Sam problem. Focussing on him alone is, best case, a waste of resources. Worst case, it's distracting from real evil. If, on the other hand, as this reporting suggests, Sam is an outlier, then focussing on him does make sense.

      • quantified 2 hours ago
        Not sure where I said it's OK? Please point it out.

        We have to deal with it. Or are you suggesting we should purchase a controlling interest and vote him off the board?

      • TheOtherHobbes 4 hours ago
        No, it's that the entire ecosystem is rotten to the core, and it actively selects, rewards, and protects flawed personality types.

        And when you're dealing with a potential existential threat, this is an existential problem.

        • Rury 38 minutes ago
          I don't disagree, but at some point I think people need to understand that we're dealing with laws of nature here. I mean, just look at human history - this has been a problem since the dawn of civilization...

          I think if you truly understand social contract theory, how hierarchies are formed, and political theory, you'll realize that oligarchies tend to be nature's equilibrium point for settling social disputes, and that all forms of government, regardless of what they claim to be, naturally devolve towards them, since they tend to represent the highest social-entropy (i.e., equilibrium) state. That's not to say you can't move further away from that point and towards another (supposedly ideal) form of government - you absolutely can, but it takes work. Perpetual work, which no set of "rules" can spare people from having to do in order to sustain it.

          The problem, however, is that most people get complacent. They eventually tire of that work, or are ignorant of it, and in doing so create a power vacuum which allows things to slide back towards that state.

          And so, people must decide for themselves which of several possible avenues to pursue:

          #1 - Try to convince others (the masses) to join and work together to take power back from the few

          #2 - Find a way to join the ranks of the elite few (where, thanks to the prisoner's dilemma, unscrupulous means tend to perform better in the short term, even if at the cost of the long term - and if the elite is already corrupt, well, cooperating with it works well)

          #3 - Settle for their lot in life

          Unfortunately, #1 is a difficult proposition, given that it requires winning agreement among many while many others choose to remain in camp #3 (out of complacency or ignorance). And #2 is often easier done without moral integrity, especially since the behavior of those in camp #3 only helps enable these realities. This is why I think the "ecosystem", as you say, will always tend this way - towards society being controlled by an elite few who are rotten.

          Robert Michels realized this, dubbed it the Iron Law of Oligarchy, and embraced his own version of #2 for himself - although he came to this conclusion through his own observations and reasoning, rather than through historical political theory.