Why I Joined OpenAI

(brendangregg.com)

170 points | by SerCe 12 hours ago

41 comments

  • brendangregg 10 hours ago
    To answer a few people at once: I did mention compensation as a factor in the post, but I didn't elaborate on the details, so it was easy to miss. Comp is important of course, but so are the other factors. It feels like I can't go a day without reading about the cost of AI datacenters in the news, and I can do something about it.
    • artninja1988 1 minute ago
      I use LLMs daily and they've become an essential part of my life. Really hoping your work can help make LLMs and AI more broadly cheaper and more abundant.

      Would be fantastic if you can find a way to make the optimizations you find available more openly. The whole ecosystem benefits when efficiency improvements are shared. Looking forward to seeing where this goes, and don't let the negativity from some get to you.

    • rybosworld 4 minutes ago
      Hey Brendan - first time listener, first time caller.

      Inferring the overall tone from the comments, I think the folks here are struggling with what sounds like a logical fallacy from someone who is certainly a logical thinker.

      > how I could lead performance efforts and help save the planet.

      The problem on the face of it being: Performance gains will not translate to less energy usage (and by extension less heat released into the atmosphere). Rather, performance gains will mean that more effective compute can be squeezed from the existing hardware.

      If performance gains translate to better utilization of the hardware, it also follows that it will translate to more money for the company, allowing for the purchase of more GPUs. Ad infinitum.

      My stance is that this is just businesses doing what they do. It's always required regulation to slow down the direct/indirect negative byproducts (petro companies being the most obvious example). I don't see how AI would inherently be different.

      Is there another angle that I (we) am (are) missing where the performance efficiencies translate to net benefits for the planet?

    • brendangregg 9 hours ago
      Again, many comments here saying I only care about the money, and while comp is an important factor I think it characterizes me as someone I'm not, and forgets what I've been doing for the past two decades. I've spent thousands of hours of my life writing textbooks for roughly minimum wage, as I want to help others like me (I came from nothing, with no access to tech meetups or conferences, and books were the gateway to a better job). I've published technologies as open source that have allowed others to make millions and are the basis for many startups. I'm also helping pioneer remote work and hoping to set a good example for others to follow (as I've published about before). So I think I'm well known for caring about a lot of things during the past couple of decades.
      • altmanaltman 6 hours ago
        It's okay to want to make money. You don't really have to justify it this hard unless you want people to really think you don't think comp is important, which is a bit sus to be fair.
        • matwood 5 hours ago
          Even if people don't want it to be about the money, it's still about the money because of the world we live in. Good vibes don't pay the mortgage or put food on the table. More money equates to better health and future outcomes for a person and their family, so how couldn't it always be about the money?

          Of course once someone has money they can say it's not about the money, but that privilege is literally bought with...money.

          • motbus3 52 minutes ago
            It is ok to make money, as long as it does not involve working with evil stuff.
        • brendangregg 5 hours ago
          When did I say I don't think comp is important?
          • nlitened 5 hours ago
            Sir, your top three comments in this comment section are about how unimportant the compensation package is to you
          • yosefk 5 hours ago
            Thank you very much for your work. I think people envious of someone's compensation don't deserve a response
          • wakkawukka 5 hours ago
            Thermodynamics though.

            Reducing runtime energy use over years won't really make up for the resource use that goes into building the data center. It's just moved around, similar to how Elon moves around money as needed to bolster the financials of a particular project.

            Like with the airline industry, it's not just the smog they blow on our food. Drink carts, seat belts, barf bags all have a resource-intensive energy and materials pipeline.

            Every server screw and power cable adds up.

          • mettamage 3 hours ago
            From my reading of what you said, you think comp is important and so are other things. You outlined those a bit too, but I already forgot them.

            Quite frankly, I think some people here are too quickly spooked and think what you say is sus. I simply see that as a sign that they aren't fully having a good faith discussion. Or they simply read things way differently than I do.

            I'm simply writing this because I think there are enough people who read what you wrote the same way I did. They just don't mention it, unlike the people who feel "outraged" (a bit too dramatic of a term, but English is my 2nd language). "Outraged" people simply seem more vocal to me.

            For clarity: I feel neutral about this whole thing. I do appreciate the work you've done in the past.

      • pillefitz 6 hours ago
        The issue is that you're doing a lot, but not saving the planet.

        What do you think is happening with the efficiency gains? You're making rich people richer and helping AI to become an integral (i.e. positive ROI from business perspective) part of our lives. And that's perfectly fine if it aligns with your philosophy. It's not for quite a few others, and you not owning up to it leads to all kinds of negativity in the comments.

        • trhway 6 hours ago
          >What do you think is happening with the efficiency gains?

          may it happen that the efficiency gains decrease demand and thus postpone investment into and development of new and better energy sources? If one couldn't get by just by bringing in 20 trucks with gas turbines, maybe he would have invested in fusion development :)

          • nextaccountic 4 hours ago
            > may it happen that the efficiency gains decrease demand

            What mechanism would make this happen?

            Demand could decrease if AI became worse, but efficiency doesn't make AI worse - it actually makes it possible at all to run bigger, better models (see the other comment with a link to Jevons paradox), which increases, not decreases, demand (more powerful models may have new capabilities that people want to use)

            Alternatively, AI demand could decrease through political pressure (either anti-AI sentiment gains a foothold with the public, and/or government regulation strangles demand in the sector like it did for, e.g., the tobacco industry). But another way to reap the benefits of more efficient AI datacenters is to make them a talking point on how AI environmental impacts can be mitigated, which could curb anti-AI sentiment.

            Either way, those possibilities don't decrease demand for AI - they are either neutral, or increase demand instead.

          • T-A 5 hours ago
            > may it happen that the efficiency gains decrease demand

            https://en.wikipedia.org/wiki/Jevons_paradox

      • solarengineer 2 hours ago
        Hi Brendan,

        I am a long time fan, I have the physical copy of each and every book that you have authored, I have watched each and every video that you are in, and I walk team members and clients through your USE method at every engagement I am on.

        I would say to you that the phrase "make the world a better place" has been excessively misused. Even the TechCrunch episode of Silicon Valley parodies how anything and everything is intended to "make the world a better place".

        Please reconsider your use of the phrase given the well-earned negativity around it.

      • gizmodo59 1 hour ago
        It’s not a crime if you do something for money. Those who comment are likely doing the same, but they couldn’t get into a company like OpenAI, hence the hatred! Keep doing the great work you always did! Excited to see what you'll do with all the resources in the world.
      • bigtones 6 hours ago
        Great to see you're in Sydney Brendan, and let the haters hate.

        You have done a brilliant job elevating your chosen specialty to the world, and encouraging and inspiring others in the industry for a long time - so you should be fairly compensated for that lofty position. I don't envy the late nights or very early mornings you have ahead of you on conference calls with SF, but good luck at OpenAI mate !

      • snayan 9 hours ago
        I mean, I don't know you well, but I see your posts on here from time to time, and from what I gather you are very, very exceptional at what you do.

        Reality is, these AI giants are here and they are using a massive amount of resources. Love them or hate them, that is where we are. Whether or not you accept the job with them, OpenAI is gonna OpenAI.

        Given how much the detractors scream about resource use, you'd think they'd welcome the fact that someone of your calibre is going in and attempting to make a difference.

        Which leads me to believe you're encountering a lot of projecting from people who perhaps can't land the highest of comp roles, and shield their egos by ascribing to the concept of it being selling out, which they would of course never do.

        • gsf_emergency_6 8 hours ago
          It's probably impossible to prove I'm not projecting..

          However. I am putting my curious foot forward here:

            What were the toughest ethical quandaries you faced when deciding to join OpenAI?
          
          To give a purely hypothetical example which is probably not relevant to your case: if I had to choose between DeepSeek and OpenAI, I think I would struggle with the openness of the weights..
      • belter 7 hours ago
        Brendan, your work has been transformative. I own all your books and have probably read every technical blog post twice.

        I hope there will be harder problems waiting for you than using flamegraphs to optimize GenAI porn.

        https://www.axios.com/2025/10/14/openai-chatgpt-erotica-ment...

      • alephnerd 9 hours ago
        Ignore the haters (who sadly have become extremely common on HN now).

        I loved your work back when I was an IC, and I'm sure this is a common sentiment across the industry amongst those of us who started systems adjacent! I still refer to your BPF tools and Systems Performance books despite having not written professional code for years now.

        Can't wait to read content similar to what you wrote about when at Netflix and Intel albeit about the newer generation of GPUs and ASICs and the newer generation of performance problems!

      • biggggtalkguy 8 hours ago
        [flagged]
      • lelanthran 4 hours ago
        > I've spent thousands of hours of my life writing textbooks

        I'm surprised at this; all that experience wasn't enough to flag this article as obviously AI generated?

        More to the point, with all that experience you still weren't able to issue prompts to make the output sound different from generic AI slop?

    • motbus3 53 minutes ago
      Read this, Gregg. I'm the first one to always bring up your work and books in any comments related to you or your work here. But...

      This is a company which, at the first opportunity, stopped doing open research, cut open source contributions, converted itself to a for-profit after years of fiscal benefits, scrapped its ethics committee, and removed all the engineers who opposed any of this.

      Don't come with the excuse that any of this work is being done for the betterment of something.

      One should never project one's own expectations onto another, but I feel disappointed. It's seeing the guy I watched grow from his first posts now working for an evil machine of his own volition.

      Do what you want. But that's how I feel about this disheartening news.

    • politelemon 10 hours ago
      It would be good if the performance improvements could be applied across the industry so everyone benefits. But it doesn't sound unbelievable that OpenAI may want to keep some of them secret to keep an advantage over others?
      • yunohn 3 hours ago
        Unbelievable? It’s unfathomable - out of all existing AI companies, OpenAI is the least open of them all. They have stopped contributing any useful research into the public domain. Even infamous “villains” like Meta and China are doing leaps and bounds more compared to /Open/AI and the like.
    • bahmboo 10 hours ago
      Thanks for taking the risk in this environment and posting about your experience from a personal standpoint. [environment: people will come at you from all angles with very passionate opinions]
    • AnonHP 10 hours ago
      I’m replying to your comment in the hopes of getting a response. In the blog post, you said:

      > There's so many interesting things to work on, things I have done before and things I haven't.

      What are the things you haven’t done before, if you could mention them?

    • robotpepi 47 minutes ago
      you're so fake, please stop it
    • journal 7 hours ago
      I feel like I can do something about something too but no one is picking me to do anything about anything.
    • jonesetc 7 hours ago
      >Did fixing it from the inside work for any of those other issues?

      No, it never does. Those people somehow delude themselves into thinking it might, but...it might just work for us.

    • vasco 7 hours ago
      Turn them off!
    • jcgrillo 10 hours ago
      Interesting. Out of curiosity, how long do you think OpenAI can survive as a company? Put another way, what would be your guesses for probability of failure on 1yr, 3yr, and 5yr horizons?

      EDIT: possibly a corollary--does Mia pay money for ChatGPT or use a free plan?

      • brendangregg 5 hours ago
        As an engineer I can't comment on future company predictions. It sounds like a question for Sam Altman, as he has discussed risks in the past.

        My wife was paying for ChatGPT before I joined. I didn't ask Mia. I probably have three months of hair growth before my next chance to ask.

    • kgraves 10 hours ago
      > I stood on the street after my haircut and let sink in how big this was, how this technology has become an essential aide for so many, how I could lead performance efforts and help save the planet.

      Brendan.

      First of all congratulations on your new job. However,

      It is easier to just say to everyone it is about the money, compensation and the stock options.

      You're not joining a charity, or to save the planet, this company is about to unload on the public markets at an unfathomable $1TN valuation.

      Don't insult your readers.

    • username223 10 hours ago
      [flagged]
      • gghffguhvc 10 hours ago
        I believe him. I don’t know him personally but his blog posts pop up here from time to time and this feels genuine to me.
        • buzzerbetrayed 9 hours ago
          You believe someone taking a fat paycheck isn’t doing it for the fat paycheck?

          Wanna buy a bridge?

          • surajrmal 9 hours ago
            Humans are complex and have multiple sources of motivation. You don't know whether he took the offer with the highest pay. He's likely wealthy enough that he can pay less attention to his income and focus on his other sources of motivation if he wants to. That's not to say pay is not a factor in his choice, but it need not be the only or primary one. This is a luxury of the privileged for sure, which can make it difficult to relate to.
    • PunchTornado 2 hours ago
      [flagged]
    • DeepYogurt 10 hours ago
      You gonna open source it?
  • Banditoz 11 hours ago
    > ...it's not just about saving costs – it's about saving the planet

    There's something that doesn't sit right with me about this statement, and I'm not sure what it is. Are you sure you didn't just join for the money? (edit: cool problems, too)

    • pyrale 6 hours ago
      Probably because "making the world a better place" has been a trope used so much in the industry that it's made it to a TV show [1]. It's fine to be passionate about your job. It's fine to be paid well. You don't need to make us believe that you're Mother Teresa on top of it.

      [1]: https://www.youtube.com/watch?v=B8C5sjjhsso

      • trvz 55 minutes ago
        You should stop using Mother Teresa this way.
      • gbbloke 6 hours ago
        What a gem of a TV show.
      • 7bit 2 hours ago
        They don't need you to believe it. They just need to hear themselves say it.
    • deng 22 minutes ago
      The greatness of human accomplishment has always been measured by size. The bigger, the better. Until now. Nanotech. Smart cars. Small is the new big. In the coming months, Hooli will deliver Nucleus, the most sophisticated compression software platform the world has ever seen. Because if we can make your audio and video files smaller, we can make cancer smaller. And hunger. And AIDS.

      Gavin Belson

    • robby_w_g 10 hours ago
      Reminds me of when I was younger and thought of companies like Google and Tesla as a force for good that will create and use technology to make people's lives better. Surely OpenAI and these LLM companies will change the world for the better, right? They wouldn't burn down our planet for short-term monetary gain, right?

      I've learned over the years that I was naive and it's a coincidence if the tech giants make people's lives better. That's not their goal.

      • ahoka 3 hours ago
        Could US tech companies stop making the world a better place? Like how Airbnb made housing markets "better" and Facebook made politics "better"? We barely have anything left as regular people as our new feudal lords capture everything they can.
        • pas 24 minutes ago
          Airbnb made a very constrained market more efficient; the downsides are classic NIMBY factors. (Which are important, but also nothing has been solved in cities that outlawed Airbnb.)

          on the other hand Facebook made the internet hate machine more efficient :(

    • its-kostya 10 hours ago
      Right? Like what an incredibly naive thing to think, that BG is going to contain power consumption lmao. OpenAI is always going to run their hardware hot. If BG frees up compute, a new workload will just fill it.

      Sure you might argue "well if they can do more with less they won't need as many data centers." But who is going to believe that a company that can squeeze more money from their investment won't grow?

      Tangentially, I am looking forward to learning about the new innovations that come from this problem space. [Self-righteous] BG certainly is exceptional at presenting hard topics in an approachable and digestible manner. And now it seems he has unlimited funds to get creative.

      • tayo42 10 hours ago
        They're going to grow either way. Those new workloads are going to be run.
        • its-kostya 10 hours ago
          Ya, we know. Just humbling the author ;)
    • Thaxll 10 hours ago
      The AI train is going with or without you; if you can be part of it and improve the situation, why not?
      • lm28469 3 hours ago
        if you can be part of it and take a fat check!
    • petterroea 10 hours ago
      Even a 25% reduction in resource usage will probably not be enough; AI datacenters are still a huge resource sink, after all.
      • raincole 5 hours ago
        If you reduce energy consumption of training a new model by 25%, OpenAI will just buy more hardware and try to churn out a new model 25% faster. The total consumption will be exactly the same.

        And they're 100% justified to do so, until they hit another bottleneck (when there is literally not that much Nvidia hardware to buy, for example.)

      • _heimdall 1 hour ago
        There's no gain to be had there at all. Any optimizations that reduce resource usage per output will be gobbled up by just making more output.

        OpenAI released an open source model only because they are capped on growth right now by the amount of hardware they have. Improve resource efficiency and you better believe they'll just crank up use of said resources until they're capped again.

      • skybrian 9 hours ago
        I imagine there's a lot more to be gained than that via algorithmic improvements. But at least in the short term, the more you cut costs (and prices), the more usage will increase.
    • wheelerwj 11 hours ago
      I stopped reading just after that. “I joined Philip Morris to make cigarette smoking safer…”

      The problems are interesting and the pay is exceptional. Just fucking own it.

      • selectodude 10 hours ago
        He interviewed everywhere and took the biggest offer. Good! Don’t piss on my face and tell me it’s raining.
        • ahf8Aithaex7Nai 10 hours ago
          It's raining anyway. If I piss on your face, I can at least try to make the experience as positive as possible for you.
    • biggggtalkguy 9 hours ago
      [flagged]
      • seanhunter 7 hours ago
        Firstly, you would do well to read the guidelines about avoiding snark, and then actually say whatever it is you’re trying to say rather than make insinuations. As is, this response comes across as a very shallow read. It’s hard to get to the root of what you’re actually saying in your post other than it quotes two paragraphs about how it’s not fun to push through the bureaucracy of a large organisation, which - I would agree. Probably most people who’ve worked at a big company would.

        So why does that make him a “big shot”? Are you perhaps envious of him?

        Why does openAI deserve him or anyone? Hard to say.

        • trvz 51 minutes ago
          [flagged]
    • lm28469 3 hours ago
      [flagged]
    • mewpmewp2 11 hours ago
      [flagged]
  • perf99999999999 7 hours ago
    Brendan, I'm a big fan of your book, and work. I don't have a problem with you joining OpenAI; best of luck there!

    However, I'm not sure your analysis is quite correct, in this case.

    If OpenAI can mobilize X (giga)dollars to buy Y amounts of energy, your work there will not reduce X or Y, it will simply help them produce more "tokens" (or whatever "unit of AI") for a given amount of energy.

    So in a sense you're helping make OpenAI tools better, more effective, but it's not helping reduce resource usage.

    https://en.wikipedia.org/wiki/Jevons_paradox

    • bagacrap 35 minutes ago
      One day if OpenAI becomes a real company (and public), like the kind that takes money from customers and employs accountants and turns a profit, etc., there may be downward pressure on the "costs" side of the equation.

      Also while the thirst for training may be insatiable, I could see the energy cost of "hey chat can you check the basketball score" coming down.

    • ulnarkressty 5 hours ago
      Was going to say the same thing, but I'm pretty sure he already knows that. Smart people can convince themselves of anything.
    • pillefitz 6 hours ago
      And the consequence of burning more tokens, of course, is more widespread adoption, weaving AI more deeply into the fabric of our reality.
      • bigyabai 6 hours ago
        That's a possible second-order effect, but not guaranteed.
  • padolsey 4 hours ago
    The AI industry, and SV tech generally, has a pattern of recruiting talent by flattering people's self-image as builders and discoverers, which makes it psychologically very difficult for those people to reckon honestly with downstream harm.
  • indigodaddy 10 hours ago
    This article is so full of itself I can hardly stand to read it. I had to just sort of skim it instead. Sorry! This style just doesn't do it for me.
    • notepad0x90 10 hours ago
      It's a blog post, not an article. A narrative of events, not an interesting write-up on a topic.
    • biggggtalkguy 9 hours ago
      Not the first time either. See this person's previous blog post when leaving his earlier company. Lots of Kim Kardashian vibes of self-inflated self-worth.
      • pstuart 4 hours ago
        The guy is pretty much god-tier in performance engineering -- I'm not seeing the Kardashian vibe at all. There's an element of "dear diary" but it reads (to me) as just trying to catalog what he thinks is important to note.
  • matt_daemon 10 hours ago
    > Mia the hairstylist got to work, and casually asked what I do for a living. "I'm an Intel fellow, I work on datacenter performance." Silence.

    How could she not know?

    • Upvoter33 2 hours ago
      This part of the article was cringe for me. Like he wanted to impress Mia and once she didn’t react he realized he needed to change jobs.

      BG and eBPF are awesome but this article read like a midlife crisis to me.

    • Insanity 10 hours ago
      For people whose main computing devices are phones, this isn’t hard to believe at all.

      Interacting outside of the tech bubble is eye opening. Conversely, the hair stylist might have mentioned the brand of a super popular scissor supplier/other equipment you’d have never heard of.

  • import 2 hours ago
    As a big fan of yours: there are a lot of things that feel off in the post, and as others mentioned, it feels like you’re trying to convince yourself that you’re going to save the world, but everyone knows it’s something else.
  • FattiMei 47 minutes ago
    I found it funny that the hairstylist provided a pretty dystopian reason to use ChatGPT... it seems that you are trying to please your new employer... Nevertheless, I respect performance work and I'm studying for something similar. I hope to land a job in HPC.
  • lxrogers 20 minutes ago
    According to most in the industry, the cheaper AI is, the more of it we will need. So to actually reduce the energy used by AI, you should try to make it as inefficient as possible.
  • selfawareMammal 4 hours ago
    > it's not just about saving costs – it's about saving the planet. I have joined OpenAI to work on this challenge directly.

    I couldn't go on reading.

    • buran77 1 hour ago
      I'm fine with people never justifying their personal choices. It's their business. But if they do bother to justify it, then it's a show they put on for me. And reading this kind of explanation feels like the showrunner takes me for a fool. The net result is that I lose all respect for the person.

      Unless they put on a show for themselves and that's who they try to fool. Probably why nobody mentions money in these shows. They're self motivational.

    • bspammer 3 hours ago
      Plenty of other hints too

      > Do anything, do it at scale, and do it today

      > It's not just GPUs, it's everything.

      > I'm not the first, I'm just the latest.

    • wscott 1 hour ago
      That is normal on his blog. He is a brand that he has developed over many years, and he is constantly promoting that brand.

      Yes, he has done a lot of good work in the past, but he has put as much effort into self-promotion and landed a series of interesting and well-paying gigs.

      I can't blame him for that. It just makes me tired to watch.

      • jsnell 59 minutes ago
        What the OP was pointing out is two typical tells for lazy ChatGPT-generated text, right in the intro. (The m-dash, "it's not just X, it's Y").

        Of course that kind of heuristic can have false positives, and not every accusation of AI-written content on HN is correct. But given how much stuff Gregg has written over the years, it's easy to spot-check a few previous posts. This clearly isn't his normal style of writing.

        Once we know this blog was generated by a chatbot, why would the reader care about any of it? Was there a Mia, or did the prompt ask for a humanizing anecdote? Basically, show us the prompt rather than the slop.

    • robotpepi 37 minutes ago
      Me neither. Are all these self-declared fans commenting here real? I hope they're bots.
    • throwa356262 4 hours ago
      Reminds me of the TechCrunch episode of Silicon Valley TV show. Everyone was there to make the big buck but all collectively pretended they were doing their work for the good of humankind.

      This guy and Rob Pike should have a talk.

      • heeton 3 hours ago
        That episode, and this Gavin quote, encapsulate the attitude perfectly.

        “I don't want to live in a world where someone makes the world a better place, better than we do.”

      • pelagicAustral 2 hours ago
        "Making the world a better place by constructing elegant hierarchies for maximum code reuse and extensibility"

        Beautiful satire in that show. I'm still throwing my own version of this quote every now and again at the office.

    • Phelinofist 43 minutes ago
      Agree, that statement sounds a bit like gaslighting yourself
    • moomoo11 3 hours ago
      I meet people like this irl. I block them.
      • H8crilA 2 hours ago
        Do you think some of them are honestly like that? I can never quite figure out how many levels of irony^H^H^H delusion there are. Spoken as a person that would totally have his job, but just because it most certainly pays plenty and is likely fun to do.
    • allovertheworld 3 hours ago
      [flagged]
    • unfunco 2 hours ago
      [flagged]
  • amluto 11 hours ago
    > She was worried about a friend who was travelling in a far-away city, with little timezone overlap when they could chat, but she could talk to ChatGPT anytime about what the city was like and what tourist activities her friend might be doing, which helped her feel connected. She liked the memory feature too, saying it was like talking to a person who was living there.

    This seems rather sad. Is this really what AI is for?

    And we do not need gigawatts and gigawatts for this use case anyway. A small local model or batched inference of a small model should do just fine.

    • georgemcbay 9 hours ago
      > And we do not need gigawatts and gigawatts for this use case anyway. A small local model or batched inference of a small model should do just fine.

      I guess I'm a dinosaur but I think emailing the friend to ask what they are actually up to would be even better than involving an LLM to imagine it.

      Asynchronous human to human communication is a pretty solved problem.

      • lufenialif2 7 hours ago
          A commonly cited use case of LLMs is scheduling travel, so being able to pretend it’s somebody somewhere else is for sure important to incentivize going somewhere!
    • mschild 6 hours ago
      > A small local model or batched inference of a small model should do just fine.

      Or, you know, Signal/Matrix/WhatsApp/{your_preferred_chat_app}. If you're already texting things, might as well do that.

    • fragmede 2 hours ago
      It's sad that we aren't all rich enough to have a personal assistant to tend to our sides 24/7? I mean, it seems more useful than, say, cruise ships, but they get to exist.
      • _heimdall 1 hour ago
        Things don't "get to exist"; that implies that there's a person or people who have decided not to use some power of theirs to make all cruise ships disappear.

        The GP quote also wasn't about a personal assistant use case, it was about filling a hole in personal connection. It's sad because today we are increasingly having fewer human connections and more digital, aka fake, ones.

    • UltraSane 10 hours ago
      I use it as something to talk to about incredibly nerdy and/or obscure things no one else would be willing to talk about.
      • rotis 4 hours ago
        Asking ChatGPT about the safety of someone traveling instead of asking that person is the nerdy thing to do. Somehow a hairstylist doesn't invoke the image of a nerd in me. That is why I find this story implausible.
      • boxedemp 5 hours ago
        Same. I have a lot of ideas I like to explore that people find boring or tedious. I used to just read, but it's pleasant to have the option to play with those thoughts more.
        • fragmede 2 hours ago
          I love pitching scifi premise/half book ideas at ChatGPT and having it write short stories that end the way I want them to, dammit.
      • manuelmoreale 6 hours ago
        That’s honestly just sad. Not the fact that you're doing it, but rather the fact that you have nobody to talk to about those things.
        • rkomorn 6 hours ago
          I feel like this kind of response is a good example of why someone wouldn't talk to others about things.
    • peyton 10 hours ago
      It’s super dope, and you can have it talk to people for you in the local language when you go there. I’ve busted it out to explain what I’m thinking for me. Watching travel shows on TV or reading travel magazines is sadder.
  • fulafel 4 hours ago
    Saving the planet and the Trump alliance (https://www.theverge.com/ai-artificial-intelligence/867947/o...) don't seem to really jibe.
  • ahf8Aithaex7Nai 9 hours ago
    Apparently, there's this guy who's really good at optimizing computer performance and makes a lot of money doing it. At the same time, he writes mediocre school essays that are actually a bit embarrassing. Guys, if you have the opportunity to land a very well-paid job, then do it. Take the money. Live your life. But please spare us the public self-castration.
  • SanjayMehta 10 hours ago
    [flagged]
  • kopollo 5 hours ago
    Could you please provide information about the efficiency optimization you plan to implement?
  • pyrale 7 hours ago
    Strong LinkedIn vibes in this entry.
  • stonecharioteer 1 hour ago
    I want to be as good as you at performance engineering. It's the direction I want for my career.

    Something tells me that in a year we'll see a post about why you left OpenAI.

    Sama won't listen to anyone. That's why. None of these CEOs are going to listen.

  • tominous 9 hours ago
    Performance and efficiency are important, but we need you to invent the monitoring tools and visualisations that will underpin alignment!
  • thinkingkong 11 hours ago
    Brendan can do whatever he wants. He's that good. If anybody seriously needed to interview him 20+ times to figure it out, then the burden is now on them to not fuck it up.
    • ojbyrne 11 hours ago
      The article says "I ended up having 26 interviews and meetings (of course I kept a log) with various AI tech giants."

      I don't think that indicates that any one company interviewed him 20+ times.

    • sgarland 10 hours ago
      Seriously. I would expect him to be more of an offer-only scenario.
    • 7e 11 hours ago
      He's summing interviews across all AI giants. But the ones about to IPO can interview someone almost infinitely many times, because everyone wants on the bandwagon.
  • testfrequency 1 hour ago
    If OpenAI is responsible for “saving the planet” we are so fucked.

    We are currently fucked as well, to be clear, as people genuinely have this mindset disconnected from reality.

  • puttycat 6 hours ago
    > it's not just about saving costs – it's about saving the planet.

    You're in for a surprise buddy.

  • light_triad 10 hours ago
    Mia was right. Listen to Mia
  • jhhh 5 hours ago
    Did the article intentionally start with an LLM cliche to filter out all the people who hate reading obviously generated content? I would say it worked.
    • laluser 5 hours ago
      I have been attempting to write a lot more with AI, but it's so gimmicky. It's always spitting out lines like this: "it's not just about x – it's about y.", like in this post. I find it so frustrating that no matter the prompt I throw at it, it eventually repeats itself again after some time. Good technical and succinct writing is almost impossible to iterate on with AI for me.
      • estearum 1 hour ago
        I spend a lot of time on LinkedIn due to job and I have an actual physical gag reflex at this point now from the "it's not X, it's Y" pattern.

        I didn't know it was possible for a sentence structure to cause such a thing.

        • icepush 1 hour ago
          It's not the sentence structure—it's the lack of sincerity
      • xnxnxkx 4 hours ago
        [flagged]
    • mawadev 4 hours ago
      I like how my eyes went over the first sentence, barely parsing it and already discarding the information, because it's obviously AI generated. It's like the circumstances we live in added a new layer of perception to my brain to guard itself against the flood of useless information!
      • Nasrudith 3 hours ago
        It isn't AI generated, it is just a plain vacuous cliche. Seriously, what is with people who think "they can always tell it is AI", when really AI is living rent-free in their head and they fixate on anything they don't like and are oh so convinced it must be the AI they hate? They're exactly like Fundamentalists and the devil. Or Communists and how they think capitalism literally intentionally created everything as harmful as possible just to spite them.
    • raincole 4 hours ago
      I really hope it's intentional. The author is a smart, accomplished person. He even published books. It's sad if this kind of person thinks it's okay to just outsource their writing to AI.
      • gritspants 1 hour ago
        Reads like a love letter to his new employer. Hopefully it earns him points there. I admire his work and will just pretend I never read this.
      • dvfjsdhgfv 2 hours ago
        [flagged]
    • jofzar 5 hours ago
      [flagged]
  • I_am_tiberius 11 hours ago
    If it's in your power, make sure user prompts and llm responses are never read, never analyzed and never used for training - not anonymized, not derived, not at all.
    • surajrmal 11 hours ago
      No single person other than Sam Altman can stop them from using anonymized interactions for training and metrics. At least in the consumer tiers.
    • satvikpendem 10 hours ago
      It's a little too late for that, all the models train on prompts and responses.
  • dforsythe 11 hours ago
    [flagged]
  • zombiwoof 11 hours ago
    [dead]
  • rvz 11 hours ago
    [flagged]
    • patrickaljord 11 hours ago
      Unless OpenAI goes with a very liberal definition of AGI, he's going to wait decades for AGI.
      • Insanity 10 hours ago
        They’re already trying to redefine the AGI playing field by doing so.
    • thefounder 10 hours ago
      I think OpenAI will IPO at $1T. I don’t want to say bubble, but it could be one of these super-hyped stocks that never goes anywhere after the IPO (i.e. Airbnb during Covid).
      • Kina 7 hours ago
        I believe that OpenAI wants to IPO at that valuation. I don’t think it can IPO.
  • jasonvorhe 2 hours ago
    [flagged]
  • r33b33 6 hours ago
    [flagged]
  • falloutx 2 hours ago
    [flagged]
  • brendangreggg 10 hours ago
    [dead]
  • dgoxow 3 hours ago
    [flagged]
  • yomismoaqui 4 hours ago
    [flagged]
  • throwawee 2 hours ago
    [flagged]
  • zeroonetwothree 6 hours ago
    [flagged]
  • moltar 4 hours ago
    [flagged]
  • bilekas 2 hours ago
    [flagged]
  • LittlePeter 4 hours ago
    [flagged]
  • llmslop 5 hours ago
    [flagged]