One day, you won't be able to delete your social network account anymore. There will be a delete button, but the account will stay, and it will keep posting after you're gone. It won't care whether you're doing something else entirely or whether you're dead; the show will go on.
The shareholders will be content, because they see value in that. The users might not, but not many of them are actual humans anymore; nowadays they're mostly AI. Who has time to read and/or post on social media? Just ask your favorite AI what the hottest trends on social networks are; it should suffice to scratch the itch.
I'm having trouble finding it now but I recall a mostly dead physics forum using LLMs to make new posts under the names of their once prolific users. So this has already happened at least on a small scale.
It seems nuts to me shareholders would be happy about a bunch of fake users, at least ones that don't have any money.
We crawled the Internet, identified stores, found item listings, extracted prices and product details, consolidated results for the same item together, and made the whole thing searchable.
And this was the pre-LLM days, so that was all a lot of work, and not "hey magic oracle, please use an amount of compute previously reserved for cancer research to find these fields in this HTML and put them in this JSON format".
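To give a flavor of what "a lot of work" meant: the pre-LLM version of that extraction was essentially hand-written parsing rules per store. A minimal sketch in Python, with the class names and HTML shape entirely made up for illustration:

```python
from html.parser import HTMLParser
import json

class PriceParser(HTMLParser):
    """Toy extractor for a hypothetical store whose listings look like
    <h1 class="title">...</h1> and <span class="price">$12.99</span>.
    Real crawlers needed one rule set like this per merchant."""
    def __init__(self):
        super().__init__()
        self.fields = {}
        self._capture = None  # which field the next text chunk belongs to

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "")
        if tag == "span" and "price" in classes:
            self._capture = "price"
        elif tag == "h1" and "title" in classes:
            self._capture = "title"

    def handle_data(self, data):
        if self._capture:
            self.fields[self._capture] = data.strip()
            self._capture = None

p = PriceParser()
p.feed('<h1 class="title">Widget</h1><span class="price">$12.99</span>')
print(json.dumps(p.fields))  # {"title": "Widget", "price": "$12.99"}
```

Multiply that by every store's markup quirks, plus de-duplication of the same item across stores, and the engineering effort adds up fast.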
We never really found a user base, and neither did most of our competitors (one or two of them lasted longer, but I'm not sure any survived to this day). Users basically always just went to Google or Amazon and searched there instead.
However, shortly after we ran out of money and laid off most of the company, one of our engineers mastered the basics of SEO, and we discovered that users would click through Google to our site to an item listing, then through to make a purchase at a merchant site, and we became profitable.
I suppose we were providing some value in the exchange, since users were visiting our item listings, which displayed prices from all the various stores selling the item, rather than a naked redirect to Amazon or whatever. But we never turned any significant number of these click-throughs into actual users, and boy howdy was that demoralizing as the person working on the search functionality.
Our shareholders had mostly written us off by that point, since comparison shopping had proven itself to not be the explosive growth area they'd hoped it was when investing, but they did get their money back through a modest sale a few years later.
Someone created a tiktok account using my email address. Tiktok won’t let me delete the account without first verifying it with my phone number. I refuse to give tiktok my phone number because I don’t want my phone tied to social media. I don’t have tiktok (or any other social media accounts) and don’t look at it. But I’m stuck getting several email notifications a day from them.
Not quite what you’re saying, but a couple of steps in that direction.
Someone signed up for a Walmart account with my email address. Once every few weeks they order either sex toys, Dolly Parton paraphernalia, or beef jerky in incredible quantities, or some combination of the above, and I get the email receipt.
I am never, ever requesting that they delete the account.
Report the emails as spam, report the sender address to spamhaus. When enough people do this and tiktok's emails stop getting delivered, a one-click unsubscribe button in the email body that actually works will very quickly be born.
I made a tiktok account to write a comment on a video I hated. Now when I sign in again I am presented with lots of awful videos from the guy I dislike. I cannot delete my viewing history using the website, and following other accounts doesn't remove the obsession TikTok has with always showing me his videos by default.
I'm not installing the app, so the only way around this is to delete my account completely.
> the only way around this is to delete my account completely
You can choose the option to tell TikTok you are 'not interested' in videos like these, or block the account entirely. There are legitimate criticisms about social media algorithms, but I don't understand why you jump to the conclusion that you have to delete your account.
Just recently, Twitter started making the default view "For You" instead of "Following" with no way to switch back. Fortunately there's an extension that fixes that and lets you eliminate the For You view entirely.
I don't know; I've heard for years that everything you write will be on the internet forever, but from my experience it's the opposite. I tried looking into my old Blogger, Photobucket, or AIM conversations, and they're nowhere to be found.
Sure, maybe they exist on some corporate servers from when the companies were sold for scraps. And I suppose if I became famous and someone wanted to write an exposé about my youthful debauchery, they might dig it up. But for all practical purposes, all this stuff has disappeared. Or maybe not. How much do we know about the digital presence of someone like the guy who shot Trump, or the Las Vegas shooter? Or maybe it's known but hidden? I'm impressed that Amazon has my very first order from over 10 years ago, but that's just not par for the course.
Why would AI steal my identity and post as me? I'm not that interesting.
My data is just not that valuable, and I imagine that within the next 5-10 years AI will be trained almost entirely on synthetic data.
About 20 years ago, my name showed up on a handful of websites that I could find. It was related to school activities I participated in. It used to surprise me back then.
Even my damn personal website was in the top 5 Google results for my name, despite no attempt at SEO and no popularity.
Today those sites are all gone, and it's as if I no longer exist according to Google.
Instead, a new breed of idiots with my name have their lives chronicled. I even get a lot of their email because they can't spell their name properly. One of them even claimed that they owned my domain name in a 3-way email squabble.
Individual actions like this will never do anything, because the average person is not going to spend hours upon hours investigating platforms. They just want an easy way to connect with their friends and family, follow artists, etc.
Which is why I think the only solution has to come at the governmental regulatory level. In "freedom" terms it could be framed as "freedom from", as in freedom from exploitation, unlawful use of data, etc., but unfortunately "freedom to" seems to be the most corporate-friendly interpretation of freedom.
>They just want an easy way to connect with their friends and family
You'd be surprised how many people in your life can be introduced to secure messaging apps like Signal (which is still centralized, so not perfect, but a big step in the right direction compared to Whatsapp, Facebook, etc) by YOU refusing to use any other communication apps, and helping them learn how to install and use Signal.
I got my parents and siblings all to use Signal by refusing to use WhatsApp myself. And yet all of them still use WhatsApp to communicate among each other. They have Signal installed, they have an account, they know how to use it, and yet they fall back to WhatsApp. Some people really do want to choose Hell over Heaven.
The platforms sell the convenience that one "only" has to write the post, yet the internet needs so much metadata that they try to autogenerate it instead of asking for it. People are already put off by the need to write a bloody subject line for an email; imagine if they were shown what the "content" actually is.
About convincing people: get the few that matter on Delta Chat, so they don't need anything new or extra; it's just email on steroids.
As for Mastodon: it's still someone else's system; there's nothing stopping those nodes from adding AI metadata either.
Delta.Chat is really underappreciated, open-source and distributed. I recommend you at least look into it.
Signal, on the other hand, is a closed "open source" ecosystem (you cannot run your own server or client), requires a phone number (still -_-), and the open-source part of it does not have a great track record (I remember periods when, for example, the server was not updated in the public repo).
But yeah, if you want the more popular option, Signal is the one.
I don't even know what Delta Chat is; however, Signal was suspected from the start of being developed by the NSA (read the story about the founder and the funding from the CIA) and later received tens of millions of USD each year from the US government to keep running. So it is never an advisable option when the goal is to acquire some sense of privacy.
Nowadays even YouTube comments are more anonymous than using "deltachat" or "signal". In the first case there is zero verification of their claims; in the second case there is plenty of evidence of funding from the CIA.
At least commenting from an unknown account on any random youtube video won't land you immediately at a "Person of Interest" list and your comments will be ignored as a drop of water inside an ocean of comments.
This is such a recurring topic that it might be better for me to one day write a blog post that collects the details and sources.
In absence of that blog post:
Start at the beginning: how Moxie left Twitter, where he was director of security (a company in no way focused on privacy at the time), to found the Whisper Foundation (if memory serves me, that's the right name). His seed funding came from Radio Free Asia, which is a well-known CIA front for financing their operations. The guy is a surfing fan, so he decided to invite crypto experts to surf with him while brainstorming the next big privacy-minded messenger.
So he used his CIA money to pay for everyone's trip and surfing in Hawaii, which by coincidence also happens to be the exact location of the headquarters of an NSA department responsible for breaking privacy-minded algorithms (notably, Snowden worked there and siphoned data for a while).
Anyway: those geeks somehow happily combined surfing with deep algorithm development in a short time and came up with what would later be known as "Signal" (by the way, "signal" is a well-known keyword in the intelligence community; again, a coincidence). A small startup was founded, and shortly afterwards a giant called WhatsApp decided to apply the same encryption from an unknown startup to the billion-person audience of their app. Something that for sure happens all the time, and for sure without any of the backdoors that had been developed in Hawaii for decades before any outsiders discovered them.
Only Tor and a few new tools remain funded; Signal was never really a "hit" because most of its (target) audience insists on using Telegram. WhatsApp, which uses the same algorithm as Signal, admitted recently (this year) that internal staff had access to the supposedly encrypted message contents, so there go any hopes for privacy from a company that makes its money selling user data.
Most people write to be read. Sure, I can write on my own blog, but no one would read it (not that my social media is much more worth reading, though).
Plus, what about videos? How is a non-tech savvy creator going to host their content if it's best in video format?
I left Insta the day FB bought it; closed my FB, twitter, and Google accounts a couple of years later; WA was the hardest to leave, I'll grant. Since I left, I've used: phone; email; Signal; Telegram; letters; post cards; meeting up in person; sms; Mastodon; tried a couple of crypto chats. There are so many options it's not worth worrying about.
In the cases of special interest groups (think school/club/street/building groups), I just miss out, or ask for updates when I meet people. I am a bit out of the loop sometimes. No-one's died as a result of my leaving. When someone did actually die that I needed to know about, I got a phone call.
Honestly... just leave. Just leave. It's not worth your time worrying about these kind of "what ifs".
How about close friends who live on the other side of the world?
Telegram and Signal are, to me, about as trustworthy as WhatsApp. Well, actually, nobody really uses Signal, and Telegram is about the same as WhatsApp so who cares.
Waiting to meet my friends once every 1-2 years is not enough. I want to chat daily with them, because they are my close friends.
Daily telephone conversations with a group of them? Nope. Snail mail? It doesn't work for daily conversation.
> I already felt immense pain and anger by the decision of my husband to suddenly end our marriage. And now I feel a double sense of violation that the men who design and maintain and profit from the internet have literally impersonated my voice behind the closed doors of hidden metadata to tell a more palatable version of the story they think will sell.
That's a bit dismissive of women, does she think that women aren't capable of designing and maintaining software too?
It's easier to swallow when you can blame a group of people for things that are bad. You can other them and not sit with the possibility that people who look like you (maybe even are you) can also do things that harm others. I couldn't do something bad, I'm not one of _them_.
You see this later as well when she slyly glides over women who do what her husband did. When her husband decided to end their marriage, it was representative of men. When women do it, it's their choice to make.
It makes perfect sense if you include the two sentences before your quote:
> We already know that in a patriarchal society, women’s pain is dismissed, belittled, and ignored. This kind of AI-generated language also depoliticizes patriarchal power dynamics.
A man does something bad: it's the fault of patriarchy. A woman does something bad: it's also men's fault, because patriarchy made her do it. Either way you cannot win with a person like that. I think I understand why the husband wanted a divorce.
I disagree with her argument as well but it’s a huge leap from that to “I understand why the husband wanted a divorce.” That’s a pretty shitty thing to say (especially given the trauma of the divorce she writes about) and has nothing at all to do with what she’s saying.
It's the bigotry of low expectations that the right often accuses the left of (arguably justifiably). Each side has their shibboleths and hypocrisies, and this is a very "left" one. Everything is the fault of the "other", in this case "all men", apparently.
As someone else said, the red flags of insufferability abound here, first and foremost with announcing something like this which is as personal and momentous as it is, on public social media.
I'm a little bit confused about what's going on here. Is this nothing more than an LLM-generated summary of her post? She shows the metadata but also shows it coming up in the post. I don't use any of these apps so I'm not really sure what a normal user would have seen. ie, would that text have been appended visibly to her post, making users think she wrote that, but also have been in tags which would have optimized for search engines?
Either way, I don't know what to tell people. Social media exists to take advantage of you. If you use it, your choices are "takes more advantage" vs. "takes less advantage," but that's as good as it gets.
It looks like it's a third-party UI, her Mastodon client, using the description metadata in a way that kind of makes it look like that metadata is part of the post.
Auto-generating said description tag in the first person is a bit of a weird product decision - probably a bad one that upsets users more than it's useful - but the presentation layer isn't owned by Meta here.
Thanks for the explanation, that makes a lot of sense. I'll bet that when it's not a sensitive topic, this totally goes unnoticed by a lot of users. Frustratingly, I would imagine that the response from most people would just be that the LLM summarizations / metadata tagging should be censored in "sensitive cases," but will otherwise be accepted by the user base.
It's unacceptable that Meta did something like this.
But this doesn’t change the fact that she shouldn’t share anything personal on social media. Consider social media the new "streets": a street with dim lights, or an alley you go to at 3am to shout something or show your images/videos to strangers. This is exactly what you should keep in mind before you share anything personal on social media.
And either way, who wants to be an unpaid Meta employee that provides any kind of content for free?
I'll lay odds that the Meta employee who made the decision to do this has an HN account. I notice how quickly this story is descending through the pages. It's already off the front page.
> I share my pain publicly as a gesture of solidarity with other people, but especially women, who have been profoundly traumatized by those they thought they could love and trust.
This is about her husband divorcing her. I find this to be a very unfair way to frame someone else's decision to not spend their life with you anymore. Your partner does not owe you a relationship. Interestingly it is not even me coming up with the word "framing". She herself describes her Instagram post as deliberate framing.
She also claims that the AI chose words dismissive of her pain because she is a woman (rather than just because it's fake-positive corpo slop) and does not substantiate that in any way.
I'm all against this AI slop BS, especially when it's impersonating people. The blog post is mostly not about that.
If anything there's an interesting angle in the facts of this story about a new form of "mansplaining," but it's the algorithm doing "robosplaining" for the human race.
There is a part of the marriage vows where a loving couple promise each other "til death do us part"... it's selfish to the max to go back on a promise like that for a reason outside of your partner's control... after retyping this a dozen times to stamp the snark out, I am now genuinely curious as to what has reversed the victim role in your mind...
People being allowed to part ways and not having to stick with their partner until death is one of the great achievements of feminism. It goes both ways.
You cannot control that you will love someone forever, so you cannot promise that. What you can promise someone is that you plan on spending the rest of your life with them and that you have so much love that you trust it will last forever. Sometimes that does not work out. That is no one's fault and no one owes to anyone to stay together with a person they no longer love.
> People being allowed to part ways and not having to stick with their partner until death is one of the great achievements of feminism.
And it has been one of the greatest mistakes humanity has ever made. If there is a good reason, sure, you cannot be expected to live with someone who has been cruel or irresponsible towards you. But no-fault divorce just because you got bored? Fuck off, you made a commitment at the time. Relationships do take work, always have and always will. Especially when there are children a no-fault divorce is pure selfishness.
With that said, we only know one side of this story, so I'm not going to argue for either side in this particular case. I'm talking in general here.
That's just not how humans work. Love can fade. People change. It's the natural course of things. Sometimes there is just no one at fault for love being lost and no way to prevent that. We just gave up the illusion that love in marriage is always forever.
The problem here is the usage of "no-fault". It can be interpreted differently by everyone.
Does fault only include cheating? Can the fault be on the same one who initiated the divorce? What if the fault is simply that someone has changed so much that they're no longer compatible with the person they fell in love with before? The fault could be on oneself without any inkling of infidelity.
Til death do us part has been ironically dead for decades now since people have been divorcing at high rates for long enough that it doesn't really mean much anymore, and that's okay. Things change.
It might be painful short term, but excellent long term. Many people have already realized they gave away control over many aspects of their lives, especially the most important one, attention, to big corporations who ruthlessly exploit whatever they can. Many people have already quit Facebook and the like; the ones who remain are bound to experience quite a few surprises.
“I posted content to a proprietary social network, then got upset when it generated a page description with AI”
Sure, the description is garbage, it may not be obvious it’s not written by the user, but people need to understand what partaking in closed and proprietary social media actually means. You are not paying anything, you do not control the content, you are the product.
If you don’t enjoy using a service that does this to the content you post then don’t use that service.
I’ll stick to this point only even if I feel that there are other things in the post that are terribly annoying.
When the behavior is not only something you "don't like" but is also (as this woman perceives it) a professional threat (she makes a living out of carefully choosing her words; she felt this attributed to her words she would never have said) and furthermore is unexpected, simply quietly leaving the platform seems insufficient. One ought to warn other users about the unexpected dangerous practice -- which is precisely what this article accomplishes!
The misleading aspect is that the AI generated content was in first person, so any reasonable reader would falsely attribute the statement to the person involved, when in fact it was concocted entirely by Meta's AI.
I’ve been noticing DuckDuckGo search results increasingly frequently doing this. They used to either use the <meta name=description> (which is subject to abuse by the site) or show an excerpt from the page text highlighting the keyword matches (which is often most helpful), but from time to time now I see useful meta descriptions or keyword matches sidelined in favour of what I presume is Microsoft-generated clickbaity slop of a “learn more about such-and-such” kind, occasionally irrelevant to the actual article’s text or even inconsistent with it.
> Because what this AI-generated SEO slop formed from an extremely vulnerable and honest place shows is that women’s pain is still not taken seriously.
Companies putting words in people's mouth on social media using "AI" is horrible and shouldn't be allowed.
But I completely fail to see what this has to do with misogyny. Did Instagram have their LLM analyze the post and then only post generated slop when it concluded the post came from a woman? Certainly not.
Obviously I am putting words in the author's mouth here, so take with a grain of salt, but I think the reasoning is something like: such LLM-generated content disproportionately negatively affects women, and the fact that this got pushed through shows that they didn't take those consequences into account, e.g. by not testing what it would look like in situations like these.
> Did Instagram have their LLM analyze the post and then only post generated slop when it concluded the post came from a woman? Certainly not.
I actually am sympathetic to your confusion—perhaps this is semantics, but I agree with the trivialization of the human experience assessment from the author and your post, but don't read it as an attack on women's pain as such. I think the algorithm sensed that the essay would touch people and engender a response.
--
However, I am certain that Instagram knows the author is a woman, and that the LLM they deployed can do sentiment analysis (or just call the Instagram API and ask whether the post is by a woman). So I don't think we can somehow absolve them of cultural awareness. I wonder how this sort of thing influences its output (and wish we didn't have to puzzle over such things).
That’s a pretty horrifying story, and Meta’s crassness is kind of stunning. It sort of reminds me of the old “Clippy Helps with A Suicide Note” meme.
> My story is absolutely layered through with trauma, humiliation, and sudden financial insecurity and I truly resent that this AI-generated garbage erases the deliberately uncomfortable and provocative words I chose to include in my original framing.
I truly feel for her, and wish her luck. Also, I feel that, of any of the large megacorps, Meta is the one I would peg to do this. I’m not even sure they feel any shame over it. They may actually appreciate the publicity this generates.
I’m thinking that Facebook could do something like slightly alter the text in your posts, to incite rage in others. They already arrange your feed to induce “engagement” (their term for rage).
For example, if you write a post about how you failed to get a job, some “extra spice” could be added, implying that you lost to an immigrant, or that you are angry at the company that turned you down, as opposed to just disappointed.
Meta added it in the "<meta>" tag (no pun intended), which is intended for search engines. And some other app crawled it and displayed it as the main text. Not defending Meta, but the text is not visible in Instagram or any other Meta app.
og:description is exactly the meta tag to use for link descriptions in embeds. Not all meta tags are only for search engines. The app acted correctly here.
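For reference, a link-preview client typically scrapes Open Graph tags out of the page's `<head>` to build the embed. A minimal sketch of that behavior (the tag contents here are invented, not taken from the actual post):

```python
from html.parser import HTMLParser

class OGParser(HTMLParser):
    """Collects Open Graph <meta property="og:..."> tags,
    the way a link-preview embedder would."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        prop = a.get("property", "")
        if prop.startswith("og:"):
            self.og[prop] = a.get("content", "")

head = """
<head>
  <meta property="og:title" content="A post">
  <meta property="og:description" content="AI-generated summary goes here">
  <meta name="description" content="search-engine description">
</head>
"""
p = OGParser()
p.feed(head)
print(p.og["og:description"])  # AI-generated summary goes here
```

Note that the client only sees whatever Meta put in `og:description`; it has no way to know the text was machine-generated rather than written by the author, which is exactly how the first-person slop ends up looking attributed to her.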
It's not uncommon. My cousin sent out a Christmas card announcing her divorce - I think it stops a lot of 1-1 conversations with people which can be quite draining when you're already pretty raw.
What is the alternative to announcing a divorce? Keeping it secret? Not using social media to communicate?
In this case she explicitly did NOT make any mention of the divorce on social media when her husband first sprung it on her, nor during the process. She wrote this piece after it had been finalized.
I guess a private announcement makes more sense to people than a public announcement, unless you wanted to make a blog post about a phenomenon related to it, which she appears to be trying to do
I haven't posted on IG for years, but I look at it sometimes and see that a slop description is added below some (not all) posts. I assumed it was something creators had added manually, but now you are telling me that Facebook does it automatically?
In the very first place: what's the freaking point of announcing the divorce on social media??? Why? Especially on social media run by people known for having problems with moral behavior (ask the Winklevoss brothers), where 3/4 of the platform is either scam/fraud or infomercial.
There are people who love divorces, love interacting with them, and love watching people go through them. It’s a cottage market in gossip futures. Social media is designed around gossip futures by people with questionable character. So you answered your own question ;) more shit piled onto the heap!
2. You a have a public image that includes you being married and social media is one of the main channels over which you reach the people who know you. Now you get divorced and you do not want these people to have the false image of you being happily married and potentially even getting comments referencing your marriage anymore.
Could be a lot easier to rip off the band-aid all at once rather than pen a hand written note to dozens or more mutuals with subtle hints of "please stop sending couples invitations to social events"
This article confirms all the reasons I stay away from social media platforms. What happened in this situation is awful. It also makes clear that even where legal bounds may have been crossed, it doesn’t really matter because who has the time, energy, and financial resources to challenge them? The big platforms know this and will continue to exploit not just user-created content, but the user’s own hard-earned reputation in order to feed more drivel to the masses.
Another thing I've noticed recently: on YouTube, my feed is suddenly full of AI fakes of well-known speakers like Sarah Paine, an eminent historian who talks about Russia and the like. There's all this slop with her face speaking, titled "Why Putin's War Was ALWAYS Inevitable - Sarah Paine" but with AI-generated words. They usually say somewhere in the small print that it's an AI fan tribute, but it's all a bit weird.
(update they now say 'video taken down' but were there for a while)
Surely, if the slop is generated by looking at the image and the text, then it seems someone could manipulate it into hallucinating all manner of wonderful things.
What was supposed to be the important part, "AI bad"? The author is not some clueless pedestrian; they are clearly online enough to be fully aware that all social media companies treat their users like cattle. So why the Pikachu face when Instagram (of all things!) does something it is designed to do: squeezing every last bit of value from its digital serfs?
Llama was not great; it was barely good. It wasn't very smart or creative and had its guardrails cranked up to 11. Local models didn't get interesting until Mistral and China entered the game. Meta still hasn't released its image models, which have been trained on tens of thousands of my photos.
Users are $$$. Nobody wants to talk about which are human and which aren’t. It’s all a game of hot potato.
Do not try LinkedIn. Not even once.
They track and log every reel viewed.
I suppose everyone does it but actually seeing it is another level of creepy.
Via discounts, promo codes, gamification, whatever else they’re using today to get people to install their apps and sign over their privacy.
Sure, maybe they exist on some corporate servers somewhere, sold off for scraps along with the companies. And I suppose it could matter if I became famous and someone wanted to write an exposé about my youthful debauchery, but for all practical purposes all this stuff has disappeared. Or maybe not. How much do we know about the digital presence of someone like the guy who shot at Trump, or the Las Vegas shooter? Or maybe it's known but hidden? I'm impressed that Amazon has my very first order from over 10 years ago, but that's just not par for the course.
Why would AI steal my identity and post as me? I'm not that interesting.
My data is just not that valuable, and I imagine that within the next 5-10 years AI will be trained almost entirely on synthetic data.
Even my damn personal website was in the top 5 Google results for my name, despite no attempt at SEO and no popularity.
Today those sites are all gone and it’s as if I no longer exist according to Google.
Instead a new breed of idiots with my name have their life chronicled. I even get a lot of their email because they can’t spell their name properly. One of them even claimed that they owned my domain name in a 3-way email squabble.
I almost no longer exist and it’s kinda nice.
Only PeopleFinder and such show otherwise.
I keep trying to convince people not to use Instagram, WhatsApp, Facebook, Twitter/X, but I'm not getting anywhere.
Write your own content and post it on your own terms using services that you either own or that can't be overtaken by corporate greed (like Mastodon).
Which is why I think the only solution has to come at the governmental regulatory level. In “freedom” terms it could be framed as “freedom from”, as in freedom from exploitation, unlawful use of data, etc., but unfortunately “freedom to” seems to be the most corporate-friendly interpretation of freedom.
You'd be surprised how many people in your life can be introduced to secure messaging apps like Signal (which is still centralized, so not perfect, but a big step in the right direction compared to Whatsapp, Facebook, etc) by YOU refusing to use any other communication apps, and helping them learn how to install and use Signal.
The platforms sell the convenience that one “only” has to write the post. Yet the internet needs so much metadata that they try to autogenerate it instead of asking for it. People are already put off by the need to write a bloody subject line for an email; imagine if they were shown what the “content” actually contains.
About convincing: get the few who matter onto Delta Chat, so they don't need anything new or extra - it's just email on steroids.
As for Mastodon: it's still someone else's system, and there's nothing stopping those nodes from adding AI metadata either.
And other mastodon servers, just like other email servers, can of course still modify the data they receive how they'd like.
Signal, on the other hand, is a closed “open source” ecosystem (you cannot run your own server or client), still requires a phone number (-_-), and the open-source part of it does not have a great track record (I remember periods when, for example, the server was not updated in the public repo).
But yeah, if you want the more popular option, Signal is the one.
Would this depend on threat model?
At least commenting from an unknown account on a random YouTube video won't immediately land you on a “Person of Interest” list, and your comments will be ignored like a drop of water in an ocean of comments.
And where can I find such a story from a trustworthy source? A quick Google search rather turned up this:
https://euvsdisinfo.eu/report/us-intelligences-services-cont...
(Debunking it as Russian information warfare)
In the absence of that blog post:
Start at the beginning: how Moxie left his role as director of cyber at Twitter (a company nowhere near focused on privacy at the time) to found the Whisper Foundation (if memory serves, that's the right name). His seed funding came from Radio Free Asia, a well-known CIA front for financing their operations. The guy is a surfing fan, so he decided to invite crypto experts to surf with him while brainstorming the next big privacy-minded messenger.
So he used his CIA money to pay for everyone's trip to surf in Hawaii, which by coincidence also happens to be the location of the headquarters of an NSA department responsible for breaking privacy-minded algorithms (notably, Snowden worked there, and siphoned data from there, for a while).
Anyway: those geeks somehow happily combined surfing with deep algorithm development in a short time and came up with what would later be known as “Signal” (btw, “signal” is a well-known keyword in the intelligence community, again a coincidence). A small startup was founded, and shortly after, a giant called WhatsApp decided to apply the same encryption from this unknown startup to the billion-person audience of their app. Something that is for sure very common to happen, and for sure without any backdoors of the kind developed in Hawaii for decades before any outsiders discovered them.
Signal kept being advertised over the years as "private" to the tune of 14 million USD in funding per year provided by the US government (CIA) until it ran out some two years ago: https://english.almayadeen.net/articles/analysis/signal-faci...
Only TOR and a few new tools remain funded; Signal was never really a “hit” because most of their (target) audience insists on using Telegram. WhatsApp, which uses the same algorithm as Signal, recently admitted (this year) that internal staff had access to the supposedly encrypted message contents, so there go any hopes for privacy from a company that makes its money selling user data.
Plus, what about videos? How is a non-tech savvy creator going to host their content if it's best in video format?
I'm with you, but WhatsApp is tough. How do you keep in touch?
In the cases of special interest groups (think school/club/street/building groups), I just miss out, or ask for updates when I meet people. I am a bit out of the loop sometimes. No-one's died as a result of my leaving. When someone did actually die that I needed to know about, I got a phone call.
Honestly... just leave. Just leave. It's not worth your time worrying about these kinds of “what ifs”.
Telegram and Signal are, to me, about as trustworthy as WhatsApp. Well, actually, nobody really uses Signal, and Telegram is about the same as WhatsApp so who cares.
Waiting to meet my friends once every 1-2 years is not enough. I want to chat daily with them, because they are my close friends.
Daily telephone conversations with a group of them? Nope. Snail mail? It doesn't work for daily conversation.
So WhatsApp it is!
That's a bit dismissive of women; does she think that women aren't capable of designing and maintaining software too?
You see this later as well when she slyly glides over women who do what her husband did. When her husband decided to end their marriage, it was representative of men. When women do it, it's their choice to make.
> We already know that in a patriarchal society, women’s pain is dismissed, belittled, and ignored. This kind of AI-generated language also depoliticizes patriarchal power dynamics.
A man does something bad, it's the fault of patriarchy. A woman does something bad, it's also men's fault, because patriarchy made her do it. Either way you cannot win with a person like that. I think I understand why the husband wanted a divorce.
As someone else said, the red flags of insufferability abound here, first and foremost with announcing something like this which is as personal and momentous as it is, on public social media.
Either way, I don't know what to tell people. Social media exists to take advantage of you. If you use it, your choices are “takes more advantage” vs. “takes less advantage,” but that's as good as it gets.
Auto-generating said description tag in the first person is a bit of a weird product decision - probably a bad one that upsets users more than it's useful - but the presentation layer isn't owned by Meta here.
But this doesn’t change the fact that she shouldn’t share anything personal on social media. Consider social media the new “streets”: a dimly lit street or an alley you walk down at 3am, shouting something or showing your images and videos to strangers there. This is exactly what you should keep in mind before you share anything personal on social media.
And either way, who wants to be an unpaid Meta employee that provides any kind of content for free?
HN doesn't even have a downvote button.
It’s fascinating to see which stories take a dive.
This is about her husband divorcing her. I find this to be a very unfair way to frame someone else's decision to not spend their life with you anymore. Your partner does not owe you a relationship. Interestingly it is not even me coming up with the word "framing". She herself describes her Instagram post as deliberate framing.
She also claims that the AI chose words dismissive of her pain because she is a woman (rather than just because it's fake-positive corpo slop) and does not substantiate that in any way.
I'm all against this AI slop BS, especially when it's impersonating people. The blog post is mostly not about that.
That would probably be her default position: whoever it is did not sufficiently empathize, and only "I" can be the judge of what sufficient means.
You cannot control that you will love someone forever, so you cannot promise that. What you can promise someone is that you plan on spending the rest of your life with them and that you have so much love that you trust it will last forever. Sometimes that does not work out. That is no one's fault and no one owes to anyone to stay together with a person they no longer love.
And it has been one of the greatest mistakes humanity has ever made. If there is a good reason, sure, you cannot be expected to live with someone who has been cruel or irresponsible towards you. But no-fault divorce just because you got bored? Fuck off, you made a commitment at the time. Relationships do take work, always have and always will. Especially when there are children a no-fault divorce is pure selfishness.
With that said, we only know one side of this story, so I'm not going to argue for either side in this particular case. I'm talking in general here.
Does fault only include cheating? Can the fault be on the same one who initiated the divorce? What if the fault is simply someone has changed so much that they're no longer compatible with person they fell in love with before? The fault could be on oneself without any inkling of infidelity.
Til death do us part has been ironically dead for decades now since people have been divorcing at high rates for long enough that it doesn't really mean much anymore, and that's okay. Things change.
But I'd pay for a social media site that respected my preferences / content choices and had everyone using real names / validated and so on.
Sure, the description is garbage, it may not be obvious it’s not written by the user, but people need to understand what partaking in closed and proprietary social media actually means. You are not paying anything, you do not control the content, you are the product.
If you don’t enjoy using a service that does this to the content you post then don’t use that service.
I’ll stick to this point only even if I feel that there are other things in the post that are terribly annoying.
I guess it should have been marked clearly as such.
Companies putting words in people's mouth on social media using "AI" is horrible and shouldn't be allowed.
But I completely fail to see what this has to do with misogyny. Did Instagram have their LLM analyze the post and then only post generated slop when it concluded the post came from a woman? Certainly not.
Major citation needed
I actually am sympathetic to your confusion. Perhaps this is semantics, but I agree with the author's (and your) assessment that it trivializes the human experience; I just don't read it as an attack on women's pain as such. I think the algorithm sensed that the essay would touch people and engender a response.
--
However, I am certain that Instagram knows the author is a woman, and that the LLM they deployed can do sentiment analysis (or just call the Instagram API and ask whether the post is by a woman). So I don't think we can somehow absolve them of cultural awareness. I wonder how this sort of thing influences its output (and wish we didn't have to puzzle over such things).
> My story is absolutely layered through with trauma, humiliation, and sudden financial insecurity and I truly resent that this AI-generated garbage erases the deliberately uncomfortable and provocative words I chose to include in my original framing.
I truly feel for her, and wish her luck. Also, I feel that, of any of the large megacorps, Meta is the one I would peg to do this. I’m not even sure they feel any shame over it. They may actually appreciate the publicity this generates.
I’m thinking that Facebook could do something like slightly alter the text in your posts, to incite rage in others. They already arrange your feed to induce “engagement” (their term for rage).
For example, if you write a post about how you failed to get a job, some “extra spice” could be added, implying that you lost out to an immigrant, or that you are angry at the company that turned you down, as opposed to just disappointed.
Many apps, like Slack and LinkedIn, use it to display a link card with a description.
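For context, those link cards are typically built from Open Graph meta tags in the page's HTML (e.g. `og:title`, `og:description`), which is where an auto-generated first-person summary would end up being read from. A minimal illustrative sketch of how a preview-card renderer extracts them, using only Python's standard library (the sample HTML and class name are made up, not any particular app's implementation):

```python
from html.parser import HTMLParser

class OGParser(HTMLParser):
    """Collects Open Graph properties from <meta property="og:..."> tags."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)  # html.parser gives (name, value) tuples
        prop = attrs.get("property", "")
        if prop.startswith("og:") and "content" in attrs:
            self.og[prop] = attrs["content"]

# Hypothetical page source, as a crawler or chat app would fetch it.
page = """
<html><head>
  <meta property="og:title" content="My Post" />
  <meta property="og:description" content="An auto-generated summary." />
</head><body>...</body></html>
"""

parser = OGParser()
parser.feed(page)
print(parser.og["og:description"])  # An auto-generated summary.
```

Whatever string the platform puts in that `content` attribute is what Slack, LinkedIn, and the rest display under the link, which is why the site, not the previewing app, controls the wording.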
In this case she explicitly did NOT make any mention of the divorce on social media when her husband first sprung it on her, nor during the process. She wrote this piece after it had been finalized.
Unlikely.
https://web.archive.org/web/20251222092511/https://eiratanse...
Almost no one posting to Reddit's AITA (Am I the Asshole) expects to hear that they are wrong.
This is also how echo chambers form.
1. Attention.
2. You have a public image that includes you being married, and social media is one of the main channels through which you reach the people who know you. Now you get divorced, and you don't want these people to keep a false image of you being happily married, or to keep getting comments referencing your marriage.
Another thing I've noticed recently on YouTube: suddenly my feed is full of AI fakes of well-known speakers, like Sarah Paine, an eminent historian who talks about Russia and the like. There's all this slop with her face speaking, titled “Why Putin's War Was ALWAYS Inevitable - Sarah Paine”, but with AI-generated words. They usually say somewhere in the small print that it's an AI fan tribute, but it's all a bit weird.
(Update: they now say “video taken down”, but they were there for a while.)
All that sweet, sweet innovation!