Lovable is marketed to non-developers, so its core users wouldn't recognize a security flaw if it flashed red. A lot of my non-dev friends were posting the cool new apps they built on LinkedIn last year [0]. Several were made on Lovable. It's not on those users to understand these flaws.
The apps all look the same with a different color palette, and they make for an engaging AI post on LinkedIn. Now they are mostly abandoned, waiting for the subscription to expire... and their personal data to get exposed, I guess.
Developers with decades of experience still introduce basic security holes. The general public is screwed once they start hosting their own apps and serving them on the Internet.
There's something so innocent about the early days when even Microsoft thought we'd be running Personal Web Servers and hosting our own websites in a peer-to-peer fashion.
Although cynically, in 1996 Microsoft would probably tell you anything you wanted to hear if it got you using Internet Explorer.
The Personal Web Server is ideal for intranets, homes, schools, small business workgroups and anyone who wants to set up a personal Web server.
I've always held the belief that we (as programmers and as an industry) failed the initial premise of the "distributed internet". On one hand, the core of the internet (whether it's ARPANET or even TCP/IP) was designed to be fully distributed, trustless, self-hostable, etc. The idea was that if you want email, you do a `pkg_add email`; want a file server, `pkg_add file-server`; want remote access, `pkg_add openssh`, and you're done. But what we have today is [1].
Securing all that got very technical and nuanced, with hundreds of complex scenarios, tools, and protocols. Tech companies raced to produce services the general public can use, hiring hordes of very smart, expensive, technical developers to build and secure them, and they still get it wrong frequently. Meanwhile, the FOSS community adopted the "get good or gtfo" approach, as in [1].
The average person has no chance. That's why closed, walled-garden platforms like iOS and Android are winning.
The hardest part about this stuff is that as a user, you don't necessarily know if an app is vibe-coded or not. Previously, you were able to have _some_ reasonable expectation of security in that trained engineers were the ones building these things out, but that's no longer the case.
There's a lot of cool stuff being built, but also as a user, it's a scary time to be trying new things.
The frequency with which I see contemporary apps updating (sometimes multiple times a day) says there's a change in culture that also makes professionals prone to mistakes.
I get that we'll never ship a perfect release, but if you have to push fixes once a day it seems you've lost perspective.
Vibe-coding sloppiness is more acceptable now because we've lowered our standards.
Devs' newfound ability to patch on the fly is absolutely being overleveraged. It's a wonderful capability to have that can do wonders in terms of disaster mitigation, but it's clearly become a crutch and has resulted in a situation where software has become a horrific amalgamation of haphazardly-developed panic-patches, taking the classic "ball of mud" problem and putting it into overdrive.
Yeah, my trust for new open source projects is in the toilet. Hopefully we will eventually start taking security seriously again after the vibe code gold rush.
Of course there were. Don't be pedantic. Anybody could write a program and put it on the internet. But to get a reasonably polished version with decent features and an enjoyable enough UX for someone to sign up and even pay money for, it generally took people who kind of knew what they were doing.
Of course shortcuts were taken. They always were and always will be. But don't try to compare shipping software today to even just 3 years ago.
Yes - AI has completely destroyed the set of "signals" people used to judge the quality of much software. They were never 100% accurate, sure, but they were often pretty good heuristics for "level of care": what the devs considered important (or didn't consider important) and similar.
And I mean that as both "end user" software signals, and "library" signals for other devs.
I assume that set of signals will slowly be updated. Whether one of them ends up being "any use of AI at all" is still an open question, depending on whether the promised hype actually ends up matching capability as much as anything.
Vibe coding democratized shipping without democratizing the accountability. The 18,000 users absorbed the downside of a risk they didn't know they were taking.
I don't think you know what democracy means. Democracy means that users can reject poorly made apps. If you can't reject or destroy something, it's not a democratic process.
Having someone dump shitty wares onto the public is only democracy if you think being held unaccountable is democratic.
It has a broader meaning of sharing: like when a factory is dumping waste in a river, it is democratizing pollution (i.e., it gets the benefits but everybody pays the cost).
One dev of a Lovable competitor pointed me to the rules that are supposed to ensure queries are limited to that user's data. This seems like "pretty please?" to my amateur eyes.
I've been thinking a bit about how to do security well with my generated code. I've been using tools that check dependencies for CVEs, static tools that check for SQL injection and similar problems, and baking some security requirements into the specs I hand Claude. I can't tell yet if this is better than what I did before or just theater. It seems like in this case you'd need/want to specify some tests around access.
I'm interested to hear how other people approach this.
Ask the LLM to create a POC for the vulnerability you have in mind. Last time I did this I had to repeatedly promise the LLM that it was for educational purposes, as it assumed this information was "dangerous".
Same way you handle any other property you want to preserve while "vibecoding": ensure tests capture it, and ensure the tests can't be skipped. It really is this simple.
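As a sketch of what "ensure tests capture it" can look like for the access-control property discussed in this thread (all names are hypothetical; no specific framework is assumed, just a plain in-memory store standing in for the backend):

```python
# Hypothetical in-memory "notes" store standing in for a real backend.
notes = {
    1: {"owner": "alice", "body": "alice's note"},
    2: {"owner": "bob", "body": "bob's note"},
}

def get_note(note_id, current_user):
    """Return a note only if the requester is authenticated and owns it."""
    note = notes.get(note_id)
    if note is None or current_user is None or note["owner"] != current_user:
        return None  # deny by default
    return note

# Tests that pin down the property. If a refactor (or an AI edit) flips
# the check, these fail instead of the hole silently shipping.
def test_owner_can_read():
    assert get_note(1, "alice") == notes[1]

def test_other_user_cannot_read():
    assert get_note(1, "bob") is None

def test_unauthenticated_cannot_read():
    assert get_note(1, None) is None
```

The point is less the toy store and more that the deny cases (wrong user, no user) are asserted explicitly, so they can't regress unnoticed.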
> One example of this was a malformed authentication function. The AI that vibe-coded the Supabase backend, which uses remote procedure calls, implemented it with flawed access control logic, essentially blocking authenticated users and allowing access to unauthenticated users.
Actually sounds like a typical mistake a human developer would make. Forget a `!` or get confused for a second about whether you want true or false returned, and the logic flips.
The difference is a human is more likely to actually test the output of the change.
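A toy illustration of the flip described above (hypothetical names, not the actual Supabase RPC code): one stray negation inverts the whole policy.

```python
def is_authenticated(user):
    """Stand-in auth check: any non-None user counts as authenticated."""
    return user is not None

# Buggy version: a single stray "not" blocks real users and lets
# anonymous ones through, exactly the inversion described above.
def can_access_buggy(user):
    return not is_authenticated(user)

# Fixed version: allow if and only if authenticated.
def can_access_fixed(user):
    return is_authenticated(user)

assert can_access_buggy("alice") is False  # authenticated user blocked
assert can_access_buggy(None) is True      # anonymous user allowed
assert can_access_fixed("alice") is True
assert can_access_fixed(None) is False
```

A test that exercises both the allow and deny paths catches this immediately; code review alone often doesn't.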
https://www.youtube.com/watch?v=m-W8vUXRfxU
[0]: https://idiallo.com/blog/my-non-programmer-friends-built-app...
https://news.microsoft.com/source/1996/10/24/microsoft-annou...
[1]: https://www.youtube.com/watch?v=40SnEd1RWUU
Companies don't take security seriously now (and didn't before vibe coding either).
> Previously, you were able to have _some_ reasonable expectation of security in that trained engineers were the ones building these things
When was this? What world? Did I skip worldlines? Is this a new Universe?
The world I remember is that anybody could write a program and put it on the Internet. Is this not the world you remember?
Further, when those engineers were "trained" ... were there no data breaches before 2022?
https://github.com/dyad-sh/dyad/blob/de2cc2b48f2c8bfa401608c...