I think there are quite a few misconceptions about F-Droid in the comments:
- you can run your own F-Droid server
In fact it's a basic static HTTP(S) server serving a generated list of .apk files and metadata, so it really doesn't require much.
I think what is concerning to people is that the most popular INSTANCE of F-Droid, the one configured by default when one downloads the F-Droid CLIENT, is "centralized", but again that's a misconception. It's only popular; it's not really central to F-Droid itself. Adding another repository, in F-Droid parlance, is as simple as changing or adding a URL to point at more instances.
That being said, if anybody here would like to volunteer to provide a fallback build system for that popular instance, I imagine the F-Droid team would welcome that with open arms.
I don't think it's necessarily a misconception but rather people having different conceptions of what the term "F-Droid" refers to. It could refer to the client, the server tools, a specific server instance, the project, the collection of applications, or possibly other things.
Some people might use "F-Droid" in the same sense as the main page [1] does, to mean "an installable catalogue of FOSS (Free and Open Source Software) applications", but others in the sense the about page [2] uses it, referring to the "non-profit volunteer project", which is consistent with the project statutes [3]:
> F-Droid is the name of a not-for-profit technical, scientific and creative community effort serving the public benefit.
The documentation start page [4] makes it a bit clearer:
> F-Droid is both a repository of verified free software Android apps as well as a whole “app store kit”, providing all the tools needed to setup and run an app store. It is a community-run free software project developed by a wide range of contributors. It also includes complete build and release tools for managing the process of turning app source code into published builds.
[1] https://f-droid.org/en/
[2] https://f-droid.org/en/about/
[3] https://commonsconservancy.org/dracc/0039/
[4] https://f-droid.org/en/docs/
Likely they are trying to make said list of open-source software easily accessible. The vast majority of users are incapable of compiling their own software. It's probably better (for users' freedom, privacy, and a healthy Android FOSS ecosystem) to have these users obtain software through an F-Droid "app store" than through Google Play.
The goal that you suggest is interesting. It reminds me of Guix, where one can obtain binaries or one can build the entirety of packages oneself. All from the same system.
Perhaps you could share how you are currently building software from source and/or F-Droid?
Is F-Droid intended for "the vast majority of users"?
Is popularity, e.g., user majorities versus user minorities, always equivalent to "importance"? For web traffic and associated data collection, ad services, etc., popularity is obviously important. But what if one is not focused on such things?
Consider the statement "It's only popular, it's not really central for F-Droid itself".
> this server is physically held by a long time contributor with a proven track record of securely hosting services. We can control it remotely, we know exactly where it is, and we know who has access.
I can’t be the only one who read this and had flashbacks to projects that fell apart because one person had the physical server in their basement or a rack at their workplace and it became a sticking point when an argument arose.
I know self-hosting is held as a point of pride by many, but in my experience you’re still better off putting lower cost hardware in a cheap colo with the contract going to the business entity which has defined ownership and procedures. Sending it over to a single member to put somewhere puts a lot of control into that one person’s domain.
I hope for the best for this team and I’m leaning toward believing that this person really is trusted and capable, but I would strongly recommend against these arrangements in any form in general.
EDIT: F-Droid received a $400,000 grant from a single source this year ( https://f-droid.org/2025/02/05/f-droid-awarded-otf-grant.htm... ) so now I’m even more confused about how they decided to hand this server to a single team member to host in unspoken conditions instead of paying basic colocation expenses.
>We worked out a special arrangement so that this server is physically held by a long time contributor with a proven track record of securely hosting services.
Not clear if "contributor" is a person or an entity. The "hosting services" part makes it sound more like a company than a natural person.
There is nothing wrong with hosting prod at home. A free and open source project needs to be as sustainable and low maintenance as possible. Better to have a service up and running than down when the funds run out.
> I know self-hosting is held as a point of pride by many, but in my experience you’re still better off putting lower cost hardware in a cheap colo with the contract going to the business entity which has defined ownership and procedures. Sending it over to a single member to put somewhere puts a lot of control into that one person’s domain.
If they really want to run it out of a computer in their living room they should at least keep a couple servers on standby at different locations. Trusting a single person to manage the whole thing is fragile, but trusting a few people with boxes that are kept up to date seems pretty safe. What are the odds they'd all die together? Paying a colo or cloud provider is probably better if you care about more 9s of uptime, but do they really need it?
> one person had the physical server in their basement
Unless you have even the faintest idea about how F-Droid does it, please stop spreading FUD. All the article says is that it is not a normal contract but a special arrangement where one or a select few have physical access. It could be in a locked basement, it could be in a sealed off cage in a data center, it could be a private research area at a university. We don't know.
A special arrangement with an academic institution providing data center services wouldn't be at all surprising, that has been the case for many large open source projects since long before the term was invented, including Linux, Debian and GNU itself.
Many of these are run by professionals with high standards. The Debian project has done pioneering work with reproducible builds, for example, something the F-Droid project is also very much involved with. Those things are what creates trust in the project.
Yup. But the same can happen in shared hosting/colo/aws just as easily if only one person controls the keys to the kingdom. I know of at least a handful of open source projects that had to essentially start over because the leader went AWOL or a big fight happened.
That said, I still think that hosting a server in a member's house is a terrible decision for a project.
> if only one person controls the keys to the kingdom
True, which is why I said the important parts need to be held by the legal entity representing the organization. If one person tries to hold it hostage, it becomes a matter of demonstrating that person doesn’t legally have access any more.
I’ve also seen projects fall apart because they forgot to transfer some key element into the legal entity. A common one is the domain name, which might have been registered by one person and then just never transferred over. Nobody notices until that person has a falling out and starts holding the domain name hostage.
Ultimately hosting is not the most critical part as long as backups are stored in places other members of the project have access to (and one copy could be in their own home; I don't think the F-Droid repos have grown so big that they can't be hosted on a commodity NAS).
What is usually more critical is who has the credentials for the domain management.
> 400K would go -fast- if they stuck to a traditional colo setup.
I don’t know where you’re pricing colocation, but I could host a single server indefinitely from the interest alone on $400K at the (very nice) data centers I’ve used.
Colocation is not that expensive. I’m not understanding how you think $400K would disappear “fast” unless you think it’s thousands of dollars per month?
I, personally, have a cabinet in a colo. With $400k, I can host it at that datacentre on the income from a risk-free return, never touching the capital, with 10 GigE and 3 kW of power. If I can do it, they can do it.
Modern computers are super efficient. An EPYC 9755 has 128 cores and you can get it for cheap. If you've been doing this for a while you'd have gotten the RAM for cheap too.
If I, a normie, can have terabytes of RAM and hundreds of cores in a colo, I'm pretty sure they can unless they have some specific requests.
And dude, I'm in the Bay Area. Think about that. I'm in one of the highest cost localities and I can do this. I bet there are Colorado or Washington DCs that are even cheaper.
I too am in the Bay Area, and clearly I have been shopping at the wrong colos. I expected to find nothing with unlimited bandwidth for under $1k/mo given past experience with what may have been higher-end DCs.
In any event if I was the volunteer sysadmin that had to babysit the box, I would rather have it at my home with business fiber where I am on premises most of the time because getting in and out of a colo is always a whole thing if their security is worth a damn.
Even given a frugal and accessible setup like that, I can imagine 400k lasting 5 years tops, especially if it's also paying for the volunteer's business fiber, and all the more so given I expect some of it is meant to provide sustainable compensation to key team members as well. Every cent will count.
Stupid question from me: what are their other costs? I'm a total newbie about data center colo setups, but as I understand it, colo includes power and internet access with ingress and egress. Are you thinking their egress will be very high, and thus they'd need to pay additional bandwidth charges?
USD money market funds from Vanguard pay about 3.7% now. Personally, I would recommend a 50/50 split between a Bloomberg Agg bond ETF and a high-yield bond ETF. You can easily boost that yield by 100bps with a modest increase in risk.
Another thing overlooked in this debate: Data center costs normally increase at the rate of inflation. This is not included in most estimates. That said, I still agree with the broad sentiment here: 400K USD is plenty of money to run a colo server for 10+ years from the risk-free interest rate.
For reference, in the US at least, there was/is a company called Joe's Data Center in KC who would colo a 1U for $30 or $40 a month. I'd used them for years before not needing it anymore, so they're not some fly-by-night company (despite the name).
At that rate, that would buy you nearly 1000 years of hosting.
I was trying to avoid naming exact prices because it becomes argument fodder, but locally I can get good-quality colo for $50/month and excellent-quality colocation with high bandwidth and good interconnects for under $100 for 1U.
I really don't know where the commenter above was getting the idea that $400K wouldn't last very long.
400k could get you ten Dell PowerEdges, each with a 128-core CPU, 256GB of RAM, and multiple terabytes of storage, several times over. 400k easily covers two of these machines, and colocation space is about 2k per year.
Cloud hosting only makes sense at a very, very small scale, or absurdly large ones.
Colo is when you want to bring your own hardware, not when you want physical access to your devices. Many (most?) colo datacenters are still secure sites that you can't visit.
I've only ever seen that at data centers that offer colo as more of a side service or cater to little guys who are coloing by the rack unit. All of the serious colocation services I've used or quoted from offer 24/7 site access.
Basically anywhere with cage or cabinet colocation is going to have site access, because those delineations only make sense to restrict on-site human access.
Every colo I've visited has a system for allowing physical access for our equipment, generally during specific operating hours with secure access card.
To be quite honest I've never seen a colo that didn't offer access at all. The cheapest locations may require a prearranged escort because they don't have any way to restrict access on the floors, but by the time you get to 1/4 rack scale you should expect 24/7 access as standard.
I don't think so. I don't think anybody is going to hand off their server and ask someone else to hook it up. Also, you need access so you can troubleshoot hardware issues.
Clearly I don't have an MBA because this mindset doesn't make sense to me. Burning money unnecessarily is burning money unnecessarily, no matter where it's burned.
Ugh. This 100% shows how janky and unmaintained their setup is.
All the hand-waving and excuses around global supply chains, quotes, etc. It took pretty long for them to acquire commodity hardware and shove it in a special someone's basement, and they're trying to make it seem like a good thing?
F-Droid is often discussed in the GrapheneOS community, and the concerns around centralization and signing are valid.
I understand this is a volunteer effort, but it's not a good look.
As someone that has run many volunteer open source communities and projects for more than 2 decades, I totally get how big "small" wins like this are.
The internet is run on binaries compiled in servers in random basements and you should be thankful for those basements because the corpos are never going to actually help fund any of it.
"I understand this is a volunteer effort, but it's not a good look."
I would agree that it is not a good look for this society to lament so much about the big evil corporations and invest so little in the free alternatives.
I don't have a problem with an open source project I use (and I do use F-Droid) hosting a server in a basement. I do have a problem with having the entire project hosted on one server in a basement, because it means that the entire project goes down if that basement gets flooded or the house burns down or the power goes out for an extended period of time, etc.
Having two servers in two basements not near each other would be good, having five would be better, and honestly paying money to put them in colo facilities to have more reliable power, cooling, etc. would be better still. Computer hardware is very cheap today and it doesn't cost that much money to get a substantial amount of redundancy, without being dependent on any single big company.
This sounds reasonable. But this is a build server, not the entire project infrastructure.
I bet the server must be quite powerful, with tons of CPU, RAM, and SSD/NVMe to allow for fast builds. Memory of all kinds was getting more and more expensive this year, so the prolonged sourcing is understandable.
The trusted contributor, as the text says, is considered more trustworthy than an average colocation company. Maybe they have an adequate "basement", e.g. run their own colo company, or something.
It would be great to have a spare server, but likely it's not that simple, including the organization and the trust. A build server would be a very juicy attack target to clandestinely implant spyware.
I concur, and given the amount of apps they build it makes sense to spend the money on a good build server to me, especially if it is someone with experience hosting trusted servers as mentioned as well as a contributor already. If people do not want to use it, the source code to build yourself is still available for the apps they supply.
It is not your bank. You don't need 99.999999999999999% availability from the build server of an app store. Especially if the app packages can still be downloaded from regular HTTPS servers.
As long as you don't need RAM or hard drives. It's getting more expensive all the time too. This isn't the ideal moment to replace a laptop let alone a server.
> this server is physically held by a long time contributor with a proven track record of securely hosting services.
This is effectively a rando's basement. It doesn't matter that they've been a contributor or whatever. Individuals change, relationships sour. Securely hosting how? By locking the front door? By being a random tech company in the midwest? Or by having proper access control?
As a little reminder, F-Droid has _all_ the signing keys on its build server. Compromising that is somewhere between "oh, that's awful" and "stop the world". These builds go out as automatic updates too. So uh, yeah, I'd like it to be hosted by someone serious and not "my buddy Joe who's a sysadmin, don't worry".
The not knowing is the point. From a security perspective, you have to assume the worst.
And maybe that is F-Droid's point: Security through obscurity. If the build infrastructure with the signing keys is unknown, then it's that much harder for Bad Actor to do things like backdoor E2E encrypted communication apps. This is, of course, the weakness in E2E encryption in apps obtained from mainstream/commercial app stores. For all we know, these may already be backdoored depending on where it came from.
However, the obscurity makes F-Droid hard to trust as an outsider to the project.
> their incessant mud slinging at any service that isn't theirs is tiresome at best.
100%. But you know, sadly I've noticed that non-experts are impressed by elitism. So you don't have to be good, you just have to shit on others, and passersby will interpret that as being very competent.
Which is super ironic, coming from a project that's about privacy but only supports hardware built by the biggest surveillance company.
> F-Droid is often discussed in the GrapheneOS community, and the concerns around centralization and signing are valid.
Clearly the GrapheneOS community is clueless then.
You can host F-Droid yourself, which is the opposite of centralized. If the GrapheneOS community actually is concerned about centralization they can host an instance as well.
Furthermore, each author signs their own software, which again is the opposite of centralized. One authority signing everything would be centralized.
So F-Droid is decentralized in authorship and distribution. Google store is only decentralized in authorship.
If I were running a volunteer project, I would be dumping thousands a month into top-tier hosting across multiple datacenters around the world with global failover.
The _if_ is doing a lot of heavy lifting there. You're free to complain about it, but F-Droid has been running fine for years, and I'd rather have a volunteer manage the servers than some big corporation.
They quite notably haven't been running fine for years: https://news.ycombinator.com/item?id=44884709 Their recent public embarrassment resulting from having such an outdated build server is likely what triggered them to finally start the process of obtaining a replacement for their 12 year old server (that was apparently already 7 years old when they started using it?).
In what world is it embarrassing to not buy hardware you don't need? The servers worked fine for years. When there was an actual reason to spend money, they bought something new. Sounds like good stewardship of the donations they receive.
I finally just upgraded my 9 year old computer with an i5-6600k to a Ryzen 9 5950x because I wanted to be able to edit home videos. I already rarely even used 1 core on the old CPU, the new one is 7x more powerful, and it's an ebay part from 5 years ago. I don't foresee needing to upgrade again for another decade. I probably would've been good for another 15-20 years if I had upgraded to a DDR5 platform, but RAM prices had already spiked, so I just swapped the motherboard and CPU.
Nah, if you actually read into what's available there, it's clear that the compilers have never implemented features to make this broadly usable. You only get runtime instruction selection if you've manually tagged each individual function that uses SIMD to be compiled with function multi-versioning, so that's only really useful for known hot spots that are intended to use autovectorization. If you just want to enable the latest SIMD across the whole program, GCC and clang can't automatically generate fallback versions of every function they end up deciding could use AVX or whatever.
The alternative is to make big changes to your build system and packaging to compile N different versions of the executable/library. There's no easy way to just add a compiler flag that means "use AVX512 and generate SSE2 fallbacks where necessary".
The people that want to keep running new third-party binaries on 12+ year old CPUs might want to work with the compiler teams to make it feasible for those third parties to automatically generate the necessary fallback code paths. Otherwise, there will just be more and more instances of companies like Google deciding to start using the hardware features they've been deploying for 15+ years.
But you already know all that, since we discussed it four months ago. So why are you pretending like what you're asking for is easy when you know the tools that exist today aren't up to the task?
Apart from the "someone's basement", as objected to in this thread, it also doesn't say they acquired "commodity hardware"; I took it to suggest the opposite, presumably for good reason.
> it also doesn't say they acquired "commodity hardware"; I took it to suggest the opposite, presumably for good reason.
This seems entirely like wishful thinking. They were using a 12 year old server that was increasingly unfit for the day-to-day task of building Android applications. It doesn't seem like they were in a position to acquire and deploy any exotic hardware (except to the extent that really old hardware can be considered exotic and no longer a commodity). I'd be surprised if the new server is anything other than off the shelf x86 hardware, and if we're lucky then maybe they know how to do something useful with a TPM or other hardware root of trust to secure the OS they're running on this server and protect the keys they're signing builds with.
I'm just reading what was written, especially "the specific components we needed", and assuming they're not as incompetent as is being suggested, given they've served me well. Perhaps you haven't been tendering for server hardware recently, even bog-standard stuff, and seen the responses that say they can't even quote a fixed price currently. At least, that's the case in my part of the world, in an operation buying a good deal of hardware. We also have systems over ten years old running.
They didn't say what conditions it's held in. You're just adding FUD, please stop. It could be under the bed, or it could be in a professional server room at the company run by the mentioned contributor.
100%. Just as an example I have several racks at home, business fiber, battery backup, and a propane generator as a last resort. Also 4th amendment protections so no one gets access without me knowing about it. I host a lot of things at home and trust it more than any DC.
> Also 4th amendment protections so no one gets access without me knowing about it.
If there's ever a need for a warrant for any of the projects, the warrant would likely involve seizure of every computer and data storage device in the home. Without a 3rd party handling billing and resource allocation they can't tell which specific device contains the relevant data, so everything goes.
So having something hosted at home comes with downsides, too. Especially if you don't control all of the data that goes into the servers on your property.
Isn't a business line quite expensive to maintain per month along with a hefty upfront cost? For a smaller team with a tight budget, just going somewhere with all of that stuff included is probably cheaper and easier like a colo DC.
> Also 4th amendment protections so no one gets access without me knowing about it.
Hahaha
at best you're getting a warrant. Slightly better you're getting a warrant _and_ a gag order. Then it escalates, and having your door kicked in at 6AM is about the best you can hope for.
But sure, you'll know about it. Most likely. Maybe.
Just don't keep anything important in there eh ?
(Note, this definitely applies to colocations too. It's just maybe a tiny bit harder to find which rack is yours, and companies of that size generally have lawyers to prevent that from happening. I'll take my chance with the hosting company.)
A home setup might be able to rival or beat an “edge” enterprise network closet.
It’s not going to even remotely rival a tier 3/4 data center in any way.
The physical security, infrastructure, and connectivity will never come close. E.g. nobody is doing full 2N electrical and environmental in their homelab. And they certainly aren’t building attack resistant perimeter fences and gates around their homes, unless they’re home labbing on a compound in a war torn country.
> The physical security, infrastructure, and connectivity will never come close. E.g. nobody is doing full 2N electrical and environmental in their homelab. And they certainly aren’t building attack resistant perimeter fences and gates around their homes, unless they’re home labbing on a compound in a war torn country.
Why would you need all of that if what they have works? Nobody is going to raid a repo of open source software, you can just download everything for free.
I'd bet F-Droid probably is colocated. Nothing in their statement precludes this.
But the assertion by commenters above that home-hosting is a viable or even a better option for a project like this is silly. Colocating a single server is cheaper than a single Comcast Business internet connection. Air conditioners fail. Electrical failures happen. These things might not be a problem for a personal project, but they're easily and cheaply mitigable risks at commercial scale.
I read it a bit differently: you don't need to be a mega-corp with millions of servers to actually make a difference for the better. It really doesn't take much!
The issue isn’t the hardware, it’s the fact that it’s hosted somewhere private, in conditions they won’t name, under the control of a single member. Typically colo providers are used for this.
Eh. It's just a different set of trade-offs unless you start doing things super-seriously like Let's Encrypt.
F-Droid's main strength has always been reproducible builds. Ideally we just need to start hosting a second F-Droid build server somewhere else and then compare the results.
I publish an app to the App Store, Google Play, and F-Droid. For years, F-Droid took absolute ages to reflect a new release.
People used to criticize the walled gardens for having capricious reviewers and slow review times, but I found F-Droid much more frustrating to get approval from and much slower to get a release out.
So this development is much appreciated. In fact I had an inkling that build times had improved recently when an update made it out to F-Droid in only a day or two.
I don't understand why governments haven't started to fund F-Droid; almost all govt. apps are open-source.
Countries which fear they could be cut off from the duopoly mobile ecosystem should be forcing Android manufacturers to bundle F-Droid; for the amount of nonsense regulations they force phone manufacturers to adhere to, bundling F-Droid wouldn't be that hard.
Google won't be happy, but anti-trust regulations would take care of it.
I wrote a few times to my local MPs ("député", as we call them in France). I usually got a response, though I suspect it was written by their secretary, with no other consequence. In one case (related to privacy against surveillance), they raised a question in parliament, which had just a symbolic impact.
It may be different in other countries. In France, Parliament is de facto a marginal power against a strong executive. Even the legal terms are symptomatic of this situation: the government submits a "project of law" while MPs submit a "proposal of law" (which, for members of the governing party, is almost always written by the government and then endorsed by some loyal MP).
“F-Droid is not hosted in just any data center where commodity hardware is managed by some unknown staff. We worked out a special arrangement so that this server is physically held by a long time contributor with a proven track record of securely hosting services. We can control it remotely, we know exactly where it is, and we know who has access.”
Yikes. They don't need a "special arrangement" for those requirements. This is the bare minimum at many professionally run colocation data centers. There is not a security requirement that can't be met by a data center -- being secure to customer requirements is a critical part of their business.
Maybe the person who wrote that is only familiar with web hosting services or colo-by-the-rack-unit type services where remote-hands services are more commonly relied on. But they don't need to use these services. They can easily get a locked cabinet (or even just a 1/4 cabinet) only they could access.
A super-duper-secure locked cabinet accessible only to them, or to anyone with a bolt cutter.
You want to host servers on your own hardware? Uh, yikes. Let's unpack this. As a certified AWS Kubernetes professional time & money waster, I can say with authority that this goes against professional standards (?) and is therefore not a good look. Furthermore, I can confirm that this isn't it, chief.
Colocation is when you use your own hardware. That's what the word means.
And you're not going to even get close to the cabinet in a data center with a set of bolt cutters. But even if you did, you brought the wrong tool, because they're not padlocked.
Bolt cutters will probably cut through the cabinet door or side if you can find a spot to get them started and you have a lot of time.
Otoh, maybe you've got a cabinet in a DC with very secure locks from europe.... But all are keyed alike. Whoops.
A drill would be easier to bring in (especially if it just looks like a power screwdriver) and probably get in faster though. Drill around the locks/hinges until the door wiggles off.
I'd go with a drill -- but I'm not sure what possible threat actor would be able to get to the cabinet in any decent data center.
Because it's a secret, we don't know if it's mom's basement where the door doesn't really lock anyways, just pull it real hard, or if it's at Uncle Joey's with the compound and the man trap and laser sensors he bought at government auction through a buddy who really actually works at the CIA.
"F-Droid is not hosted in a data centre with proper procedures, access controls, and people whose jobs are on the line. Instead it's in some guy's bedroom."
It could just be a colo; there are still plenty of data centres around the globe that will sell you space in a shared rack with a certain power density per U of space. The list of people who can access that shared locked rack is likely a known quantity with most such organisations, and I know in the past we had some details of the people who were responsible for it.
In some respects, having your entire reputation on the line matters just as much. And sure, someone might have a server cage in their residence, or maybe they run their own small business and it's there. But the vagueness is troubling, I agree.
A picture of the "living conditions" for the server would go a long way.
> State actor? Gets into data centre, or has to break into a privately owned apartment.
I don’t think a state actor would actually break in to either in this case, but if they did then breaking into the private apartment would be a dream come true. Breaking into a data center requires coordination and ensuring a lot of people with access and visibility stay quiet. Breaking into someone’s apartment means waiting until they’re away from the premises for a while and then going in.
Getting a warrant for a private residence also would likely give them access to all electronic devices there as no 3rd party is keeping billing records of which hardware is used for the service.
> Dumb accidents? Well, all buildings can burn or have a power outage.
Data centers are built with redundant network connectivity, backup power, and fire suppression. Accidents can happen at both, but that’s not the question. The question is their relative frequency, which is where the data center is far superior.
>Breaking into a data center requires coordination and ensuring a lot of people with access and visibility stay quiet
Or just a warrant and a phone call to set up remote access? In the UK under RIPA you might not even need a warrant. In USA you can probably bribe someone to get a National Security Letter issued.
Depending on the sympathies of the hosting company's management you might be able to get access with promises.
I dare say F-Droid trust their friends/colleagues more than they trust randos at a hosting company.
As an F-Droid user, I think I might too? It's a tough call.
> Data centers are built with redundant network connectivity, backup power, and fire suppression. [...] The question is their relative frequency, which is where the data center is far superior.
Well, I remember one incident where a 'professional' data center burned down, including the backups.
I know of no such incident involving basement hosting.
Doesn't mean much. I'm just a bit surprised so many people are worried about the server's location and no one has yet mentioned the quite notable OVH incident.
I'm not going to pretend datacenters are magical places immune to damage. I worked at a company where the 630 Third Street datacenter couldn't keep temperatures stable during a San Francisco heatwave and the Okex crypto exchange has experienced downtime because the Alibaba Zone C datacenter their matching engine is on experienced A/C failure. So it's not all magic, but if you didn't encounter home-lab failure it's because you did not sample the population appropriately.
I don't have a bone to pick here. If F-Droid wants to free-ball it I think that's fine. You can usually run things for max cheap by just sticking them on a residential Google Fiber line in one of the cheap power states and then just making sure your software can quickly be deployed elsewhere in times of outage. It's not a huge deal unless you need always-on.
But the arguments being made here are not correct.
The set of people who can maliciously modify it is the people who run f-droid, instead of the cloud provider and the people who run f-droid.
It'd be nice if we didn't have to trust the people who run f-droid, but given we do I see an argument that it's better for them to run the hardware so we only have to trust them and not someone else as well.
You actually do not have to trust the people who run F-Droid for those apps whose maintainers enroll in reproducible builds and multi-party signing, which F-Droid supports and no alternative does.
That looks cool, which might just be the point of your comment, but I don't think it actually changes the argument here.
You still have to trust the app store to some extent. On first use, you're trusting F-Droid to give you the copy of the app with appropriate signatures. Running in someone else's data center still means you need to trust that data center plus the people setting up the app store, instead of just the app store. It's just that a breach of trust is less consequential, since the attacker needs to catch the first install (of apps that even use that technology).
F-Droid makes the most sense when shipped as the system app store, along with pinned CA keychains, as CalyxOS did. Ideally F-Droid is compiled from source and validated by the ROM devs.
The F-Droid app itself can then verify signatures from both third-party developers and first-party builds from an F-Droid machine.
For all its faults (of which there are many) it is still a leaps-and-bounds better trust story than, say, Google Play. Developers can only publish code, and optionally signatures, but not binaries.
Combine that with distributed reproducible builds with signed evidence validated by the app, and you end up not having to trust anything but the F-Droid app itself on your device.
None of this mitigates the fact that a priori you don't know if you're being served the same package manifest/packages as everyone else, and as such you don't know how many signatures any given package you are installing should have.
Yes, theoretically you can personally rebuild every package and check hashes or whatever, but those are preventative steps that no reasonable threat model assumes you are doing.
Why have we normalized "app stores" that build software whose authors likely already provide packages?
I've been using Obtainium more recently, and the idea is simple: a friendly UI that pulls packages directly from the original source. If I already trust the authors with the source code, then I'm inclined to trust them to provide safe binaries for me to use. Involving a middleman is just asking for trouble.
App stores should only be distributors of binaries uploaded and signed by the original authors. When they're also maintainers, it not only significantly increases their operational burden, but requires an additional layer of trust from users.
I never questioned or thought twice about F-Droid's trustworthiness until I read that. It makes it sound like a very amateurish operation.
I had passively assumed something like this would be a Cloud VM + DB + buckets. The "hardware upgrade" they are talking about would have been a couple of clicks to change the VM type, a total nothingburger. Now I can only imagine a janky setup in some random (to me) guy's closet.
In any case, I'm more curious to know exactly what kind of hardware is required for F-Droid; they didn't mention any specifics about CPU, memory, storage, etc.
A "single server" covers a pretty large range of scale, its more about how F-droid is used and perceived. Package repos are infrastructure, and reliability is important. A server behind someone's TV is much more susceptible to power outages, network issues, accidents, and tampering. Again, I don't know that's the case since they didn't really say anything specific.
> not hosted in just any data center where commodity hardware is managed by some unknown staff
I took this to mean it's not in a colo facility either, and assumed it meant someone's home, AKA residential power and internet.
If this is the hidden master server that only the mirrors talk to, then its redundancy is largely irrelevant. Yes, if it's down, new packages can't be uploaded, but that doesn't affect downloads at all. We also know nothing about the backup setup they have.
A lot depends on the threat model they're operating under. If state-level actors and supply chain attacks are the primary threats, they may be better off having their system under the control of a few trusted contributors versus a large corporation that they have little to no influence over.
Even if it's just the build server, it's really hard to defend just having 1 physical server for a project that aspires to be a core part of the software distribution infrastructure for thousands of users.
The build server going down means that no one's app can be updated, even for critical security updates.
For something that important, they should aspire to 99.999% ("five nines") reliability. With a single physical server, achieving five nines over a long period of time usually means that you were both lucky (no hardware failures other than redundant storage) and probably irresponsible (applied kernel updates infrequently, even at the hypervisor level).
Now... 2 servers in 2 different basements? That could achieve five nines ;)
> It makes it sound like a very amateurish operation.
Wait until you find out how every major Linux distribution and the software that powers the internet is maintained. It is all a wildly under-funded shit show, and yet we do it anyway because letting the corpos run it all is even worse.
e.g. AS41231 has upstreams with Cogent, HE, Lumen, etc... they're definitely not running a shoestring operation in a basement. https://bgp.tools/as/41231
Is there anything that can be done about F-Droid downloading very big files (over 50MB) every time it needs to update the repository? I'd expect at the very least regular checkpoint files, then difference files that get you from one checkpoint to the next.
Modern machines reach really mental levels of performance when you think about it, and for a lot of small-scale things like F-Droid I doubt it takes much hardware to host it. A lot of it is going to be static files, so a basic web server could push through hundreds of thousands of requests and, even on a modest machine, saturate 10 Gbps, which I suspect is enough for what they do.
This just reads to me like they have racked a box in a colo with a known person running the shared rack, rather than someone's basement, but who really knows; they aren't exactly handing out details.
Which is itself kind of suspicious: why can't they say "yeah, we pay for colo in such-and-such region" if that is what they are doing? Why should that be a secret?
> not hosted in just any data center [...] a long time contributor with a proven track record of securely hosting services
This is ambiguous, it could mean either a contributor's rack in a colocation centre or their home lab in their basement. I'd like to think they meant the former, but I can't deny I understood the latter in my first read.
which is pretty mad. You can buy a second-hand system with tons of RAM and a 16-core Ryzen for like $400. 12-year-old hardware is only marginally faster than an RPi 5.
> 12-year-old hardware is only marginally faster than an RPi 5
My 14-year-old laptop-used-as-server disagrees. Also keep in mind that CPU speeds barely improved between about 2012 and 2017, and 2025 is again a lull: https://www.cpubenchmark.net/year-on-year.html
I'm also factoring in the ability to use battery bypass in the phones I buy now: because they are so powerful, I might want to use them as free noiseless servers in the future. You can do a heck of a lot on phone hardware nowadays, paying next to nothing for power and nothing extra on your existing internet connection. An RPi 5 is in that same ballpark.
Plus the fact that it's been running for 5 years. Does that mean they bought 7 year old hardware back then? Or is that just when it was last restarted?
Building a budget AM4 system for roughly $500 would be within the realm of reason ($150 mobo, $100 CPU, $150 RAM; that leaves $100 for storage, though you'd still likely need a power supply and case).
I think all the criticism of what F-Droid is doing here (or perceived as doing) reflects more on the ones criticising than the ones being criticised.
How many things went upside down even when all the "right" things were done (corporate governance, cloud-native deployment, automation, etc.)? The truth is that none of these processes actually make things more secure, and many projects went belly up despite following these kinds of recommendations.
That being said, I am grateful to F-Droid fighting the good fight. They are providing an invaluable service and I, for one, am even more grateful that they are doing it as uncompromisingly as possible (well into discomfort) according to their principles.
Good. But I wish PostmarketOS supported more devices. On battery, tons of kernel patches could be set per device, plus a config package, in order to achieve the best settings. On software and security... you will find more malware in the Play Store than in the repos from PmOS/Alpine.
I know it's not a 100% libre (FSF) system, but it's a much greater step towards freedom than Android, where you don't even own your device.
The issue with Linux-based phones is and remains apps. Waydroid works pretty well, but since you need to rely on it so much, you are better off using Graphene or Lineage in the first place.
But Android is a clusterfuck. Look at Lemuroid, a RetroArch-based emulator with a nice GUI. With the new SAF-related permissions you can't make the emulator work any more.
And that's a libre package from F-Droid. And I noticed several other bugs. Tyr, for instance (a Yggmail service which bundles Yggdrasil), doesn't have an armv7a version. Tyr could be really useful with DeltaChat, because you could talk with any relative without depending on third-party mail services. And because of arbitrary limitations, compiling a 32-bit binary is damn difficult for maintainers, yet I could compile yggmail with Go under Termux with no issues.
That's why I prefer PostmarketOS: software just runs once it's installed, and I certainly wouldn't need to set up an SDK weighing several GBs.
Those are valid criticisms of Android, but I see at least two problems that prevent wider adoption of PostmarketOS (even among HN readers). First, it only supports what seems to be ancient hardware. Contrast this with Graphene and latest-gen Pixel support. Second, compatibility with Android is critical. People just want to run their Starbucks app and expect it to work.
I wonder if anyone knows about Droid-ify. Is it a safe option, or is it better to stay away from it?
It showed up one day while I was searching for why F-Droid was always so extremely slow to update and download... then, trying Droid-ify, that was never a problem any more; it clearly had much better connectivity (or simply fewer users?)
It's frankly embarrassing how many of the comments on this thread are some version of looking at the XKCD "dependency" meme and deciding the best course of action is to throw spitballs at the maintainers of the critical project holding everything else up.
F-Droid is nowhere near being a critical project holding Android up. The Play Store and the Play Services themselves are much more critical. Being open source doesn't make you immune from criticism for not following industry standards, or from being called out for poor security.
> The Play Store, and the Play Services themselves are much more critical.
Critical for serving malware and spyware to the masses, yes. GrapheneOS is based on Android and is far better than a Googled Android variant precisely because it is free of Google junk and OEM crapware.
The internet itself is also critical for serving malware and spyware, but that doesn't mean that the internet is garbage. Google invests much more into removing malicious apps from its app store than F-Droid does.
If you have nothing to install on your device, what's the point of being able to? For me, F-Droid is a cornerstone of the Android ecosystem. I could source APKs elsewhere, but it would be much more of a hassle and wouldn't necessarily come with automatic updates. iOS would become a lot more attractive to me if Android didn't have the ecosystem that's centered around the open apps you can find on F-Droid.
At the very least, it's reasonable to expect the maintainers of such a project to be open about their situation when it's that precarious. Why wouldn't you take every opportunity to let your users and downstream projects know that the dependency you're providing is operating with no redundancy and barely enough resources to carry on when things aren't breaking? Why wouldn't they want to share with a highly technical audience any details about how their infrastructure operates?
They're building all the software on a single server, and at best their fallback is a 12 year old server they might be able to put back in production. I'm not making any unreasonable assumptions, and they're not being forthcoming with any reassuring details.
I think both of those POVs are wrong. The whole thing about F-Droid is that they have worked hard on not being a central point of trust and failure. The apps in their store are all in a repo (https://gitlab.com/fdroid/fdroiddata) and they are reproducibly built from source. You could replicate it with not too much effort, and clients just need to add the new repository.
Criticism is good when it comes with feasible suggestions or even a little help.
I wonder how many of the HN audience know someone, or a guy who knows a guy, who works in a data center and could manage the hardware, and for whom a simple email/message/hello could open a new opportunity.
I wish they would give more clarity on whether it's hosted on a professional server or in someone's bedroom, because just saying that "it's held by a long time contributor with a proven track record of securely hosting services" is not very reassuring.
> Another important part of this story is where the server lives and how it is managed. F-Droid is not hosted in just any data center where commodity hardware is managed by some unknown staff.
> The previous server was 12 year old hardware and had been running for about five years. In infrastructure terms, that is a lifetime. It served F-Droid well, but it was reaching the point where speed and maintenance overhead were becoming a daily burden.
lol. if they're gonna use gitlab just use a proper setup - bigco is already in the critical path...
I've used F-Droid for years and I've never used the client ("the F-Droid app")
For me the value of F-Droid is as a list of open-source software with (a) pointers to source code and (b) sample binaries
The goal of F-Droid could be to enable Android users to read, edit and compile the software they choose to run on their "phones"
But F-Droid promotes their own app ("the client") so maybe the project's goal is something more like an "app store"
It has hosted quite a few famous services.
I doubt OSU is going to host F-Droid. It doesn't even sound like F-Droid would want them to host it.
It's a critical load-bearing component of FOSS on Android.
At least they know where it is. They can go knock on the door.
Or does it also serve the APKs?
Personally I would feel better about round robin across multiple maintainer-home-hosted machines.
Regardless, the ongoing interest on $400K alone would be enough to pay colo fees.
I don't know what kind of rates are available to non-profits, but with $400K in hand you can find nicer rates than 3.3% (as of today, at least). That covers quite a few colo possibilities.
Another thing overlooked in this debate: data center costs normally increase at the rate of inflation. This is not included in most estimates. That said, I still agree with the broad sentiment here: $400K USD is plenty of money to run a colo server for 10+ years on the risk-free interest rate alone.
At that rate, that would buy you nearly 1000 years of hosting.
I really don’t know where the commenter above was getting the idea that $400K wouldn’t last very long.
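For concreteness, a back-of-envelope sketch; the 4% yield and $250/month colo fee are illustrative assumptions, not anyone's actual numbers:

    # Back-of-envelope: does interest on the grant cover colo fees?
    # Both rates below are illustrative assumptions only.
    principal = 400_000        # USD
    yield_rate = 0.04          # assumed risk-free yield
    colo_monthly = 250         # assumed colo fee, USD/month

    annual_interest = principal * yield_rate   # 16,000 USD/year
    annual_colo = colo_monthly * 12            # 3,000 USD/year
    print(annual_interest >= annual_colo)      # True: interest alone covers it
    print(principal / annual_colo)             # ~133 years even spending principal

Pick a cheaper quarter-cabinet rate and the numbers only get more comfortable.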
I Googled for that brand and got a few hits:
The homepage now redirects here: https://patmos.tech/
Another underappreciated point about that data center: it has an excellent geographical location to cover North America.
The jury's still out on whether or not this is a good thing.
Of course you have to buy the switches and servers…
IDK if they could bag this kind of grant every year, but isn't this the scale where cloud hosting starts to make sense?
Cloud hosting only makes sense at a very, very small scale, or absurdly large ones.
Basically anywhere with cage or cabinet colocation is going to have site access, because those delineations only make sense to restrict on-site human access.
A lot of these places are like fortresses.
All the hand-waving and excuses around global supply chains, quotes, etc. It took them pretty long to acquire commodity hardware and shove it in a special someone's basement, and they're trying to make it seem like a good thing?
F-Droid is often discussed in the GrapheneOS community, the concerns around centralization and signing are valid.
I understand this is a volunteer effort, but it's not a good look.
The internet is run on binaries compiled on servers in random basements, and you should be thankful for those basements, because the corpos are never going to actually help fund any of it.
I would agree that it is not a good look for this society to lament so much about the big evil corporations while investing so little in the free alternatives.
Having two servers in two basements not near each other would be good, having five would be better, and honestly paying money to put them in colo facilities to have more reliable power, cooling, etc. would be better still. Computer hardware is very cheap today and it doesn't cost that much money to get a substantial amount of redundancy, without being dependent on any single big company.
I bet the server needs to be quite powerful, with tons of CPU, RAM and SSD/NVMe to allow for fast builds. Memory of all kinds was getting more and more expensive this year, so the prolonged sourcing is understandable.
The trusted contributor, as the text says, is considered more trustworthy than an average colocation company. Maybe they have an adequate "basement", e.g. run their own colo company, or something.
It would be great to have a spare server, but likely it's not that simple, including the organization and the trust. A build server would be a very juicy attack target to clandestinely implant spyware.
They can then probably whip up a new hosted server to take over within a few days, at most. Big deal.
They are not hosting a critical service, and running on donations. They are doing everything right.
As long as you don't need RAM or hard drives. Those are getting more expensive all the time too. This isn't the ideal moment to replace a laptop, let alone a server.
> this server is physically held by a long time contributor with a proven track record of securely hosting services.
So you are assuming it's a rando's basement when they never said anything like that.
If their way of doing business is so offensive, either don't use them, disrupt them, or pitch in and help.
> I understand this is a volunteer effort, but it's not a good look.
What does make a "good look" for a volunteer project?
It's an open-source project. It should be... open. Not mysterious or secretive about overdue replacements of critical infrastructure.
This is effectively a rando's basement. It doesn't matter that they've been a contributor or whatever. Individuals change, relationships sour. Securely hosting how? By locking the front door? By being a random tech company in the Midwest? Or by having proper access control?
As a little reminder, F-Droid has _all_ the signing keys on its build server. Compromising that is somewhere between "oh, that's awful" and "stop the world". These builds go out as automatic updates too. So, uh, yeah, I'd like it to be hosted by someone serious and not "my buddy Joe, who's a sysadmin, don't worry".
And maybe that is F-Droid's point: security through obscurity. If the build infrastructure with the signing keys is unknown, then it's that much harder for a bad actor to do things like backdoor E2E-encrypted communication apps. This is, of course, the weakness in E2E encryption in apps obtained from mainstream/commercial app stores. For all we know, those may already be backdoored, depending on where they came from.
However, the obscurity makes F-Droid hard to trust as an outsider to the project.
Some of their points are valid but way too often they're unable to accept that different services aren't always trying to solve the same problem.
100%. But you know, sadly I've noticed that non-experts are impressed by elitism. So you don't have to be good, you just have to shit on others, and passersby will interpret that as being very competent.
Which is super ironic, coming from a project which is about privacy but only supports hardware built by the biggest surveillance company.
Clearly the GrapheneOS community is clueless then.
You can host F-Droid yourself, which is the opposite of centralized. If the GrapheneOS community actually is concerned about centralization they can host an instance as well.
Furthermore, each author signs their own software, which again is the opposite of centralized. One authority signing everything would be centralized.
So F-Droid is decentralized in authorship and distribution. The Google Play store is only decentralized in authorship.
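To make the self-hosting point concrete: an F-Droid repository is essentially static files over HTTP(S). Assuming fdroidserver's `fdroid update` has already generated the index next to your .apk files (see the fdroidserver docs for the real workflow), even a toy sketch like this can serve it:

    # Minimal sketch: serve an already-generated F-Droid repo directory.
    # Assumes fdroidserver's `fdroid update` has produced repo/ with its
    # index files; a real deployment would want HTTPS and a proper server.
    import functools
    import http.server

    Handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory="repo")
    http.server.ThreadingHTTPServer(("", 8000), Handler).serve_forever()

Clients can then add that URL as just another repository.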
I finally just upgraded my 9-year-old computer with an i5-6600k to a Ryzen 9 5950X because I wanted to be able to edit home videos. I already rarely even used one core on the old CPU, the new one is 7x more powerful, and it's an eBay part from 5 years ago. I don't foresee needing to upgrade again for another decade. I probably would've been good for another 15-20 years if I had upgraded to a DDR5 platform, but RAM prices had already spiked, so I just swapped the motherboard and CPU.
https://wiki.debian.org/InstructionSelection
The alternative is to make big changes to your build system and packaging to compile N different versions of the executable/library. There's no easy way to just add a compiler flag that means "use AVX512 and generate SSE2 fallbacks where necessary".
The people who want to keep running new third-party binaries on 12+ year old CPUs might want to work with the compiler teams to make it feasible for those third parties to automatically generate the necessary fallback code paths. Otherwise, there will just be more and more instances of companies like Google deciding to start using the hardware features they've been deploying for 15+ years.
But you already know all that, since we discussed it four months ago. So why are you pretending like what you're asking for is easy when you know the tools that exist today aren't up to the task?
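For what it's worth, the runtime-dispatch half is the easy part to picture; generating and shipping all the variants is the hard part. GCC can do this per function (the target_clones attribute), but annotating a whole codebase is exactly the "big changes" described above. A toy, Linux-only Python sketch of just the dispatch step, where everything is illustrative:

    # Toy illustration of what multi-versioned dispatch amounts to: detect
    # CPU features once at startup, then pick the best implementation.
    # Linux-only (reads /proc/cpuinfo); both bodies here are placeholders.
    def cpu_flags():
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    def dot_sse2(a, b):          # stand-in for the baseline build
        return sum(x * y for x, y in zip(a, b))

    def dot_avx512(a, b):        # stand-in for the fast build
        return sum(x * y for x, y in zip(a, b))

    dot = dot_avx512 if "avx512f" in cpu_flags() else dot_sse2
    print(dot([1, 2, 3], [4, 5, 6]))  # 32 on any CPU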
Apart from the "someone's basement", as objected to in this thread, it also doesn't say they acquired "commodity hardware"; I took it to suggest the opposite, presumably for good reason.
This seems entirely like wishful thinking. They were using a 12 year old server that was increasingly unfit for the day-to-day task of building Android applications. It doesn't seem like they were in a position to acquire and deploy any exotic hardware (except to the extent that really old hardware can be considered exotic and no longer a commodity). I'd be surprised if the new server is anything other than off the shelf x86 hardware, and if we're lucky then maybe they know how to do something useful with a TPM or other hardware root of trust to secure the OS they're running on this server and protect the keys they're signing builds with.
They didn't say what conditions it's held in. You're just adding FUD, please stop. It could be under the bed, it could be in a professional server room of the company run by the mentioned contributor.
If there's ever a need for a warrant for any of the projects, the warrant would likely involve seizure of every computer and data storage device in the home. Without a 3rd party handling billing and resource allocation they can't tell which specific device contains the relevant data, so everything goes.
So having something hosted at home comes with downsides, too. Especially if you don't control all of the data that goes into the servers on your property.
> Also 4th amendment protections so no one gets access without me knowing about it
laughs in FISA
Hahaha
At best you're getting a warrant. Slightly better, you're getting a warrant _and_ a gag order. Then it escalates, and having your door kicked in at 6AM is about the best you can hope for.
But sure, you'll know about it. Most likely. Maybe.
Just don't keep anything important in there, eh?
(Note, this definitely applies to colocations too. It's just maybe a tiny bit harder to find which rack is yours, and companies of that size generally have lawyers to prevent that from happening. I'll take my chance with the hosting company.)
It’s not going to even remotely rival a tier 3/4 data center in any way.
The physical security, infrastructure, and connectivity will never come close. E.g. nobody is doing full 2N electrical and environmental in their homelab. And they certainly aren’t building attack resistant perimeter fences and gates around their homes, unless they’re home labbing on a compound in a war torn country.
Why would you need all of that if what they have works? Nobody is going to raid a repo of open source software, you can just download everything for free.
But the assertion by commenters above that home-hosting is a viable or even a better option for a project like this is silly. Colocating a single server is cheaper than a single Comcast Business internet connection. Air conditioners fail. Electrical failures happen. These things might not be a problem for a personal project, but they're easily and cheaply mitigated at commercial scale.
Also, even 12-year-old hardware is wicked fast.
With F-Droid, their main strength has always been reproducible builds. Ideally we just need to start hosting a second F-Droid build server somewhere else and then compare the results.
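The comparison step itself is trivial once two independently operated builders publish their artifacts; a sketch, with all file paths and the package name made up for illustration:

    # Toy version of the "second builder" check: two independent servers
    # build the same package; anyone can compare the digests.
    import hashlib

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    ours = sha256("builds/server-a/org.example.app_42.apk")
    theirs = sha256("builds/server-b/org.example.app_42.apk")
    print("reproducible" if ours == theirs else "MISMATCH - investigate")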
People used to criticize the walled gardens for having capricious reviewers and slow review times, but I found F-Droid much more frustrating to get approval from and much slower to get a release out.
So this development is much appreciated. In fact I had an inkling that build times had improved recently when an update made it out to F-Droid in only a day or two.
Countries which fear they could be cut off from the duopoly mobile ecosystem should be forcing Android manufacturers to bundle in F-Droid. Given the amount of nonsense regulations they already force phone manufacturers to adhere to, bundling F-Droid wouldn't be that hard.
Google won't be happy, but anti-trust regulations would take care of it.
(I've worked with several politicians. You'd be surprised what a well timed letter or meeting can achieve.)
I wrote a few times to my local MPs ("député", as we call them in France). I usually got a response, though I suspect it was written by their secretary with no other consequence. In one case (related to privacy against surveillance), they raised a question in the congress, which had just a symbolic impact.
It may be different in other countries. In France, Parliament is de-facto a marginal power against a strong executive power. Even the legal terms are symptomatic of this situation: the government submits a "project of law" while MPs submit a "proposal of law" (which, for members of the governing party, is almost always written by the government then endorsed by some loyal MP).
A project like F-Droid is dumb to begin with when they're the ones building the apps.
I heartily disagree. Linux distributions also build the packages themselves, and that adds a layer of trust.
It ensures that everything in the fdroid repo is free software, and can be self-built.
There are other ways to ensure something is free software and can be self built. Their approach is highly inefficient.
“F-Droid is not hosted in just any data center where commodity hardware is managed by some unknown staff. We worked out a special arrangement so that this server is physically held by a long time contributor with a proven track record of securely hosting services. We can control it remotely, we know exactly where it is, and we know who has access.”
Maybe the person who wrote that is only familiar with web hosting services or colo-by-the-rack-unit type services where remote-hands services are more commonly relied on. But they don't need to use those services. They can easily get a locked cabinet (or even just a 1/4 cabinet) that only they can access.
You want to host servers on your own hardware? Uh, yikes. Let's unpack this. As a certified AWS Kubernetes professional time & money waster, I can say with authority that this goes against professional standards (?) and is therefore not a good look. Furthermore, I can confirm that this isn't it, chief.
And you're not going to even get close to the cabinet in a data center with a set of bolt cutters. But even if you did, you brought the wrong tool, because they're not padlocked.
OTOH, maybe you've got a cabinet in a DC with very secure locks from Europe... but all keyed alike. Whoops.
A drill would be easier to bring in (especially if it just looks like a power screwdriver) and probably get in faster though. Drill around the locks/hinges until the door wiggles off.
Not reassuring.
A picture of the "living conditions" for the server would go a long way.
State actor? Gets into data centre, or has to break into a privately owned apartment.
Criminal/3rd party state intelligence service? Could get into both, at a risk or with blackmail, threats, or violence.
Dumb accidents? Well, all buildings can burn or have a power outage.
I don’t think a state actor would actually break in to either in this case, but if they did then breaking into the private apartment would be a dream come true. Breaking into a data center requires coordination and ensuring a lot of people with access and visibility stay quiet. Breaking into someone’s apartment means waiting until they’re away from the premises for a while and then going in.
Getting a warrant for a private residence also would likely give them access to all electronic devices there as no 3rd party is keeping billing records of which hardware is used for the service.
> Dumb accidents? Well, all buildings can burn or have an power outage.
Data centers are built with redundant network connectivity, backup power, and fire suppression. Accidents can happen at both, but that’s not the question. The question is their relative frequency, which is where the data center is far superior.
Or just a warrant and a phone call to set up remote access? In the UK under RIPA you might not even need a warrant. In USA you can probably bribe someone to get a National Security Letter issued.
Depending on the sympathies of the hosting company's management you might be able to get access with promises.
I dare say F-Droid trust their friends/colleagues more than they trust randos at a hosting company.
As an F-Droid user, I think I might too? It's a tough call.
Read about the jabber.ru Hetzner incident: https://notes.valdikss.org.ru/jabber.ru-mitm/
Well, I remember one incident where a 'professional' data center burned down, including the backups.
https://en.wikipedia.org/wiki/OVHcloud#Incidents
I know of no such incident involving basement hosting.
Doesn't mean much. I'm just a bit surprised that so many people are worried about the server location and no one had yet mentioned the rather spectacular OVH incident.
https://www.reddit.com/r/homelab/comments/wvqxs7/my_homelab_...
I don't have a bone to pick here. If F-Droid wants to free-ball it I think that's fine. You can usually run things for max cheap by just sticking them on a residential Google Fiber line in one of the cheap power states and then just making sure your software can quickly be deployed elsewhere in times of outage. It's not a huge deal unless you need always-on.
But the arguments being made here are not correct.
As a year-long f-droid user I can't complain.
The set of people who can maliciously modify it is just the people who run f-droid, instead of the cloud provider plus the people who run f-droid.
It'd be nice if we didn't have to trust the people who run f-droid, but given we do I see an argument that it's better for them to run the hardware so we only have to trust them and not someone else as well.
You still have to trust the app store to some extent. On first use, you're trusting f-droid to give you the copy of the app with appropriate signatures. Running in someone else's data center still means you need to trust that data center plus the people running the app store, instead of just the app store. It's just that a breach of trust is less consequential, since the attacker needs to catch the first install (of apps that even use that technology).
The F-Droid app itself can then verify signatures from both third-party developers and first-party builds on an F-Droid machine.
For all its faults (of which there are many), it is still a leaps-and-bounds better trust story than, say, Google Play. Developers can only publish code, and optionally signatures, but not binaries.
Combine that with distributed reproducible builds with signed evidence validated by the app and you end up not having to trust anything but the f-droid app itself on your device.
Yes, theoretically you can personally rebuild every package and check hashes or whatever, but those are preventative steps that no reasonable threat model assumes you are taking.
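For anyone who does want to spot-check a download: the Android build-tools ship apksigner, which prints the signer's certificate digests. A minimal wrapper sketch, assuming apksigner is on PATH and "app.apk" is a placeholder:

    # Spot-check sketch: print the signing cert digests of a downloaded APK
    # using apksigner from the Android build-tools (assumed to be on PATH).
    # check=True makes this raise if signature verification fails.
    import subprocess

    result = subprocess.run(
        ["apksigner", "verify", "--print-certs", "app.apk"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)  # look for the signer certificate SHA-256 digest line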
I've been using Obtainium more recently, and the idea is simple: a friendly UI that pulls packages directly from the original source. If I already trust the authors with the source code, then I'm inclined to trust them to provide safe binaries for me to use. Involving a middleman is just asking for trouble.
App stores should only be distributors of binaries uploaded and signed by the original authors. When they're also maintainers, it not only significantly increases their operational burden, but requires an additional layer of trust from users.
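As a rough sketch of that model, here is what "pull from the original source" can look like against the GitHub releases API; the repo name is hypothetical, and real apps need per-source logic, which is what Obtainium actually provides:

    # Obtainium-style fetch, sketched: ask the project's own release feed
    # (here, GitHub's releases API) for the newest APK assets.
    # "owner/app" is a hypothetical repo; error handling omitted.
    import json
    import urllib.request

    repo = "owner/app"
    url = f"https://api.github.com/repos/{repo}/releases/latest"
    with urllib.request.urlopen(url) as resp:
        release = json.load(resp)

    apks = [a["browser_download_url"] for a in release["assets"]
            if a["name"].endswith(".apk")]
    print(release["tag_name"], apks)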
I had passively assumed something like this would be a Cloud VM + DB + buckets. The "hardware upgrade" they are talking about would have been a couple clicks to change the VM type, a total nothingburger. Now I can only imagine a janky setup in some random (to me) guy's closet.
In any case, I'm more curious to know exactly what kind of hardware is required for F-Droid; they didn't mention any specifics about CPU, memory, storage, etc.
> not hosted in just any data center where commodity hardware is managed by some unknown staff
I took this to mean it's not in a colo facility either, and assumed it meant someone's home, AKA residential power and internet.
If this is the hidden master server that only the mirrors talk to, then its redundancy is largely irrelevant. Yes, if it's down, new packages can't be uploaded, but that doesn't affect downloads at all. We also know nothing about the backup setup they have.
A lot depends on the threat model they're operating under. If state-level actors and supply chain attacks are the primary threats, they may be better off having their system under the control of a few trusted contributors versus a large corporation that they have little to no influence over.
The build server going down means that no one's app can be updated, even for critical security updates.
For something that important, they should aspire to 99.999% ("five nines of") reliability. With a single physical server, achieving five nines over a long period of time usually means that you were both lucky (no hardware failures other than redundant storage) and probably irresponsible (applied kernel updates infrequently - even if only on the hypervisor level).
Now... 2 servers in 2 different basements? That could achieve five nines ;)
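The winking math roughly checks out, assuming independent failures (the big assumption) and an illustrative 99.9% availability per box:

    # Availability arithmetic behind the joke; 99.9% per box is assumed,
    # as is failure independence (the weakest assumption in practice).
    single = 0.999                       # one home server
    both_down = (1 - single) ** 2        # chance both are down at once
    print(f"two boxes: {1 - both_down:.4%} available")         # 99.9999%
    print(f"five nines allows {0.00001 * 525_600:.1f} min of downtime/year")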
I agree that "behind someone's TV" would be a terrible idea.
Wait until you find out how every major Linux distribution and the software that powers the internet are maintained. It is all a wildly under-funded shit show, and yet we do it anyway because letting the corpos run it all is even worse.
e.g. AS41231 has upstreams with Cogent, HE, Lumen, etc... they're definitely not running a shoestring operation in a basement. https://bgp.tools/as/41231
I'm curious why supply chain issues got in the way and why they couldn't just configure a Dell PowerEdge and get delivery in a couple weeks.
I'm assuming they have some special requirements that weren't met by an off-the-shelf server, so I'm just curious what those requirements are.
This just reads to me like they have racked a box in a colo with a known person running the shared rack, rather than someone's basement, but who really knows? They aren't exactly handing out details.
This is ambiguous, it could mean either a contributor's rack in a colocation centre or their home lab in their basement. I'd like to think they meant the former, but I can't deny I understood the latter in my first read.
Also, no details on the hardware?
> The previous server was 12 year old hardware
which is pretty mad. You can buy a second-hand system with tons of RAM and a 16-core Ryzen for like $400. 12-year-old hardware is only marginally faster than a RPi 5.
A Dell R620 is over 12 years old and WAY faster than a RPi 5 though...
Sure, it'll be way less power efficient, but I'd definitely trust it to serve more concurrent users than a RPi.
My 14yo laptop-used-as-server disagrees. Also keep in mind that CPU speeds barely improved between about 2012 and 2017, and 2025 is again a lull https://www.cpubenchmark.net/year-on-year.html
I'm also factoring in the ability to use battery bypass in phones I buy now; because they are so powerful, I might want to use them as free, noiseless servers in the future. You can do a heck of a lot on phone hardware nowadays, paying next to nothing for power and no additional cost on your existing internet connection. A RPi 5 is in that same ballpark.
Building a budget AM4 system for roughly $500 would be within the realm of reason. ($150 mobo, $100 CPU, $150 RAM; that leaves $100 for storage, though you'd still need a PSU and case.)
https://www.amazon.com/Timetec-Premium-PC4-19200-Unbuffered-...
https://www.amazon.com/MSI-MAG-B550-TOMAHAWK-Motherboard/dp/...
For a server that's replacing a 12 year old system, you don't need DDR5 and other bleeding edge hardware.
(I might be spoiled by sane reproducible build systems. Maybe F-droid isn't.)
Saying this on HN, of course.
How many things went upside down even when all the "right" things were done (corporate governance, cloud-native deployment, automation, etc.)? The truth is that none of these processes actually make things more secure, and many projects went belly up despite following these kinds of recommendations.
That being said, I am grateful to F-Droid fighting the good fight. They are providing an invaluable service and I, for one, am even more grateful that they are doing it as uncompromisingly as possible (well into discomfort) according to their principles.
Not to mention it also simplifies the security of controlling signing keys significantly.
And that being a libre package from F-Droid. And I noticed several other bugs. Tyr, for instance (a Yggmail service which bundles Yggdrasil), doesn't have an armv7a version. Tyr could be really useful with DeltaChat because you could talk with any relative without depending on 3rd-party mail services. And because of arbitrary limitations, compiling a 32-bit binary is damn difficult for maintainers, yet I could compile yggmail with Go under Termux with no issues.
That's why I prefer postmarketOS: software just runs once it's installed, and I certainly wouldn't need to set up an SDK weighing several GBs.
It showed up one day while I was searching for why F-Droid was always so extremely slow to update and download... then I tried Droid-ify, and that was never a problem any more; it clearly had much better connectivity (or simply fewer users?).
Critical for serving malware and spyware to the masses, yes. GrapheneOS is based on Android and is far better than a Googled Android variant precisely because it is free of Google junk and OEM crapware.
>I could source apks elsewhere
Do you or do you not have apps you want to install?
assumptions
I wonder how many of the HN audience know someone, or a guy who knows a guy, who works in a data center and is able to manage the hardware, where a simple email/message/"hello there" could open up a new opportunity.
Brought to you by the helpful folks who managed to bully WinAmp into retreating from open source. Very productive.
> The previous server was 12 year old hardware and had been running for about five years. In infrastructure terms, that is a lifetime. It served F-Droid well, but it was reaching the point where speed and maintenance overhead were becoming a daily burden.
lol. If they're gonna use GitLab, just use a proper setup; bigco is already in the critical path...