They are definitely on the horizon! I'm a HUGE fan of both of those projects, and they're on the roadmap for the architecture...
Right now, ShadowBroker is really optimized for 'blinking blip' real-time radar tracking (streaming the raw GeoJSON payload from the FastAPI backend directly to MapLibre every 60s), so we get as close as possible to smooth 60fps entity animations across the map.
Moving to something like Martin would be incredible for handling EVEN MORE entities if we start archiving historical flight and AIS data into a proper PostGIS database, but the trade-off of having to invalidate the vector tile cache every few seconds for live-moving targets makes it overkill right now.
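For anyone curious what the "raw GeoJSON every 60s, no tile server" pattern looks like, here's a minimal sketch. The `Entity` and `build_feature_collection` names are purely illustrative, not from the ShadowBroker repo: the backend keeps live entities in memory and serializes them straight to a FeatureCollection on each poll, which MapLibre can consume directly.

```python
# Hypothetical sketch of streaming live entities as raw GeoJSON, with no
# vector tile layer in between. Names are illustrative, not from the repo.
from dataclasses import dataclass

@dataclass
class Entity:
    entity_id: str
    lon: float
    lat: float
    kind: str  # e.g. "aircraft" or "vessel"

def build_feature_collection(entities):
    """Serialize in-memory entities straight to a GeoJSON FeatureCollection."""
    return {
        "type": "FeatureCollection",
        "features": [
            {
                "type": "Feature",
                "geometry": {"type": "Point", "coordinates": [e.lon, e.lat]},
                "properties": {"id": e.entity_id, "kind": e.kind},
            }
            for e in entities
        ],
    }
```

Because the payload is rebuilt from scratch each cycle, there's nothing to invalidate when a target moves, which is exactly the property a tile cache would lose.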
Ah, that's my fault for not making the error handling clearer in the UI. If the map is blank, it usually means the backend is missing the .env file with the free API keys (AISSTREAM_API_KEY and N2YO_API_KEY), so it's silently failing to fetch the streams.
Did the terminal throw any Python FastAPI errors, or did it just serve the Next.js frontend? I'm going to push an update later today to show a prominent "Backend Disconnected / Missing API Keys" warning on the UI so it doesn't just look dead. Thanks for testing it!
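For the "Backend Disconnected / Missing API Keys" warning, a startup check along these lines would catch the silent-failure case. This is an illustrative sketch, not the project's actual code; only the environment variable names (`AISSTREAM_API_KEY`, `N2YO_API_KEY`) come from the comment above.

```python
# Illustrative startup check: report missing free-tier keys at boot instead
# of letting the stream fetches silently return nothing and the map go blank.
import os

REQUIRED_KEYS = ("AISSTREAM_API_KEY", "N2YO_API_KEY")

def missing_api_keys(env=os.environ):
    """Return the names of required API keys that are unset or blank."""
    return [k for k in REQUIRED_KEYS if not env.get(k, "").strip()]
```

The backend could call this at startup and expose the result on a health endpoint so the frontend can render the warning banner instead of an empty map.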
It's pretty interesting to see. My very first real software job was working on ground processing algorithms for the US Navy's Maritime Domain Awareness system, which is the "real" version of something like this: it actually gives centimeter-scale live activity detections of basically the entire world. The engineering effort that goes into something like that is immense. Bush announced it around 2004, and we didn't reach full operational capability until 2015. Thousands of developers across intel, military, and commercial contractors worked for over a decade, inventing and launching new sensor platforms and building out the data centers to collect, process, store, and make sense of all of it.
I wish these weekend warriors would work on a project like that someday, to see what real capabilities truly take. If you want to know what's happening in the world, you need to place physical sensors out there and deal with the fact that your own signals are being jammed and blocked, and that the things you're trying to see are also trying to hide and disguise themselves.
The attention to detail is something I've never seen replicated outside that world. Every time we changed or shipped a new algorithm, we had to reprocess old data with it and explain to analysts and scientists every single pixel that changed in the end product, and why.
Let me ask a dumb question. Can this be run on a public server (I use dreamhost) with a web interface for others to see? Or is this strictly something that gets run on a local computer?
If you want to host it for friends/trusted devices, you can put it on a Tailscale- or ZeroTier-style network and just let trusted devices access the server, with regard to the OP's point about open secrets. Or you could probably make a PR to load the settings from somewhere else.
Well, it can be, but I have to make some modifications first; it isn't recommended right now because there's a settings option with the API key right there for the free world to see, lol. I will work on making a version for hosting it, though.
You can throw it on a server and run it for you to see (or anyone else, if you trust people or don't care about losing your free API keys). It's just a standard Next.js and FastAPI stack, and there are Dockerfiles in the repo, so it should be pretty straightforward to spin up on a cheap VPS (like a DigitalOcean droplet or Hetzner).
Honestly, if you just want to show it off to a few people, running it locally and exposing it with a Cloudflare Tunnel or Ngrok is probably the path of least resistance.
I WILL work on a hosted version where users have to bring their own keys, though.
Cloudflare Tunnel is solid for quick demos. One thing though — if you're planning the "bring your own keys" version, don't just throw them in a settings page. I went down that road and ended up with keys sitting in localStorage where any XSS could grab them. What worked better for me was having the backend hold the keys and issuing short-lived session tokens to the frontend. More moving parts but way less surface area if something goes wrong.
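A minimal sketch of that "backend holds the keys, frontend gets short-lived tokens" pattern, using only stdlib HMAC signing. All names here are illustrative, and a real deployment would load the secret from the server's environment rather than hardcoding it:

```python
# Sketch: the upstream API keys stay server-side; the frontend only ever
# receives a short-lived signed session token it can present back.
import base64
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # illustrative; load from server env in practice

def issue_token(session_id: str, ttl_s: int = 300, now=None) -> str:
    """Sign session_id plus an expiry timestamp; API keys never leave the backend."""
    exp = int(now if now is not None else time.time()) + ttl_s
    payload = f"{session_id}.{exp}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token: str, now=None) -> bool:
    """Reject tampered, malformed, or expired tokens."""
    try:
        b64, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(b64.encode())
    except Exception:
        return False
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    exp = int(payload.decode().rsplit(".", 1)[1])
    return (now if now is not None else time.time()) < exp
```

Even if an XSS grabs the token, the blast radius is a few minutes of proxied access rather than a leaked upstream key.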
I don't understand why that YouTuber was acting like spy satellites going over was such a big deal; they are going over the entire planet, all the time.
I'm excited to see tooling of this nature and scope. Looking forward to seeing similar tooling oriented around all human needs so we can start tracking the meeting of needs to better meet needs, particularly in ways that don't require money.
As was already said in one of the reference videos, it's impressive what one person can do.
But the next step is to define an architecture where authors can define/implement plug-ins with particular modular capabilities instead of one big monolith. For example, instead of just a front-end (GUI) and back-end (feeds), there ought to be a middle layer that models some of the domain logic (events: sources, filters, sinks; stories/timelines, etc.).
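One possible shape for that middle layer, reduced to its core: plug-ins are just sources, filters, and sinks wired into a pipeline. This is entirely a sketch of the idea; none of these names come from the project under discussion.

```python
# Sketch of a source/filter/sink plug-in pipeline: sources yield events,
# filters decide which events survive, sinks consume what remains.
def run_pipeline(sources, filters, sinks):
    """Pull events from every source, keep those passing all filters, fan out to sinks."""
    for source in sources:
        for event in source():
            if all(f(event) for f in filters):
                for sink in sinks:
                    sink(event)
```

Each plug-in then only has to implement one of three tiny contracts, and story/timeline logic becomes just another sink or filter rather than code baked into the frontend or the feed layer.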
What's with so many people creating new accounts to promote LLM generated projects? Are they people who don't care about HN and just trying to self promote? Existing users creating new accounts? Lurkers?
It's a bummer because sometimes the headline seems cool, but lately it's always generated blah blah. I don't think I've seen a non-AI README on here in months.
Everyone has their own heuristic, but if it took someone 6 hours or whatever to make some whole big app, my confidence that they will continue to maintain or care about it even next week is pretty much zero... How could they? They've already made three other apps in that time!
I don't care if the code is perfect, all this stuff just has the feel of plastic cutlery, if that makes sense.
“The first Matrix I designed was quite naturally perfect, it was a work of art, flawless, sublime; a triumph equaled only by its monumental failure. The inevitability of its doom is apparent to me now as a consequence of the imperfection inherent in every human being. Thus I redesigned it, based on your history, to more accurately reflect the varying grotesqueries of your nature. However, I was again frustrated by failure.”
Don't give these OSINT quality signals away... that's one of the indicators that lets you, on a first scan, ID (potentially) low-quality content, i.e. fully LLM-generated, where the author doesn't look over the docs or doesn't care about 'details'.
We are sympathetic, but it's still not OK to fulminate on HN, no matter what it's about. It just makes the place miserable. Please flag it or email us (hn@ycombinator.com) if you think a post is unfit for HN.
@dang - HN tearing itself apart over use of AI isn't conducive to a strong cohesive community.
Nobody here is at fault, we're in very trying times - we need to adjust with patience and consideration.
Use of AI to launch rapid prototypes is like breadboarding a new product. It has a place but it's moving so fast that it's hard to lock down at the moment.
No point everyone throwing excess cortisol in this direction. <3
Very true, I see people increasingly polarized on this topic. I also see it in the rollercoaster of votes on my post.
If it wasn't clear, I think we're (as a society) destroying ourselves by believing in all this generative AI crap, even contrary to the evidence of how wrong it often is, the hallucinations, the awful quality etc.
I think we're witnessing the death of intellect: when you discard the evidence in favor of something that only looks right but is nonsense, there's no telling where it will end. If your profession requires you to think and produce output accordingly, but suddenly nobody thinks wrong answers matter, then your profession no longer exists.
Standing up against it and refusing to accept any form of AI anywhere is the only reasonable thing to do. And I don't know if it will make a difference.
This is actually really good. If this kind of app had been built before AI, everyone would have praised it.
It's only slop because anyone can make it now and we're all sick of clones.
The app is good, but the effort required to make it is not impressive at all. I think calling this slop is a misnomer. It's not slop. It's better than what most of us can do, and it was done significantly faster. Calling it slop implies you can do better... which you can't.
I find non-constructive feedback more tiring. People just dismiss things as soon as it has the faintest trace of AI without judging them for what they actually are.
Not saying the AI slop noise isn’t annoying though.
Why are you entitled to receiving constructive feedback on "your" project when you couldn't be bothered to write the project yourself in the first place?
If you want "feedback" of the same quality and effort as the project itself, you can always go ask your beloved AI for feedback instead of wasting precious human time.
> Never mind the fact that AIs of the LLM-variety haven't and aren't going to find solutions to mathematical problems.
This is empirically wrong as of early 2026.
Since Christmas 2025, 15 Erdős problems have been moved from "open" to "solved" on erdosproblems.com, 11 of them crediting AI models. Problems #397, #728, and #729 were solved by GPT-5.2 Pro generating original arguments (not literature lookups), formalized in Lean, and verified by Terence Tao himself. Problem #1026 was solved more or less autonomously by Harmonic's Aristotle model in Lean.
At IMO 2025, three separate systems (Gemini Deep Think, an OpenAI system, and Aristotle) independently achieved gold-medal performance, solving 5 of 6 problems.
DeepSeek-Prover-V2 hits 88.9% on MiniF2F-test. Top models solve 40% of postdoc-level problems on FrontierMath, up from 2%.
Tao's own assessment as of March 2026: AI is "ready for primetime" in math and theoretical physics because it "saves more time than it wastes."
You can disagree about where this is heading, but "haven't and aren't going to" doesn't survive contact with the data.
You got really specific to help prove your point. We were generalising about projects built by AI, not web apps that don't run, which isn't relevant since LLMs can clearly build fully working projects.
Also, how does getting into the specifics of which type of AI can solve mathematical problems help the comparison here?
Man, the overwhelming majority of your comments over the past several months are you whining about AI or being extremely salty about anything remotely AI related. You bash AI content, people who use AI to make cool stuff, AI companies, people who say anything positive about said companies... I really wonder what exactly you think your negative attitude contributes to these discussions.
It contributes far more than yet another low effort AI-generated Show HN on top of the dozens already submitted every day.
If you think you made "cool stuff" with AI, great, enjoy it, but also please keep it to yourself, because anyone else can generate the exact same thing if they want it. You are not special, and you are actively drowning out real human effort and passion.
The first one I got was about how, apparently, for U.S.-Americans health insurance does not cover dental and ocular health. Reading that actually made me feel really bad for U.S.-Americans.
Have you seen these projects?
https://github.com/protomaps/PMTiles
https://github.com/maplibre/martin
Great project, will be contributing!
everything is open source
https://github.com/blue-monads/potato-apps/tree/master/cimpl...
I should finish it but haven't had time.
Nothing wrong with that. Beats a boring corporate dashboard any day. Video game and similar interfaces work for a reason.
No planes etc.
No helpful output in the command window.
Seems fun but doesn't seem to be working.
fastapi==0.103.1
uvicorn==0.23.2
yfinance>=0.2.40
feedparser==6.0.10
legacy-cgi==2.6.1
requests==2.31.0
apscheduler==3.10.3
pydantic==2.11.0
pydantic-settings==2.8.0
playwright>=1.58.0
beautifulsoup4>=4.12.0
sgp4>=2.22
cachetools>=5.3.0
cloudscraper>=1.2.71
reverse_geocoder>=1.5.1
lxml>=5.0
python-dotenv>=1.0
and be on Python 3.13, and it should get you up and running
Archive version...
https://web.archive.org/web/20120112012912/http://henchmansh...
I need a realtime OSINT dashboard for OSINT dashboards.
apples and oranges
How long before we see this UI in some Iran-related news story?
edit: no idea why they deleted the comment but they linked to this video https://www.youtube.com/watch?v=0p8o7AeHDzg
The first LLM to stop using those damn colors for every single transparent modal in existence is going to be a big step forward.
I would like to see a plug-in for EMM (European Media Monitor) integrated, for instance ( https://emm.newsbrief.eu/NewsBrief/alertedition/en/ECnews.ht... ).
And add chronological feeds of govtrack.us along with all politicians social media feeds
Of course it's commoditized and a dime a dozen today, but if this is what HN terms "AI slop," then apparently human SWEs weren't that much better.
https://news.ycombinator.com/newsguidelines.html
If I’m driving an AI towards finding a solution, would it be any different for a software project?
Never mind the fact that AIs of the LLM-variety haven't and aren't going to find solutions to mathematical problems.
So, not autonomously.
Performance is easy: you can craft a test suite that will allow a Ralph loop to iterate until it hits the metrics.
The hard part is style/feel/usability. LLMs still suck at that stuff, and crafting tests to capture those qualities is nigh impossible.
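The "iterate until it hits the metrics" loop reduces to something like the sketch below, where `propose` stands in for an LLM call and `score` for the test suite's metric. Purely illustrative names and structure:

```python
# Sketch of a metric-driven iteration loop: keep regenerating a candidate
# until the scoring function (e.g. a test suite pass rate) clears a bar.
def iterate_until_pass(propose, score, threshold, max_iters=100):
    """Call propose() repeatedly until score(candidate) >= threshold,
    returning the last candidate if the budget runs out."""
    best = None
    for i in range(max_iters):
        candidate = propose(i)
        if score(candidate) >= threshold:
            return candidate
        best = candidate
    return best
```

The catch the comment identifies is the `score` function: for latency or pass rates it's trivial to write, but there's no equivalent function for "feels good to use."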