A quick note on scope: this is not meant to replace existing monitoring or observability tools. It’s designed for those moments when you SSH into a box and need to quickly understand “why is this running” without digging through configs, cron jobs, or service trees manually.
Happy to answer questions or adjust direction based on feedback.
This is very clever. I've often needed to figure out what some running process was actually for (e.g. because it just started consuming a lot of some limited resource) but it never occurred to me that one could have a tool to answer that question. Well done.
---
Edit: Ah, ok, I slightly misunderstood - skimmed the README too quickly. I thought it was also explaining what the process did :D Still a clever tool, but thought it went a step further.
Perhaps you should add that though - combine man page output with a database of known processes that run on various Linux systems and a mechanism for contributing PRs to extend that database...? Unless it's just me that often wants to know "what the fsck does /tmp/hax0r/deeploysketchyd actually do?" :P
Looking up the binary in the package management system would also provide another source of useful information. Of course this would dramatically increase the complexity but would, I think, be useful.
If you could look it up using APT/dpkg first, that would be lovely :-)
I left a different comment, but I think this is good. Your example is 3306 and has a useful breakdown. Not everyone has that port memorized by trauma, and not every mysql instance uses that port.
New tools are always welcome, and having a purpose to explain a purpose seems like a good pitch.
Totally. Just to clarify, witr isn’t limited to ports. You can run it directly on a process too, like `witr mysql`. I used the 3306 example to emphasize this use case.
This is great. Small, trivial suggestion: the gif that loops in the README should pause on the screen w/ the output for a few seconds longer - it disappears (restarts) too quickly to take in all of the output.
Thanks everyone for the feedback on the GIF! I thought it looked good, but when I went back to see it from a user's POV, it was really miserable, haha. I've already switched it to a static image, appreciate everyone's input and suggestions.
I would also argue it shouldn't be a gif. It's nice that it shows the command is fast, I guess, but it's one command that's still visible in the final frame. It's not as bandwidth efficient, and agreed, I can't read it all in time.
Sounds like something I could use, but installing a binary via `curl` doesn't sit right with me. The next problem you have is "explain how this thing was installed on my system", followed by "is it up to date (including security patches)?"
I understand that installing via `curl` isn’t for everyone, but since this is the first release, I intentionally kept it simple. Now that the tool is gaining some traction, I can definitely plan proper packages for future releases. Thanks for your input.
Just to update, witr is currently available on brew and AUR.
deb, rpm and apk packages are also available in the release, and can be run directly via nix without installation.
This is amazing and really useful to me.
Great job.
However, I can’t use it in a production business environment for the same reasons other users mentioned earlier.
A Debian or RPM package would be fantastic.
Thank you, glad you liked it. Since this is the first release, I intentionally kept it simple. Now that the tool is gaining some traction, I can definitely plan proper packages for future releases. Thanks for your input.
> supervised by a human who occasionally knew what he was doing.
This seems to be in jest, but I could be wrong. If it were omitted, or flagged as actual sarcasm, I would feel a lot better about the project overall. As long as you’re auditing the LLM’s outputs and doing a decent code review, I think it’s reasonable to trust this tool during incidents.
I’ll admit I did go straight to the end of the readme to look for this exact statement. I appreciate they chose to disclose.
Nobody who was writing code before LLMs existed "needs" an LLM, but they can still be handy. Procfs parsing trivialities are the kind of thing LLMs are good at, although apparently it still takes a human to say "why not use an existing library that solves this, like https://pkg.go.dev/github.com/prometheus/procfs"
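(For the curious: the one real gotcha in this "triviality" is that the comm field in /proc/&lt;pid&gt;/stat is parenthesized and may itself contain spaces or ')', so naive whitespace splitting breaks; the usual trick is to split on the last ')'. A minimal hand-rolled sketch, not any particular project's code:

```python
def parse_stat(stat_line: str) -> dict:
    """Extract pid, comm, state and ppid from a /proc/<pid>/stat line.

    The comm field (field 2) is wrapped in parentheses and may itself
    contain spaces or ')' -- e.g. "tmux: server" -- so we split on the
    LAST ') ' rather than splitting the whole line on whitespace.
    """
    pid_str, rest = stat_line.split(" (", 1)
    comm, tail = rest.rsplit(") ", 1)
    fields = tail.split()
    return {
        "pid": int(pid_str),
        "comm": comm,
        "state": fields[0],      # field 3 in proc(5)
        "ppid": int(fields[1]),  # field 4: parent PID
    }
```

In real use you'd read `open(f"/proc/{pid}/stat").read()` first; the point is just that even the "trivial" part has a sharp edge worth reviewing in generated code.)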
Sometimes LLMs will give a "why not..." or just mention something related, that's how I found out about https://recoll.org/ and https://www.ventoy.net/ But people should probably more often explicitly prompt them to suggest alternatives before diving in to produce something new...
Neither do you need an IDE, syntax highlighting, or third-party libraries, yet you use all of them.
There's nothing wrong with a software engineer using LLMs as an additional tool in their toolbox. The problem arises when people stop doing software engineering because they believe the LLM is doing the engineering for them.
Every IDE I've used just worked out of the box, be it Visual Studio, Eclipse, or anything using the language server protocol.
Having the ability to have things like method auto-completion, go-to-definition and symbol renaming is a net productivity gain from the minute you start using it and I couldn't imagine this being a controversial take in 2025…
> I don't know what “tarpit” you're talking about.
Really? You don't know software developers that would rather futz around with editor configs and tooling and libraries and etc, etc, all day every day instead of actually shipping the boring code?
I'd not trust any app that parses /proc to obtain process information (for reasons [0]), especially if the machine has been compromised (unless by "incident", the author means something else):
I’m struggling with the utility of this logic. The argument seems to be "because malware can intercept /proc output, any tool relying on it is inherently unreliable."
While that’s theoretically true in a security context, it feels like a 'perfect is the enemy of the good' situation. Unless the author is discussing high-stakes incident response on a compromised system, discarding /proc-based tools for debugging and troubleshooting seems like throwing the baby out with the bathwater. If your environment is so compromised that /proc is lying to you, you've likely moved past standard tooling anyway.
Not to me. It just has to demonstrably work well, which is entirely possible with a developer focused on outcome rather than process (though hopefully they cared a bit about process/architecture too).
What does this mean, for context:
“Git repository name and branch”
Does this mean it detects if something is running from within a git repository folder? Couldn’t find the code that checked this.
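For what it's worth, this kind of detection is usually cheap: walk up from the process's working directory looking for a .git directory, then read .git/HEAD for the branch. A hypothetical sketch of the idea (not witr's actual code):

```python
import os

def git_context(path: str):
    """Walk up from `path` looking for a .git directory; return
    (repo_name, branch) or None. Hypothetical illustration only."""
    path = os.path.abspath(path)
    while True:
        head = os.path.join(path, ".git", "HEAD")
        if os.path.isfile(head):
            with open(head) as f:
                ref = f.read().strip()
            # "ref: refs/heads/main" on a branch; a bare commit
            # hash when HEAD is detached
            branch = ref.rsplit("/", 1)[-1] if ref.startswith("ref:") else ref[:12]
            return os.path.basename(path), branch
        parent = os.path.dirname(path)
        if parent == path:  # reached the filesystem root
            return None
        path = parent
```

So it would only need the process's cwd (or exe path) as a starting point, no git binary required.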
Seems handy, but mostly the PPID is output as the reason for starting. That's 'who dun it', not really _why_ it was started (service file, autorun, execve, etc.).
I see you support multiple output formats, including JSON; that's nice. I'd recommend assuming automation (SSH scripts/commands) and making the default output really easily greppable, or JSON (jq), since it'll be more appealing to parse. This shouldn't reduce readability; for the default output it looks like just removing some line breaks would make it parse more consistently. (Maybe the lines are wrapped, though? Unclear from the image.)
Thanks for the feedback! I’ll look into showing who and why in a more distinct way.
The default output is human-first, hence some extra line breaks, but the JSON flag is already there for automation. We can also see if it can be made more easily greppable.
Great tool! I was looking to convert my decades-old shell script into something a bit more modern and user-friendly, and lo and behold, this appeared right at the same time :) I'll just use yours instead. Well done! :)
This is great. One of those things that just formats and does all the little niggling things you have to do sometimes. I like that it is simple, and doesn't (thank god) need npm or some other package manager.
To quote the top comment: just show a screenshot of its results. If it's useful, it's fine; being fast is just gravy.
> This project was developed with assistance from AI/LLMs (including GitHub Copilot, ChatGPT, and related tools), supervised by a human who occasionally knew what he was doing.
That's the good part of AI. It lowers the effort and knowledge barrier and makes things possible.
`witr` is trying to be a bit different. Here are a few use cases to consider:
- When a process started.
- Which ports a process is using.
- Which user started it.
- From which directory it started.
- env flag to list all the variables attached to the process.
- json flag to use it programmatically.
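To illustrate the env case: /proc/&lt;pid&gt;/environ is just NUL-separated KEY=VALUE pairs, so (assuming that standard format) the parsing amounts to something like this sketch of mine, not witr's actual code:

```python
def parse_environ(raw: bytes) -> dict:
    """Split the NUL-separated contents of /proc/<pid>/environ
    into a {KEY: VALUE} dict. Entries without '=' (and the empty
    trailing entry) are skipped."""
    env = {}
    for entry in raw.split(b"\x00"):
        if b"=" in entry:
            key, _, value = entry.partition(b"=")
            env[key.decode(errors="replace")] = value.decode(errors="replace")
    return env
```

In practice you'd read the bytes from `f"/proc/{pid}/environ"`, which typically requires being root (or the process owner) for other users' processes.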
Worth mentioning: I had Claude Code find a crypto miner on an infected system which had been running for ~5 months undetected. Up-to-date Windows 10 machine. Single prompt saying "This PC is using too much power or fans, investigate". It took minutes, completely cleaned up the infection (I hope), and identified its source. Fantastic use case.
https://www.man7.org/linux/man-pages/man1/whatis.1.html
The gif is adding no value. I already know what typing text into a terminal looks like.
I hope they have a deb package or snap some day.
CGO_ENABLED=0 go build -ldflags "-X main.version=dev -X main.commit=$(git rev-parse --short HEAD) -X 'main.buildDate=$(date +%Y-%m-%d)'" -o witr ./cmd/witr
Call me old-fashioned, but if there's an install.sh, I would hope it would prefer the local src over binaries.
Very cool utility! Simple tools like these keep me glued to the terminal. Thank you!
> This project was developed with assistance from AI/LLMs [...] supervised by a human who occasionally knew what he was doing.
This seems contradictory to me.
Have you tried it? Procfs trivialities are exactly the kind of thing where an LLM will hallucinate something plausible-looking.
Fixing LLM hallucinations takes more work and time than just reading manpages and writing code yourself.
But at the moment I feel like all that sounds suspiciously like actual work.
Your mileage may vary, though. Lots of software engineers love those time and effort tarpits.
You must be working in a different industry.
https://github.com/pranshuparmar/witr/tree/main/internal/lin...
It should be the last option.
[0] https://news.ycombinator.com/item?id=46364057
Do you have any qualms about me making an entry in the AUR for this?
My favorite thing about arch is how insanely quickly AURs pop up for interesting tools.
'Responsibility chain' will become a trendy phrase.
Also, I don't think this approach works correctly: a disowned/nohup process will show up with PPID 1 (systemd), which isn't where it actually came from.
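One heuristic that survives reparenting, assuming a systemd host with cgroup v2, is to look at /proc/&lt;pid&gt;/cgroup: services live under system.slice, while interactive shells (and anything nohup'd from them) sit under user.slice session scopes. A rough sketch of that classification (my heuristic, not something witr claims to do):

```python
def classify_cgroup(cgroup_text: str) -> str:
    """Heuristically classify a process from the contents of its
    /proc/<pid>/cgroup file. Reparenting to PID 1 makes PPID
    unreliable, but the cgroup path usually still records which
    slice the process was launched under."""
    for line in cgroup_text.splitlines():
        # cgroup v2 lines look like "0::/system.slice/nginx.service"
        path = line.rsplit(":", 1)[-1]
        if "/system.slice/" in path:
            return "systemd service"
        if "/user.slice/" in path:
            return "user session (possibly nohup/disowned)"
    return "unknown"
```

It's not bulletproof (containers and odd unit setups muddy it), but it separates "systemd started this" from "an orphan that got adopted by PID 1" better than PPID alone.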
BTW, any chance you would make a macOS version of this?