Probably because you can't actually read anything more than the initial post without hitting a login wall: "Join X now to read replies on this post." (Not to mention "X" is a trash site now.)
A good technical project, but honestly useless in like 90% of scenarios.
You want to use an Nvidia GPU for LLMs? Just buy a basic second-hand PC (the GPU is the primary cost anyway). You want a Mac for a good amount of VRAM? Buy a Mac.
With this proposed solution you have a half-baked system: the GPU is limited by the Thunderbolt port and you don't have access to all of Nvidia's tools and libraries, and on the other hand you have a system that lacks the integration of native solutions like MLX, plus a risk of breakage in future macOS updates.
Nvidia GPUs were usable on Intel Macs, but compatibility got worse over time, and Apple stopped making a Mac Pro with regular PCIe slots in 2013. People then got hopeful about eGPUs, but they have their own caveats on top of macOS only fully working with AMD cards. So I've gotten numb to any news about Mac + GPU. The answer was always to just get a non-Apple PC with PCIe slots instead of giving yourself hoops to jump through.
Until there is official support for Mac coming from nvidia, I don't think anything will happen.
> the hardware wasn't usable on macOS
This eGPU thing is from a third party, if I understand correctly. I don't see why Nvidia would get excited about that. If they cared about the platform, they would have released something already.
The software stack has been ready for Apple Silicon for more than half a decade.
There's a third option that might fit some of the "I'm on a Mac but need CUDA" cases: network-mounting an Nvidia GPU from another machine on the same LAN. The GPU stays wherever it lives (office server, lab machine, a roommate's PC), your Mac runs the CUDA workload locally without any code changes — same PyTorch/CUDA calls, just intercepted by a stub library that forwards them over the local network.
The tradeoff vs. a physical eGPU: no Thunderbolt bandwidth ceiling or cabling, but you do need to be on the same LAN and there's ~4% overhead vs. native. Doesn't help if you need the GPU while traveling, and won't fix the physical macOS driver situation for native GPU access.
Disclosure: I work on GPU Go (tensor-fusion.ai/products/gpu-go), so I'm obviously biased toward this approach — but it genuinely is a different point in the design space from eGPU.
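To make "intercepted by a stub library" concrete, here's a toy sketch of the API-remoting idea in Python. This is not GPU Go's actual protocol or code: the host address, port, and the single matmul op are invented for illustration, and a real system hooks the CUDA driver/runtime API rather than Python-level calls.

    import pickle
    import socket
    import struct

    def _recv_exact(sock, n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed")
            buf += chunk
        return buf

    def send_msg(sock, obj):
        data = pickle.dumps(obj)
        sock.sendall(struct.pack("!I", len(data)) + data)

    def recv_msg(sock):
        (n,) = struct.unpack("!I", _recv_exact(sock, 4))
        return pickle.loads(_recv_exact(sock, n))

    class RemoteGPUStub:
        """Pretends to be a local compute library; forwards calls over the LAN."""
        def __init__(self, host="192.168.1.50", port=9999):  # hypothetical GPU host
            self.addr = (host, port)

        def matmul(self, a, b):
            with socket.create_connection(self.addr) as sock:
                send_msg(sock, ("matmul", a, b))
                return recv_msg(sock)

    def serve(port=9999):
        """Runs on the machine that actually has the GPU."""
        with socket.socket() as srv:
            srv.bind(("0.0.0.0", port))
            srv.listen(1)
            while True:
                conn, _ = srv.accept()
                with conn:
                    op, a, b = recv_msg(conn)
                    if op == "matmul":
                        # a real implementation would launch a CUDA kernel here;
                        # plain Python stands in for it
                        result = [[sum(x * y for x, y in zip(row, col))
                                   for col in zip(*b)] for row in a]
                        send_msg(conn, result)

The calling code just sees RemoteGPUStub().matmul(a, b); everything after that is plumbing, which is why the local code doesn't need changes.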
Evidence that NVIDIA has even been trying? My understanding is that Apple didn’t allow 3rd parties to write graphics drivers past 10.13, but they could’ve done a non-graphics driver like this.
The government doesn’t care? They’re a minority of the market? The vast majority of their computers didn’t have slots to put Nvidia GPUs in, and now none of them do?
An internal PCIe slot can be had in up to x16 PCIe 5.0, whereas Thunderbolt 5 maxes out at x4 PCIe 4.0.
Plus you have another Thunderbolt controller in between the CPU and the hardware, and it takes more energy to push that many bits 1m over a cable vs a few dozen cm over traces.
Also Thunderbolt is trivially disconnected, which in many critical workflows is not a positive, but an opportunity for ill-timed interruptions. Plus I don't have to buy a fucking dongle/dock for a real goddamn slot, make room for external power supplies, etc.
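To put rough numbers on that bandwidth gap, a back-of-envelope sketch (one direction, 128b/130b encoding, ignoring packet and protocol overhead):

    # GB/s per lane = GT/s * (128/130 encoding) / 8 bits per byte
    PER_LANE = {"3.0": 8 * 128 / 130 / 8,
                "4.0": 16 * 128 / 130 / 8,
                "5.0": 32 * 128 / 130 / 8}

    def link_bw(gen, lanes):
        return PER_LANE[gen] * lanes

    print(f"PCIe 5.0 x16 slot:            {link_bw('5.0', 16):4.1f} GB/s")  # ~63.0
    print(f"PCIe 4.0 x4 (TB5 tunnel max): {link_bw('4.0', 4):4.1f} GB/s")   # ~7.9
    print(f"PCIe 3.0 x4 (typical TB3/4):  {link_bw('3.0', 4):4.1f} GB/s")   # ~3.9

So the slot-vs-cable gap is roughly 8x even before Thunderbolt controller overhead.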
It depends how you define the market. In the 2001 Microsoft case [0], the courts ruled Microsoft had a monopoly over the "Intel-based personal computer market".
Apple has a monopoly over the "M-chip" personal computer market. They have a monopoly over the iOS market with the app store. They have a monopoly over the driver market on macOS.
Like, Microsoft was found guilty of exploiting its monopoly by installing IE by default while still allowing other browser engines. On iOS, Apple bundles Safari by default and doesn't allow other browser engines.
If we apply the same standard that found MS a monopoly in the past, then Apple is obviously a monopoly; so at the very least I think it's fair to say that reasonable people can disagree about whether Apple is a monopoly or not.
[0]: https://en.wikipedia.org/wiki/United_States_v._Microsoft_Cor....
I wouldn’t say it is obvious. Apple does not have the monopoly of ARM based PCs. Labeling it as a monopoly of M chips is not fair or accurate when comparing to MS on Intel. It’s also probably relevant that MS was not selling PCs or their own hardware. They had a monopoly on a market where you effectively had to use their software to use the hardware you bought from a different company. Because Apple is selling their own hardware and software as a single product, the consumer is not forced into restricting the hardware they bought by a second company’s policies.
Well “had to use” is a strong phrase here. Linux was already around and you could have used it too with your hardware. I think you can always bend an argument to fit your point.
Didn't know that, but only if they also sold Windows PCs? Like, if a company only sold blank PCs without any Microsoft offering, they wouldn't need to pay MS anything.
> Labeling it as a monopoly of M chips is not fair or accurate when comparing to MS on Intel.
The relevant thing here isn't the chips, it's tying things to the chips, because those would otherwise be separate markets. If you could feasibly buy an iPhone and install Android or Lineage OS on it or use Google Play or F-Droid on iOS then no one would be saying that Apple has a monopoly on operating systems or app stores for iOS since there would actually be alternatives to theirs.
The fake alternative is that you could use a different store by buying a different phone, but this is like saying that if Toyota is the only one who can change the brake pads on a Toyota and Ford is the only one who can change the brake pads on a Ford then there is competition for "brake pads" because when your Toyota needs new brake pads you can just buy a Ford vehicle. It's obvious why this is different than anyone being able to buy third party brake pads for your Toyota from Autozone, right?
> It’s also probably relevant that MS was not selling PCs or their own hardware.
This is the thing that unambiguously should never be relevant. It can't be a real thing that you can avoid being a monopoly by owning more of the supply chain. It's like saying that Microsoft could have avoided being a monopoly by buying Intel and AMD, or buying one of them and then exterminating the other by refusing to put Windows on it. That's a preposterous perverse incentive.
> It can't be a real thing that you can avoid being a monopoly by owning more of the supply chain.
Move the most important aspects of your software to hardware. Hard for macOS, but for a Chromebook-style thing you could write the browser into its own piece of wafer.
Google should pay me to be this evil.
> Move the most important aspects of your software to hardware.
So now you have a piece of silicon with a two year old version of Chrome with seventeen CVEs hard-coded into it, and still have all the same antitrust problems because the device still also has an ordinary general purpose CPU that you're still anti-competitively impeding people from using to run Firefox or Ladybird.
I don't think any of what you're describing are legal "monopolies". I don't have a single Apple product in my life but I'm fairly sure there's nothing I'm prevented from doing because of that.
And back in the "Microsoft has a monopoly on IE6" ruling's days, I did not use Windows or Internet Explorer, and I was not prevented from doing anything because of that. Netscape Navigator on Linux worked fine. Sure, I occasionally hit sites that were broken and only worked in IE, but I also right now frequently hit apps that are "macOS only" (like when Claude Cowork released, or a ton of other YC company's apps).
Microsoft was found guilty, so clearly the bar is not what you're trying to claim.
Go ahead, I'll wait.
Microsoft was found guilty of using their market power to do product bundling, which is illegal. The fact that they had dominance in the market is not what they got popped for, nor is it illegal.
It's possible on the Mac, but it's not easy. Apple uses an immutable system volume on macOS, so you can't just delete the Safari app like you would a user-installed app. To actually delete Safari you need to disable System Integrity Protection and reboot.
There are plenty of Linux distributions that use immutable root volumes. They protect the user in a huge number of ways by preventing the system from getting hosed (either by accident or by malicious unauthorized users / malware). Apple made the decision to do this for their users, and it has prevented a HUGE amount of tech support calls, as well as led to millions of happy users with trouble-free computers.
It also hasn't stopped users from installing Chrome and/or Firefox on their Macs, and millions of ordinary users have.
Apple has not, to my knowledge, required OEMs to bundle Safari with macOS alongside threats to withhold macOS if they don’t comply expressly to put Firefox out of business.
But hey, maybe some weird shit happened during the clone years that I'm not privy to.
Just an example… and yes, I know the EU ruling, but it's still fitting.
The crucially important subtlety here is that Apple requiring developers to use the App Store doesn't leverage an existing monopoly (like what Microsoft had with Windows).
Compare the games console market. Nintendo is allowed to say you have to go through them to sell games for the Switch, ditto Microsoft with the Xbox. Sony doing the same thing with the Playstation is exactly equivalent, but they're approaching the sort of market dominance where it might soon be illegal for them (and them alone) to do that in some markets.
Yes, but that was coupled with other factors like them strongarming vendors, already being hugely dominant on desktops and abusing that position et al. I don't see this as being the same. Maybe my bar here is wrong, but it doesn't change whether they are a monopoly or not.
The issue was never "Microsoft has a monopoly on IE6". That's obviously nonsense.
The monopoly that Microsoft held was the home computer operating system market, first through DOS, then later through Windows. Holding a monopoly like that isn't illegal unto itself. What they were actually found guilty of was unfairly leveraging their monopoly on the OS market to gain the upper hand in a different market (the browser market). The subsequent range of issues we had with IE6 (compatibility, security, etc) was a result of Microsoft succeeding in achieving a monopoly on the browser market through illicit means.
Likewise, "Apple has a monopoly on the App Store" is just the same amount of nonsense. What you could argue is that Apple has a monopoly on the home computer market, or the mobile phone market, and that the way they integrate the App Store should be considered illegal leveraging of that monopoly, but that argument simply doesn't hold water — Microsoft's monopoly on the OS market at the time was pretty much incontrovertible, you simply couldn't walk into a shop and buy a computer running something else (except maybe a Mac at a more specialised place). Today, just about any shop you walk into that sells computers will probably have devices for sale running three different OSes (macOS, Windows, ChromeOS). Any phone place will have iPhones and Android devices, and probably a few more niche options. Actual market share percentage is nowhere near the high 90s that Microsoft saw in its heyday. At most, Apple is the biggest individual competitor in the market, but I don't think it hold an outright majority in any specific product class.
Mind you, I think that there is a good argument to be made that the Apple/Google duopoly on mobile devices does deserve scrutiny, but that's a very different kettle of fish.
You were not prevented from doing anything, but that doesn’t mean others weren’t. For example, OEMs were not allowed to offer any other preinstalled OS as a default option. That effectively killed Be and I’m sure hindered RedHat.
That’s not how monopoly definitions work. That makes about as much sense as saying Nintendo has a monopoly on Nintendo consoles or Ford has a monopoly on Mustangs
> Apple has a monopoly over the "M-chip" personal computer market. They have a monopoly over the iOS market with the app store
When a company is deemed an illegal monopoly, the DoJ basically becomes part of management. Antitrust settlements focus on germane elements, e.g. spin offs. But they also frequently include random terms of political convenience.
I don’t think we want a precedent where companies having a product means they have an automatic monopoly on said product.
More to the point: having a monopoly isn't de facto illegal (just look up natural monopolies), it's using the monopoly power in an anti-competitive way that's illegal. Microsoft wasn't charged with having a monopoly, they were charged because they used that monopoly to exclude Netscape Navigator and force bundling of IE.
It isn't just about monopoly or unfair competition. This can also be covered under consumer rights - the Right to Repair. No OS provider should be allowed to dictate what software you can or cannot run on your own device and/or OS you have paid for.
> It isn't just about monopoly or unfair competition. This can also be covered under consumer rights - the Right to Repair.
If we have a right to repair (we broadly do not, AFAICT), then that doesn't necessarily mean that we have a right to modify and/or add new functionality.
When I repair a widget that has become broken, I merely return it to its previous non-broken state. I might also decide to upgrade it in some capacity as part of this repair process, but the act of repairing doesn't imply upgrades. At all.
> No OS provider should be allowed to dictate what software you can or cannot run on your own device and/or OS you have paid for.
I agree completely, but here we are anyway. We've been here for quite some time.
Courts have already ruled it does in the iOS app store market. You can disagree of course but then you'd be disagreeing with legal experts who know more about anti-trust law than you do.
You can, but that doesn't mean your opinion is as valid as those who study the subject. Otherwise we might as well follow the sovereign citizen believers.
Internet Explorer Mobile is a YouTube client. You're describing a client-server disagreement when the user is talking about an entirely client-based conflict.
> That's normal behavior when your server is being reverse-engineered or abused. Video bandwidth is not free.
Microsoft rewrote their Windows Phone native client to pass through Google's ads. Google still blocked it.
Was it normal behavior when Google blocked Amazon Fire devices from connecting to YouTube with a web browser during the Google/Amazon corporate spat?
To be fair, Google did back down almost immediately when the tech press picked up on it.
Not allowing a native client for your monopoly market share video service on Amazon devices while also blocking Amazon's web browser on those devices is making things a bit too obvious.
Again - servers are always offered at-will. If the service provider wants to boot you out, their TOS usually won't give you the right to renegotiate service.
Clients are not offered at-will, they either work or they don't. Nvidia ships AArch64 UNIX drivers, Apple is the one that neglects their UNIX clients.
Using your monopoly market share video service as a weapon against companies offering platforms that compete with your own is textbook antitrust behavior.
Google used YouTube as a weapon against both Windows Phone and devices running Amazon's Fire fork of Android.
A "monopoly" "service"? What have they monopolized, laziness? It's not the App Store, you can go replace it with DailyMotion at your earliest convenience.
You're still retreading why your original comment was not at all relevant to the critique being made. We have precedent for prosecuting monopolistic behavior in America, but it doesn't encompass services, even when they're mandatory to use the client. There is, incidentally, precedent for prosecuting a platform that arbitrarily prevents competitors from shipping a runtime that competes with the default OS.
When your product has a monopoly market share, you don't get to use it as a weapon against competitors in other markets, even if you claim there is some imaginary exception to antitrust law involving servers.
You don't get to demand that the server support your endpoint, period. There is no precedent for that ever happening in US antitrust law, because it's not anticompetitive.
If you think otherwise, make your case to Google's lawyers instead of spinning hypothetical case law.
That's beside the point; you don't own the server. You cannot expect the server to work forever, or demand a right to access it.
You do own the client though. In the example upstream, the failure to support macOS clients can't be blamed on Nvidia because they already wrote AArch64 UNIX support.
Yeah I'm pretty sure Nvidia just doesn't care to make Mac drivers. For years there was no SIP, Apple sold the Mac Pro which could take Nvidia GPUs, but you basically couldn't use Nvidia because of how bad and outdated the drivers were. I had a GTX 650 in my Mac Pro for a while, it was borderline unusable.
As more people carry ARM laptops and keep the GPU somewhere else, I think the interesting UX question becomes whether the GPU can "follow" the local workflow instead of forcing the whole workflow to move to the GPU host. That's the problem we've been looking at with GPUGo / TensorFusion: local-first dev flow, remote GPU access when needed. Curious whether people here mostly want true attached-eGPU semantics, or just the lowest-friction way to access remote compute from a Mac without turning everything into a remote desktop / VM workflow.
Remote GPU compute payloads have been around a lot longer than LLMs; they're just few and far between.
Folding@home and other such asynchronous "get this packet of work done and get back to me" style operations rarely care much about latency.
Remote transcoding efforts can usually adjust whatever buffer is needed to cover huge latency gaps, and a lot of sim and render suites can do remote work regardless of machine-to-machine latency.
I just sort of figure the industry will trend more async when latency becomes a bigger issue than compute. Won't work in some places, but I think we tend to avoid thinking that way right now due to a lack of real need to do so; but latency is one of those numbers that trends down slowly.
Such a shame both companies are too big on vanity to make great things happen. Imagine being able to run Mac hardware with Nvidia on Linux. It's all there, and closed walls are what's keeping it from happening. That's what we as customers lose when we forgo control of what we purchase to those who sold us the goods.
Unfortunately, Apple still won't release iMessage for Android or Linux (unlike every other messenger platform, like WhatsApp, Telegram, WeChat, Microsoft Teams, etc., which are all cross-platform).
Because of that, you need an Apple device around to be able to deal with iMessage users.
Then it would be more correct to say that we "lose when we forgo control" when our friends push iMessage on us.
In my bubble literally no one uses iMessage. The more tech-savvy use Signal/GroupMe, the less tech-savvy use SMS/email. Family use Signal to chat with me, as I can steer my own family a little.
Also, I sometimes open the Facebook web interface, but any attempt to offer WhatsApp I answer with "sorry, no Facebook apps on my phone, no Instagram/Messenger either". Never had any issues with that. Although I hear some countries are very dependent on Facebook, so it might be hard there.
By the way, I noticed it's actually not hard to use multiple messengers; sometimes it's even faster to find a contact, as you always remember which app to look at in recents.
UPDATE: My point is that you can also influence your life and how people communicate with you. Up to a point of course, but it's not like you can do nothing about it.
My social circle is the complete opposite. We're all on iMessage (except for one group of extended family on Messenger), and we like it that way. I was the last holdout for years while I went from Android -> Windows Phone -> Android -> iPhone.
But you don't need an Apple device to contact iMessage users. Every iMessage ID is a phone number (SMS/RCS) or email.
You've listed a whole bunch of alternatives available to you, but for some reason you demand that Apple change its unique offering into just another one of those for you. Why? Is that not a completely enforced monoculture?
Apple has always been off to the side, doing their own thing, and for some reason that fact utterly enrages people. They demand that Apple become just like everyone else. But we already have everyone else! And in every single field Apple is in, there is more of everyone else than there is of Apple.
Have you considered people like Apple products precisely because they're not like everything else? That making Apple indistinguishable from Facebook or Google is no victory, but a significant loss for customer choice?
That is no longer true. https://bluebubbles.app/
Well… it’s not exactly no longer true, you do need an Apple VM but it doesn’t have to be the end device.
I don't understand the logic for downvotes.
We vote with our wallets.
When I could not upgrade the RAM on my personal Dell machine, I asked for a Frame.work at my new job. As my Intel-based FW at work had thermal throttling problems, for my next personal purchase I got an AMD one. As Ubuntu had shady practices, I installed Fedora; as GNOME forced UX choices I did not want, I used KDE. As I wanted my machine to be even more stable, I use an immutable spin.
The machine I'm using now represents my choices and matches what matters to me, and works closer to perfectly than any of my machines in the past.
And yes, I have worked with Macs, and no, the UX and the entire tyranny of the Apple ecosystem was not something I could live with.
And yes, this machine is fast, predictable, a joy to work with, and a tool I control, not a tool to control me. If something happens to it, I can order the part rather than putting that money into a new machine, and keep using my laptop.
"We vote with our wallet, so don't complain" is a bad take in my opinion.
Like, for phones, I want a phone which runs Linux, has NFC support, and also has iMessage so my friend who only communicates with blue-bubbles and will never message a green-bubble will still talk to me. I also want it to have regulatory approval in the country I live in so I can legally use it to make calls.
Because Apple has closed the iMessage ecosystem such that a Linux phone can't use it, such a device is impossible. I cannot vote for it.
As such, I will complain about every phone I own for the foreseeable future.
> Like, for phones, I want a phone which runs Linux, has NFC support, and also has iMessage so my friend who only communicates with blue-bubbles and will never message a green-bubble will still talk to me. I also want it to have regulatory approval in the country I live in so I can legally use it to make calls.
I actually agree with you, but I also suggest getting better friends.
What is the blue and green bubble thing? I've never used an iPhone so don't understand the term. Does it classify messages as iMessage and non-iMessage?
iOS has two built-in messaging apps. Like all phones, they have SMS built in, and hardly anyone uses it for anything except SMS 2FA codes.
And then they have iMessage, aka blue bubbles, which is kinda like Signal or WhatsApp or Telegram. Everyone in Europe uses WhatsApp, and a lot of people in the US use iMessage. If you don't use WhatsApp in Europe, you'll have a rough time communicating with some social groups, and the same goes for iMessage in the US.
However, unlike every other messenger app I can think of, iMessage isn't cross platform.
Also unlike every other messenger I can think of, it comes installed by default and, for some reason, uses the same app as the SMS app. It also claims encryption but randomly switches to SMS and breaks it, making it obviously the least secure of all the apps (and it also backs up your keys to iCloud in a way Apple can access by default, but that's neither here nor there).
Blue bubbles are when iMessage is acting as the iMessage app, and has encryption and can use features like sending high resolution photos, location, invites, and a bunch of other apple-specific features.
Green bubbles are when the iMessage app has converted itself into the SMS and RCS app, and has a reduced feature set, like being unable to remove people from group chats.
It's frankly a quite confusing decision to have two quite different apps built into the same app and indicate which feature set is active based on the color of a UI element. I think everyone would prefer if Apple split it into the 'Messages' app (SMS + RCS) and an optional 'iMessage' app which doesn't come installed by default, but which you can download from the App Store. I'm frankly surprised the EU hasn't forced Apple to show a prompt for "default messenger app" on startup with the options being "WhatsApp", "iMessage", etc., like they do for the default browser.
> I think everyone would prefer if Apple split it into the 'Messages' app (SMS + RCS) and an optional 'iMessage' app which doesn't come installed by default, but which you can download from the App Store.
No, I don't think anyone would prefer that. People on iOS like iMessage, not SMS + RCS. Nobody is confused by it; they all know that green bubbles mean you're texting someone who doesn't have an iPhone. It works seamlessly, it's just annoying when you want to have a long conversation with a friend on Android because it doesn't have any of the nice iMessage extras available – that's why people don't like green bubbles.
No, Apple has one built-in messaging app: Messages. It switches between SMS, RCS, and iMessage automatically depending on the capabilities of the devices.
I followed the instructions link and read the scripts...although the TinyGPU app is not in source form on GitHub, this looks to me like the GPU is passed into the Linux VM underneath to use the real driver and then somehow passed back out to the Mac (which might be what the TinyGrad team actually got approved).
Or I could have totally misunderstood the role of Docker in this.
My read of everything is that they are using Docker for NVIDIA GPUs for the sake of "how do you compile code to target the GPU"; for AMD they're just compiling their own LLVM with the appropriate target on macOS.
Well, to be fair, the whole shebang is from a completely different company that has its own ML library and such, so that isn't that surprising. Although I agree that some CUDA shim or similar would be a lot more interesting, getting to the point of running inference and training with your very own library is pretty dope already.
Woah, this is exciting. I'm traveling but I have a 5090 lying around at home. I'm eager to give it a go. Docs are here: https://docs.tinygrad.org/tinygpu/
I hope it'll work on an M4 Mac Mini. Does anyone know what hardware to get? You'll need a full ATX PSU to supply power, right? And then tinygrad can do LLM inference on it?
I own one of these; the cage is just a piece of plastic. Anyway, I don't think $80 is that big of a difference here. I can't really afford a $4k Nvidia GPU. Intel is my only hope.
Almost twice the price and simply more accurate info regarding price and features.
Brand is TH3P4G3. Egpu.io has decent eGPU comparisons.
I wouldn't want all that dust in my GPU fans; I'd rather have it near my case fans. I also don't like it given I've got cats and want to be able to store/box hardware. I do use the eGPU in the fuse box. If I had a larger house, I'd use a server rack.
I was recently in the market for an eGPU but for a different niche (not eGPU/eNPU/eTPU but getting an HBA via TB to connect an LTO-6 drive via SAS). I went for a Sonnet instead, very low profile and small. I also bought an Asus one; slightly bigger, came with more fans, but TB4 instead of the Sonnet's TB3. The cages are aluminium. Those eGPUs were second hand (also without warranty, but quicker shipping than waiting out Chinese New Year) and came with PSUs, which you'd otherwise have to buy separately. For me no biggie, as I've got a decent PSU lying around.
I used a Sonnet eGPU box on a similarly equipped Dell XPS, and it had so many little issues that it turned me off eGPUs over Thunderbolt entirely.
Sleep broke across all OSes; if sleep didn't break, the GPU wouldn't get powered on with the laptop. If one side lost power during an outage (the GPU side; the laptop has a battery...) it would require an elaborate voodoo ritual of cycling both of them on and off until they 'caught' each other. It would cause the rest of the USB ports on the laptop to reset and drop comms with peripherals once or twice a week, necessitating a rain-dance restart.
When OCuLink first started showing up I gave up altogether and just said "fuck it, I'll try it again in a few years."
It worked fine when it worked fine, but the patches in between were not worth my time.
I blame Dell and their thunderbolt controllers entirely for the issue, but it left such a bad taste in my mouth that I would have a really tough time buying the newest Sonnet box to try it out. Now I have a desktop machine and don't fall into that market.
I ended up throwing that card (an rtx 3xxx) into a dell rackmount and have been happy with that card ever since.
To your point though: the non-proprietary PSU was a nice feature, but in reality the PCIe->Thunderbolt expansion card (or whichever interface you're using) can be bought on Alibaba for like 20-30 bucks, and the PSU is worth another 30-40 bucks for a generic white-label 650W. I think if I did it over I'd just do that and make an enclosure, but the Sonnet boxes aren't too bad a value by the numbers.
Maybe I'm lacking imagination. But how will a GPU with small-ish but fast VRAM and great compute augment a Mac with large but slow VRAM and weak compute? The interconnect isn't powerful enough to change layers on the GPU rapidly, I guess?
> But how will a GPU with small-ish but fast VRAM and great compute augment a Mac with large but slow VRAM and weak compute?
It would work just like a discrete GPU when doing CPU+GPU inference: you'd run a few shared layers on the discrete GPU and place the rest in unified memory. You'd want to minimize CPU/GPU transfers even more than usual, since a Thunderbolt connection only gives you equivalent throughput to PCIe 4.0 x4.
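As a sketch of what that layer placement looks like in practice (hypothetical layer count and split point; PyTorch is used purely for illustration, "cuda" stands in for whatever device the eGPU exposes, and the snippet falls back to CPU so it stays runnable):

    import torch
    import torch.nn as nn

    gpu = "cuda" if torch.cuda.is_available() else "cpu"  # the eGPU, when present
    cpu = "cpu"                                           # unified-memory side

    layers = nn.ModuleList([nn.Linear(512, 512) for _ in range(8)])
    SPLIT = 3  # however many layers fit in the discrete GPU's VRAM
    for i, layer in enumerate(layers):
        layer.to(gpu if i < SPLIT else cpu)

    def forward(x):
        x = x.to(gpu)
        for i, layer in enumerate(layers):
            if i == SPLIT:
                x = x.to(cpu)  # the big transfer you want to cross only once
            x = layer(x)
        return x

    out = forward(torch.randn(1, 512))
    print(out.shape)

The narrow x4 link only hurts at the device boundary, so the goal is for activations to cross it once per forward pass, not once per layer.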
My Mini is actually the smallest model, so it has "small but slow VRAM" (haha!); the reason I want the GPU is for the smaller Gemmas or Qwens. Realistically, I'll probably run on an RTX 6000 Pro, but this might be fun for home.
“Lying around”. I’ve got an unopened 5090 in a box that I know will suffer the same fate, so I’m sending it back. So privileged to have the money to impulse buy a 5090 and yet no time to actually do anything with it.
I'm writing scientific software that has components (molecular dynamics) that are much faster on GPU. I'm using CUDA only, as it's the easiest to code for. I'd assumed this meant no-go on ARM Macs. Does this news make that false?
No, MLX is nothing like a CUDA translation layer at all. It'd be more accurate to describe MLX as a NumPy translation layer; it lets you write high-level code dealing with NumPy-style arrays, and under the hood it will use a Metal GPU or CUDA GPU for execution. It doesn't translate existing CUDA code to run on non-CUDA devices.
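To make that concrete, a minimal MLX example (assuming the mlx package on an Apple silicon Mac; this is new NumPy-style code you would write, not translated CUDA, so existing CUDA kernels like the parent's would need porting):

    import mlx.core as mx

    a = mx.random.normal((1024, 1024))
    b = mx.random.normal((1024, 1024))
    c = (a @ b).sum()  # NumPy-style expression, recorded lazily
    mx.eval(c)         # the computation actually runs here, on the default device
    print(c.item())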
My main thought is: would this allow me to speed up prompt processing for large MoE models? That is the real bottleneck on the M3 Ultra. The tokens per second are pretty good.
tinygrad does have pretty neat support for sharding things across various devices relatively easily, so that'd help. I'm guessing you'd hit the bandwidth ceiling transferring stuff back and forth, though.
Doesn't Apple support the major standard device categories: NVMe, XHCI, AHCI, and such, like most operating systems do? The challenges are all for hardware that needs a vendor-specific driver instead of conforming to a standard driver interface (which doesn't always exist). Lots of those can be supported with userspace drivers, which can be supplied by third parties instead of needing to be written by Apple.
> Using proprietary connectors.
Not for the past decade; it's been no connectors for most products, but standard PCIe connectors for the Mac Pro, and NVMe over Thunderbolt works fine.
>> XHCI
> Not on Lightning.
Again, not relevant to any recent products. And I'm pretty sure you're misunderstanding what XHCI is if you think anything with a Lightning connector is relevant here (XHCI is not USB 3.0). You can connect a Thunderbolt dock that includes an XHCI USB host controller and it works out of the box with no further driver or software support. I assume you can do the same with a USB controller card in a Mac Pro.
>> AHCI
> How exactly would Apple not support AHCI?
This might be another case of you not understanding what you're talking about and being lost in an entirely different layer of the protocol stack. Not supporting AHCI would be easy, since they're no longer selling any products that use SATA, and PCIe SSDs that use AHCI instead of NVMe died out a decade ago. But as far as I know, a SATA controller card at the far end of a Thunderbolt link or in a Mac Pro PCIe slot should still work, if the SATA controller uses AHCI instead of something proprietary, as is typical for SAS controllers.
> Why does Apple need to make the drivers in a walled garden?
Isn't that the whole point of the walled garden, that they approve things? How could they build and maintain a walled garden without making things like this pass through them?
I think the OP is asking why Apple is enclosing macs in a walled garden when that concept is generally associated with iPhones, not general-purpose computers.
Macs and PCs are fundamentally different. Their architectures have always been distinct though the Intel Mac era has somewhat blurred the line.
Modern Macs are Macintosh descendants, while PCs are IBM PC descendants (technically they're 'PC clones', but since the IBM PC no longer exists, the 'clone' part has been dropped).
And with Apple silicon Macs the two are again very different. For example, Macs don't use NVMe; they use raw NAND (the controller is integrated into the SoC). And they don't use UEFI or a BIOS, but a combination of Boot ROM, LLB, and iBoot.
> Why does Apple need to make the drivers in a walled garden?
Because third party drivers usually are utter dogshit. That's how Apple managed to get double the battery life time even in the Intel era over comparable Windows based offerings.
Well, for starters, PCIe 5.0 x16 would do something like 60 GB/s each way, while Thunderbolt 4 does 4 GB/s each way and TB5 does 8 GB/s each way. If you don't actually hit the bandwidth limits, it obviously matters less. Whether you'd notice a large difference would depend heavily on the type of workload.
I hooked up a Radeon RX 9060 XT to my Fedora KDE laptop (Yoga Pro 7 14ASP9) using a Razer Core X Chroma (40 Gbps), and the performance when using the eGPU was very similar to using the Radeon 880M built into the laptop's Ryzen 9 365 APU.
So at least with my setup, performance is not great at all.
On paper, TB4 is capable of pushing 5 GB/s, which is somewhere between x4 and x8 PCIe 3.0, while an x16 PCIe 4.0 link can do ~31.5 GB/s.
For gaming, lots of things can affect Thunderbolt eGPU performance.
First, you need to connect the display directly to the eGPU rather than to the laptop.
Second, you need to make sure you have enough VRAM to minimize texture streaming during gameplay.
Third, you'll typically see better performance in terms of higher settings/resolutions vs higher framerates at lower settings/resolutions.
Fourth, depending on your system, you may be bottlenecked by other peripherals sharing PCH lanes with the Thunderbolt connection.
Finally, depending on the Thunderbolt version, PCIe bandwidth can be significantly lower than the advertised bandwidth of the Thunderbolt link. For example, while Thunderbolt 3 advertises 40 Gbps, and typically connects via x4 PCIe 3.0 (~32 Gbps), for whatever reason it imposes a 22 Gbps cap on PCIe data over the Thunderbolt link.
Even taking all this into account, you'll still see a significant performance drop on a current-gen GPU when running over Thunderbolt, though I'd still expect a useful performance improvement over integrated graphics in most cases (though not necessarily worth the cost of the eGPU enclosure vs just buying a cheap used minitower PC on eBay and gaming on that instead of a laptop).
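Putting numbers on that Thunderbolt 3 point (assuming the commonly cited ~22 Gbps PCIe data cap; exact figures vary by controller and firmware):

    advertised = 40 / 8  # 5.00 GB/s if the whole 40 Gbps link carried PCIe data
    tunnel     = 32 / 8  # 4.00 GB/s for the x4 PCIe 3.0 tunnel
    capped     = 22 / 8  # 2.75 GB/s actually available to the GPU under the cap
    print(f"advertised {advertised:.2f}, tunnel {tunnel:.2f}, usable ~{capped:.2f} GB/s")

So you're working with roughly half the headline number before any of the other factors above kick in.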
So you're just replying to the headline, not the actual article. Useful.
Apple, just like Microsoft, has a driver signing process because drivers have basically system-wide access. There is no evidence that Nvidia has tried for years to get eGPU drivers signed, but now someone did, and Apple signed it. So?
And you could always, precisely as the article states in the very first paragraph, disable System Integrity Protection if you want to run drivers that aren't signed.
They... do? Or rather, they built a system where they don't need to; macs happily run Linux on bare metal or VMs. (Whether Linux supports Apple hardware well is another matter)
The opportunity cost of Apple refusing to sign Nvidia's OEM AArch64 drivers is probably reaching the trillion-dollar mark, now that Nvidia and ARM have their own server hardware.
Apple got out of the server game long before they adopted aarch64, so that's a trillion worth of server hardware they never would have sold anyway. And probably not actually a trillion.
Almost everyone including myself had MacBook Pros at my last place of work.
If Apple was in the high-end server market, I see no reason why the company I was working for would not be running macOS on Apple hardware as servers, instead of the fleet of Linux based servers they had.
Why wait? You can go run macOS as a server right now. It will take you a few hours to get Docker working, disable mdworker_shared() and turn off SIP, install a package manager and the Xcode utilities, and finally configure macOS to run as a headless UNIX box, but it's attainable.
Despite how easy Apple makes it, nobody is really using Macs as a server in production. Apple[0] is not using them as a server in production. They would need a radically different strategy to replace Linux, because their efforts on macOS still haven't replaced Windows.
https://xcancel.com/__tinygrad__/status/2039213719155310736
Redirect: https://x.com/*
to: https://xcancel.com/$1
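The same rule written out as a regex rewrite, shown in Python since every redirect extension has its own config syntax (the pattern here is just illustrative):

    import re

    pattern = r"^https?://(?:www\.)?x\.com/(.*)$"
    target  = r"https://xcancel.com/\1"

    url = "https://x.com/__tinygrad__/status/2039213719155310736"
    print(re.sub(pattern, target, url))
    # -> https://xcancel.com/__tinygrad__/status/2039213719155310736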
At that point you're making more work for yourself than debugging over SSH.
Before that was the pre-trash-can Mac Pro, 2006-2012. So that was canceled most of a decade before the 2019 model.
High bandwidth PCIe hasn’t been a thing in Apple world for most of 15 years.
But the M series are an Apple product line designed by Apple under an ARM license and produced on contract by TSMC for use in other Apple products.
Don’t assume the facts from another case automatically apply in other cases.
Or as Justice Jackson once put it: “Other cases presenting different allegations and different records may lead to different conclusions”
Intel sold chips to anyone. Anyone could make Intel computers.
Apple does not sell chips to anyone. Nobody else can make M-series computers.
Your argument is basically that Ford has a monopoly on selling Mustangs because Standard Oil had a monopoly on selling oil.
Apple's decision is not constrained by server logic or ballooning costs; it is entirely a client-based policy to not sign CUDA drivers.
This is as basic as antitrust law gets.
Since that’s definitely a big enough use case all on its own, I wonder if such a product should really just double down on LLMs.
Your green bubble? =P
Thanks to Apple co-opting phone numbers, there's literally no need to ever have iMessage for anyone
Takes a standard PSU. However, Mac Minis don't have OCuLink, so you might be a bit limited by whatever USB-C can do.
Now if Intel can get their Arc drivers in order, we'll see some real budget fun.
https://www.newegg.com/intel-arc-pro-b70-32gb-graphics-card/...
32 GB of VRAM for $1,000, plus a $500 Mac Mini.
Article mentions: "Apple finally approved our driver for both AMD and NVIDIA"
Does not mention Intel (GPUs). Select AMD GPUs work on macOS, but...
Macs (both Intel and ARM) support TB, but eGPU only work on Intel Macs, and basically only with AMD.
Good news is for medium end gaming choices are solid, and CUDA works on AMD these days.
I own one of these, the cage is just a piece of plastic. Anyway, I don't think 80$ is that big of a difference here. I can't really afford a 4k Nvidia GPU. Intel is my only hope.
Brand is TH3P4G3. Egpu.io has decent eGPU comparisons.
I wouldn't want all that dust in my GPU fans, prefer that near my case fans. I also don't like it given I got cats and want to store/box hw. I do use the eGPU in the fuse box. If I had a larger house, I'd use a server rack.
I was recently in the market for an eGPU enclosure, but for a different niche (not eGPU/eNPU/eTPU, but connecting an LTO-6 drive over SAS via a Thunderbolt-attached HBA). I went for a Sonnet: very low profile and small. I also bought an Asus one, slightly bigger, which came with more fans but TB4 instead of the Sonnet's TB3. The cages are aluminium. Both were second-hand (also without warranty, but quicker shipping than waiting out Chinese New Year) and came with a PSU, which you'd otherwise have to buy separately. For me no biggie, as I have a decent PSU lying around.
One nice thing about the Sonnet eGPU boxes is that they use standard SFX PSUs that are inexpensive to replace if they fail.
For LTO, I'm cheap, and iSCSI over a dedicated 2.5 Gbps Ethernet link is fast enough for my aging FC LTO-5 drives and spinning-rust backup disks (LTO-5 tops out around 140 MB/s native, and 2.5 GbE gives you roughly 300 MB/s).
Sleep broke across all OSes, and even when sleep didn't break, the GPU wouldn't get powered on with the laptop. If one side lost power during an outage (the GPU side; the laptop has a battery...), it would take an elaborate voodoo ritual of cycling both on and off until they 'caught' each other. It would also cause the rest of the USB ports on the laptop to reset and drop comms with peripherals once or twice a week, necessitating a rain-dance restart.
When OCuLink first started showing up, I gave up altogether and just said "fuck it, I'll try it again in a few years."
It worked fine when it worked fine, but the patches in between were not worth my time.
I blame Dell and their Thunderbolt controllers entirely for the issue, but it left such a bad taste in my mouth that I would have a really tough time buying the newest Sonnet box to try it out. Now I have a desktop machine and don't fall into that market.
I ended up throwing that card (an RTX 3xxx) into a Dell rackmount and have been happy with it ever since.
To your point though: the non-proprietary PSU was a nice feature, but in reality the PCIe-to-Thunderbolt expansion card (or whichever interface you're using) can be bought on Alibaba for like 20-30 bucks, and a generic white-label 650 W PSU is worth another 30-40. I think if I did it over I'd just do that and build my own enclosure, but the Sonnet boxes aren't too bad a value by the numbers.
It would work just like a discrete GPU when doing CPU+GPU inference: you'd run a few shared layers on the discrete GPU and place the rest in unified memory. You'd want to minimize CPU/GPU transfers even more than usual, since a Thunderbolt connection only gives you equivalent throughput to PCIe 4.0 x4.
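As a rough sketch of that placement (toy PyTorch code, not anything from the article; the layer count, sizes, and the "first N layers on the GPU" split are all invented for illustration):

    # Toy split-placement inference: the first few layers live on the
    # discrete (e)GPU, the rest stay on the CPU / in unified memory.
    import torch
    import torch.nn as nn

    N_GPU_LAYERS = 4  # hypothetical: however many layers fit in the eGPU's VRAM

    layers = nn.ModuleList([nn.Linear(4096, 4096) for _ in range(32)])
    for i, layer in enumerate(layers):
        layer.to("cuda" if i < N_GPU_LAYERS else "cpu")

    def forward(x):
        for i, layer in enumerate(layers):
            # .to() is a no-op when x is already on the right device, so a
            # contiguous GPU block costs exactly one trip over the link.
            x = x.to("cuda" if i < N_GPU_LAYERS else "cpu")
            x = layer(x)
        return x

    out = forward(torch.randn(1, 4096, device="cuda"))

This is essentially the split that llama.cpp exposes through its --n-gpu-layers option.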
How big a bottleneck is Thunderbolt 5 compared to an SSD? Is the 120 Gbps mode only available when linked to a monitor?
That's why all the projects streaming models into the GPU from an SSD popped up recently.
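In spirit those projects do something like the following (a toy sketch, not any particular project's code; the file name, flat float16 layout, and matmul-only "layers" are all invented):

    # Toy SSD-streaming inference: memory-map a (pre-existing) weight file
    # and page one layer at a time into VRAM. SSD read speed and the
    # PCIe/Thunderbolt link both gate tokens/second in this setup.
    import numpy as np
    import torch

    N_LAYERS, DIM = 32, 4096
    weights = np.memmap("weights.bin", dtype=np.float16, mode="r",
                        shape=(N_LAYERS, DIM, DIM))  # hypothetical layout

    def forward(x: torch.Tensor) -> torch.Tensor:
        for i in range(N_LAYERS):
            # np.array() forces the layer off disk; .to("cuda") ships it over.
            w = torch.from_numpy(np.array(weights[i])).to("cuda")
            x = x @ w.T
            del w  # free the VRAM for the next layer
        return x

    out = forward(torch.randn(1, DIM, dtype=torch.float16, device="cuda"))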
Something like eNPU or eTPU seems more appropriate here.
For well over a decade, Apple has not allowed newer Nvidia GPUs (by not allowing their drivers).
A seven-year-old GPU (e.g. Vega 64, GTX 1080 Ti) can still process more tokens/second than most Apple Silicon (particularly the lower-end chips).
As discussed elsewhere, Apple Max/Ultra processors are best suited for huge models (but are not as fast as, e.g., an RTX 5090).
Using proprietary connectors.
> XHCI
Not on Lightning.
> AHCI
How exactly would Apple not support AHCI?
> Using proprietary connectors.
Not for the past decade; it's been no connectors for most products, but standard PCIe connectors for the Mac Pro, and NVMe over Thunderbolt works fine.
>> XHCI
> Not on Lightning.
Again, not relevant to any recent products. And I'm pretty sure you're misunderstanding what XHCI is if you think anything with a Lightning connector is relevant here (XHCI is a host controller interface, not USB 3.0 itself). You can connect a Thunderbolt dock that includes an XHCI USB host controller and it works out of the box with no further driver or software support. I assume you can do the same with a USB controller card in a Mac Pro.
>> AHCI
> How exactly would Apple not support AHCI?
This might be another case of you not understanding what you're talking about and being lost in an entirely different layer of the protocol stack. Not supporting AHCI would be easy, since Apple no longer sells any products that use SATA, and PCIe SSDs that use AHCI instead of NVMe died out a decade ago. But as far as I know, a SATA controller card at the far end of a Thunderbolt link or in a Mac Pro PCIe slot should still work, provided the SATA controller uses AHCI rather than something proprietary, as is typical for SAS controllers.
For the same reason that Microsoft requires Windows driver signing?
Drivers run with root permissions.
Isn't that the whole point of the walled garden, that they approve things? How could they aim for and realize a walled garden without making things like that pass through them?
Modern Macs are Macintosh descendants, while PCs are IBM PC descendants (technically their real name is 'PC clone', but since the IBM PC doesn't exist anymore, the 'clone' part has been dropped).
And with Apple Silicon Macs the two are again very different: for example, Macs don't use NVMe drives, just raw NAND (the controller is integrated into the SoC), and they don't use UEFI or BIOS but a combination of Boot ROM, LLB, and iBoot.
Because third-party drivers are usually utter dogshit. That's how Apple managed to get double the battery life of comparable Windows-based offerings, even in the Intel era.
https://www.convertunits.com/from/Gbps/to/GB/s
I hooked up a Radeon RX 9060 XT to my Fedora KDE laptop (Yoga Pro 7 14ASP9) using a Razer Core X Chroma (40 Gbps), and the performance when using the eGPU was very similar to using the Radeon 880M built into the laptop's Ryzen 9 365 APU.
So at least with my setup, performance is not great at all.
On paper, TB4 is capable of pushing 5 GB/s, which lands somewhere between a 4x and an 8x PCIe 3.0 link, while a 16x PCIe 4.0 link can do ~31.5 GB/s.
For numbers about all PCIe generations and lane counts, see the "History and revisions" section here: https://en.wikipedia.org/wiki/PCI_Express
Edit to add: the performance I measured is in gaming workloads, not compute
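To put the bandwidth numbers side by side (the per-lane figures are the post-encoding-overhead values from the Wikipedia table above):

    # Per-direction PCIe bandwidth in GB/s, after encoding overhead
    # (per-lane figures from the Wikipedia table linked above).
    GB_PER_S_PER_LANE = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

    def pcie_bandwidth(gen: str, lanes: int) -> float:
        return GB_PER_S_PER_LANE[gen] * lanes

    print(pcie_bandwidth("3.0", 4))   # ~3.9 GB/s: typical TB3/TB4 PCIe tunnel
    print(pcie_bandwidth("3.0", 8))   # ~7.9 GB/s: TB4's ~5 GB/s lands in between
    print(pcie_bandwidth("4.0", 16))  # ~31.5 GB/s: a full-size desktop GPU slot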
First, you need to connect the display directly to the eGPU rather than to the laptop.
Second, you need to make sure you have enough VRAM to minimize texture streaming during gameplay.
Third, you'll typically see better performance in terms of higher settings/resolutions vs higher framerates at lower settings/resolutions.
Fourth, depending on your system, you may be bottlenecked by other peripherals sharing PCH lanes with the Thunderbolt connection.
Finally, depending on the Thunderbolt version, PCIe bandwidth can be significantly lower than the advertised bandwidth of the Thunderbolt link. For example, while Thunderbolt 3 advertises 40 Gbps, and typically connects via x4 PCIe 3.0 (~32 Gbps), for whatever reason it imposes a 22 Gbps cap on PCIe data over the Thunderbolt link.
Even taking all this into account, you'll still see a significant performance drop on a current-gen GPU running over Thunderbolt, but I'd still expect a useful improvement over integrated graphics in most cases (though not necessarily one worth the cost of the eGPU enclosure versus just buying a cheap used minitower PC on eBay and gaming on that instead of the laptop).
Apple, just like Microsoft, has a driver signing process because drivers get essentially system-wide access to the system. There is no evidence that nvidia has tried to get eGPU drivers signed for years, but now someone did and Apple signed it. So?
And you could always, precisely as the article states in the very first paragraph, disable System Integrity Protection if you want to run drivers that aren't signed.
If Apple was in the high-end server market, I see no reason why the company I was working for would not be running macOS on Apple hardware as servers, instead of the fleet of Linux based servers they had.
Despite how easy Apple makes it, nobody is really using Macs as servers in production. Apple[0] is not using them as servers in production. They would need a radically different strategy to replace Linux, given that their macOS efforts still haven't replaced Windows.
[0] https://9to5mac.com/2026/03/02/some-apple-ai-servers-are-rep...