Concurrency issues aside, I've been working on a greenfield iOS project recently and I've really been enjoying much of Swift's syntax.
I’ve also been experimenting with Go on a separate project and keep running into the opposite feeling — a lot of relatively common code (fetching/decoding) seems to look so visually messy.
E.g., I find this Swift example from the article to be very clean:
func fetchUser(id: Int) async throws -> User {
    let url = URL(string: "https://api.example.com/users/\(id)")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode(User.self, from: data)
}
And in Go (roughly similar semantics):
func fetchUser(ctx context.Context, client *http.Client, id int) (User, error) {
    req, err := http.NewRequestWithContext(
        ctx,
        http.MethodGet,
        fmt.Sprintf("https://api.example.com/users/%d", id),
        nil,
    )
    if err != nil {
        return User{}, err
    }
    resp, err := client.Do(req)
    if err != nil {
        return User{}, err
    }
    defer resp.Body.Close()
    var u User
    if err := json.NewDecoder(resp.Body).Decode(&u); err != nil {
        return User{}, err
    }
    return u, nil
}
I understand why it's more verbose (a lot of things are more explicit by design), but it's still hard not to prefer the cleaner Swift example. The success path is just three straightforward lines in Swift, while Go's verbosity effectively buries the key steps in the surrounding boilerplate.
This isn't to pick on Go or to say Swift is a better language in practice — and certainly not in the same domains — but I do wish there were a strongly typed, compiled language with the maturity/performance of e.g. Go/Rust and a syntax a bit closer to Swift (or at least closer to how Swift feels in simple demos, during the honeymoon phase).
I'm conflicted about the implicit named returns you could use for this pattern in Go, i.e. declaring the results in the signature as `(user User, err error)`. It's definitely tidier, but I feel like the control flow is harder to follow: "I never defined `user`, how can I return it?"
Also, with a bare return those variables are returned even though you never explicitly return them, which feels a little unintuitive.
It's a good article but I think you need to start explaining structured concurrency from the very core of it: why it exists in the first place.
The design goal of structured concurrency is to have a safe way of using all available CPU cores on the device/computer. Modern mobile phones can have 4, 6, even 8 cores. If you don't get a decent grasp of how concurrency works and how to use it properly, your app code will be limited to 1 or 1.5 cores at most, which is not a crime but a shame, really.
That's where it all starts. You want to execute things in parallel but also want to ensure data integrity. If the compiler doesn't like something, it means a design flaw and/or misconception of structured concurrency, not "oh I forgot @MainActor".
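To make the "all available cores" point concrete, here is a minimal sketch (mine, not the article's) of fanning work out with a task group; results come back through the group itself, so there is no shared mutable state for the compiler to object to:

func sumOfSquares(upTo n: Int) async -> Int {
    await withTaskGroup(of: Int.self) { group in
        for i in 1...n {
            // Each child task is free to run on any available core.
            group.addTask { i * i }
        }
        var total = 0
        for await partial in group {   // results arrive as children finish
            total += partial
        }
        return total
    }
}

Calling it is just `let total = await sumOfSquares(upTo: 1_000)` from any async context, and the child tasks cannot outlive the group, which is the "structured" part.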
Swift 6.2 is quite decent at its job already, though I should say the transition from 5 to 6 was maybe a bit rushed and wasn't very smooth. But I'm happy with where Swift is today: it's an amazing, very concise and expressive language that allows you to be as minimalist as you like, with a pretty elegant concurrency paradigm as a big bonus.
I wish it were better known outside of the Apple ecosystem, because it fully deserves to be a loved, general-purpose mainstream language alongside Python and others.
One of the things that really took me a long time to map correctly in my head is that, in theory, async/await should NOT be the same as spinning up a new thread (across most languages). It just suspends that closure on the current thread and comes back to it on a later turn of that thread's loop. That makes certain data reads and writes safe in a way that multithreading doesn't. However, as noted in the article, it is possible to eject a task onto a different thread and then deal with data access across those boundaries. But that is an enhancement to the model, not the default.
I'd argue the default is that work _does_ move across system threads, and single-threaded async/await is the uncommon case.
Whether async "tasks" move across system threads is a property of the executor - by default C#, Swift and Go (though without the explicit syntax) all have work-stealing executors that _do_ move work between threads.
In Rust, you are typically more explicit about that choice, since you construct the executor in your "own" [1] code and can make certain optimizations, such as not making futures Send if you build a single-threaded executor, again depending on the constraints of the executor.
You can see this in action in Swift with this kind of program:
import Foundation

for i in 1...100 {
    Task {
        let originalThread = Thread.current
        try? await Task.sleep(for: Duration.seconds(1))
        if Thread.current != originalThread {
            print("Task \(i) moved from \(originalThread) to \(Thread.current)")
        }
    }
}
RunLoop.main.run()
Note that to run it as-is you have to use a Swift version < 6.0, since Swift 6 prevents Thread.current from being used in an asynchronous context.
[1]: I'm counting the output of a macro here as your "own" code.
Reading https://docs.swift.org/swift-book/documentation/the-swift-pr..., their first example is:

actor TemperatureLogger {
    let label: String
    var measurements: [Int]
    private(set) var max: Int

    init(label: String, measurement: Int) {
        self.label = label
        self.measurements = [measurement]
        self.max = measurement
    }
}
Here, the ‘actor’ keyword provides a strong hint that this defines an actor. The code to call an actor in Swift is also clean, and clearly signals “this is an async call” by using await:
await logger.max
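For context, here is a minimal usage sketch (the surrounding setup is mine, not the book's):

let logger = TemperatureLogger(label: "Outdoors", measurement: 25)

Task {
    print(await logger.max)   // cross-actor reads must be awaited
    // print(logger.max)      // without 'await', this is a compile error outside the actor
}

Every hop across the actor boundary is forced through await, which is exactly the visibility the Akka example below doesn't give you.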
I know Akka is a library, and one cannot expect all library code to look as nice as code that has actual support from the language, but the simplest Akka example seems to be something like this (from https://doc.akka.io/libraries/akka-core/current/typed/actors...):
object HelloWorld {
  final case class Greet(whom: String, replyTo: ActorRef[Greeted])
  final case class Greeted(whom: String, from: ActorRef[Greet])

  def apply(): Behavior[Greet] = Behaviors.receive { (context, message) =>
    context.log.info("Hello {}!", message.whom)
    message.replyTo ! Greeted(message.whom, context.self)
    Behaviors.same
  }
}
I have no idea how naive readers of that would easily infer that’s an actor. I also would not have much idea about how to use this (and I _do_ have experience writing Scala; that is not the blocker). And that gets worse when you look at Akka HTTP (https://doc.akka.io/libraries/akka-http/current/index.html). I have debugged code using it, but still find it hard to figure out where it has suspension points.
You may claim that’s because Akka HTTP isn’t good code, but I think the point still stands that Akka allows writing code that doesn’t make it obvious what is an actor.
https://github.com/apple/swift-distributed-actors is more like Akka, but with better guarantees from the underlying platform because of the first-class nature of actors.
This is my feeling as well. Based on the current product, it feels to me that Swift had two different designers: one who felt Swift needed to be a replacement for Objective-C and therefore a spiritual successor to that language, meaning fundamentally OOP, imperative, and familiar to iOS devs; and another who wanted a modern functional language for writing dynamic user interfaces, with an advanced type checker, static analysis, and reactive updates for dynamic variables.
The end result is a language that brings the worst of both worlds while not really bringing the benefits. An example I will give is SwiftUI, which I absolutely hate. You'd think this thing would be polished, because it's built by Apple for use on Apple devices, so they've designed the full stack from editor to language to OS to hardware. Yet when writing SwiftUI code, it's very common for the compiler to keel over and complain it can't infer the types of the system, and components which are ostensibly "reactive" are plagued by stale data issues.
Seeing that Chris Lattner has moved on from Swift to work on his own language, I'm left to wonder how much of this situation will actually improve. My feeling on Swift at this point is that it's not clear what it's supposed to be. It's the language for the Apple ecosystem, but they also want it to be a general-purpose thing as well. My feeling is that it's always going to be explicitly tied to and limited by Apple, so it's never really going to take off as a general-purpose programming language even if they eventually solve the design challenges.
Because it's extremely hard to retrofit actors (or, really, any type of concurrency and/or parallelism) onto a language not explicitly designed to support it from scratch.
I really don't know why Apple decided to overload terms like "actor" and "task" with their own custom semantics. Was the goal to make it so complicated that devs would run out of spoons if they tried to learn other languages?
And after all this "fucking approachable swift concurrency", at the end of the day one still ends up with a program that can deadlock (because of resources waiting for each other) or exhaust the available threads and, again, deadlock.
Also, the overload of keywords and language syntax around this feature is mind-blowing... and keywords change meaning depending on compiler flags, so you can never know what a code snippet really does unless it's part of a project. None of the safeties promised by Swift 6 are worth the burnout that would come with trying to keep all this crap in one's mind.
Do people actually believe that there are too many keywords? I've never met a dev IRL who says this, but I see it regurgitated on every post about Swift. Most of the new keywords are for library writers, not iOS devs.
Preventing deadlock wasn't a goal of Swift concurrency. Like all options, there are trade-offs. You can still use GCD.
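For anyone who hasn't touched it in a while, GCD is still right there; a bare-bones sketch (the queue label is arbitrary, and this is Swift 5 mode, since strict concurrency checking would flag the shared var):

import Dispatch

// A serial queue gives mutual exclusion the pre-async/await way.
let queue = DispatchQueue(label: "com.example.counter")
var counter = 0

for _ in 1...100 {
    queue.async { counter += 1 }
}
queue.sync { print(counter) }   // prints 100: the sync block runs after all prior work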
Every time I think I “get” concurrency, a real bug proves otherwise.
What finally helped wasn’t more theory, but forcing myself to answer basic questions:
What can run at the same time here?
What must be ordered?
What happens if this suspends at the worst moment?
A rough framework I use now (a small sketch follows the list):
First understand the shape of execution (what overlaps)
Then define ownership (who’s allowed to touch what)
Only then worry about syntax or tools
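Here's roughly how those questions cash out for me in Swift; the Cache actor and the URLs are invented for illustration:

import Foundation

actor Cache {
    private var store: [String: Data] = [:]
    func put(_ key: String, _ value: Data) { store[key] = value }
}

func loadBoth(into cache: Cache) async throws {
    // What can overlap: the two downloads run at the same time.
    async let a = URLSession.shared.data(from: URL(string: "https://example.com/a")!)
    async let b = URLSession.shared.data(from: URL(string: "https://example.com/b")!)

    // What must be ordered: each write needs its download finished first,
    // and each 'await' is a suspension point where anything else may run.
    let (dataA, _) = try await a
    let (dataB, _) = try await b

    // Ownership: only the Cache actor ever touches its 'store'.
    await cache.put("a", dataA)
    await cache.put("b", dataB)
}

The compiler enforces the ownership step: if anything outside the actor touches store directly, it simply won't build.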
Still feels fragile.
How do you know when your mental model is actually correct? Do you rely on tests, diagrams, or just scars over time?