Hi, I work on Dart and was one of the people working on this feature. Reposting my Reddit comment to provide a little context:
I'm bummed that it's canceled because of the lost time, but also relieved that we decided to cancel it. I feel it was the right decision.
We knew the macros feature was a big risky gamble when we took a shot at it. But looking at other languages, I saw that most started out with some simple metaprogramming feature (preprocessor macros in C/C++, declarative macros in Rust, etc.) and then later outgrew them and added more complex features (C++ template metaprogramming, procedural macros in Rust). I was hoping we could leapfrog that whole process and get to One Metaprogramming Feature to Rule Them All.
Alas, it is really hard to be able to introspect on the semantics of a program while it is still being modified in a coherent way without also seriously regressing compiler performance. It's probably not impossible, but it increasingly felt like the amount of work to get there was unbounded.
I'm sad we weren't able to pull it off but I'm glad that we gave it a shot. We learned a lot about the problem space and some of the hidden sharp edges.
I'm looking forward to working on a few smaller more targeted features to deal with the pain points we hoped to address with macros (data classes, serialization, stateful widget class verbosity, code generation UX, etc.).
Sounds like performance is the biggest issue. I'm guessing the macros need to run after every keystroke. That creates a big time constraint that, while nice to hit, isn't really needed.
Due to AOT compilation, some form of (pre)compile time code generation is needed, but it doesn't need to be macros. It doesn't need to be instantaneous, but it also shouldn't take minutes.
Adding features directly into the language removes the need for some code generation.
Augmentations will already make code generation much nicer to use (a rough sketch follows below).
build_runner needs to become more integrated so that IDEs would read build_runner's config and run it automatically.
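For readers who haven't tracked the augmentations work: the idea is to let a generated file add members to a class the developer wrote, much like C#'s partial classes (a comparison that comes up again later in this thread). A rough sketch of the flavor, with the caveat that the syntax is experimental and has changed over time:

```dart
// user.dart -- the file the developer writes:
part 'user.g.dart';

class User {
  final String name;
  User(this.name);
}
```

```dart
// user.g.dart -- what a generator could emit once augmentations land.
// `augment` adds members directly to User, so no `with _$User` mixin
// boilerplate is needed to splice the generated code in:
part of 'user.dart';

augment class User {
  Map<String, Object?> toJson() => {'name': name};
}
```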
That's what you really believe, huh?

I always feel better about the stewardship of a project when you see a thoughtfully written reason for saying no to a feature, especially when there's already sunk cost. Props to the team.
This is good news. The Dart language has been getting more complicated without corresponding quality-of-life improvements. A first-class record object without messing around with macros would be a great start.

I might be missing something here, don't records already exist? https://dart.dev/language/records

They exist, but they're still missing a lot of things you'd want in a data class, like copyWith, serialization, etc. In practice in a Dart app you usually use freezed or something similar: https://pub.dev/packages/freezed
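To make the records-vs-data-class gap concrete, here is a hedged sketch (the `Person` class and its members are illustrative, not any real API):

```dart
// Dart 3 records: structural value types with == and hashCode for free.
void main() {
  final point = (x: 1, y: 2);
  print(point.x);               // 1
  print(point == (x: 1, y: 2)); // true: structural equality

  // But records have no copyWith, no toJson, and no named type to hang
  // methods on; for that you still write (or generate) a class:
  const ada = Person(name: 'Ada', age: 36);
  print(ada.copyWith(age: 37).toJson());
}

class Person {
  const Person({required this.name, required this.age});
  final String name;
  final int age;

  // Exactly the boilerplate that freezed and friends generate today:
  Person copyWith({String? name, int? age}) =>
      Person(name: name ?? this.name, age: age ?? this.age);

  Map<String, Object?> toJson() => {'name': name, 'age': age};
}
```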
Why is it that many languages, at the start, don't have support for records/plain structs?

Because implementing them is tedious, and you can always simulate them with simpler aggregation methods, or possibly lexical closures.
When the language implementors start making larger programs, it will soon become apparent how the program organization is hampered without named, defined data structures.
I didn't add structs to TXR Lisp until August 2015, a full six years from the start of the project. I don't remember it being all that much fun, except when I changed my mind about single inheritance and went multiple. The first commit for that was in December 2019.
Another fun thing was inventing a macro system for defstruct, allowing new kinds of clauses to be written that can be used inside defstruct. Then using them to write a delegation mechanism in the form of :delegate and :mass-delegate clauses, whereby you can declare individual methods, or a swath of them, to delegate through another object.
Because it's arguably syntactic sugar and, IMHO, it's worked out better for developers for Dart to model it as a 3rd party library problem. i.e. have a JSONSerializable protocol, and enable easy code generation by libraries.
i.e. I annotate my models with @freezed, which also affords you config of the N different things people differ on with JSON (are fields snake case? camel case? pascal case?), and if a new N+1 became critical, I could hack it in myself in 2-4 hours.
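For readers who haven't used it, the freezed pattern looks roughly like this (shape from memory, so treat the details as approximate and check the package docs):

```dart
import 'package:freezed_annotation/freezed_annotation.dart';

part 'user.freezed.dart'; // generated: copyWith, ==, hashCode, toString
part 'user.g.dart';       // generated: JSON code via json_serializable

@freezed
class User with _$User {
  const factory User({
    // @JsonKey is one place the snake/camel/pascal-case knobs live:
    @JsonKey(name: 'user_name') required String userName,
    required int age,
  }) = _User;

  factory User.fromJson(Map<String, Object?> json) => _$UserFromJson(json);
}
```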
I'm interested to see how this'd integrate with the language while affording the same functionality. Or maybe that's the point: it won't, but you can always continue using the 3rd party libraries. But now it's built into the language, so it is easier to get from 0 to 1.
Macros give their own kind of power, and it's a tough call to give that up for runtime hot-reloading. Languages like Haxe have macros, but also have hot reloading capabilities that typically are supported in certain game frameworks. You probably don't want to mix them together, but it's also a good development process to have simpler compilation targets that enable more rapid R&D, and then save macros for larger/more comprehensive builds.

https://haxe.org/manual/macro.html

https://github.com/RblSb/KhaHotReload
> Runtime introspection (e.g., reflection) makes it difficult to perform the tree-shaking optimizations that allow us to generate smaller binaries.
Does anyone have any more information on how Dart actually does tree shaking? And what is "tree shakeable"? This issue is still open on GitHub: https://github.com/Dart-lang/sdk/issues/33920.

I think this quote accurately sums things up:
> In fact the only references I can find anywhere to this feature is on the Dart2JS page:
> Don’t worry about the size of your app’s included libraries. The dart2js tool performs tree shaking to omit unused classes, functions, methods, and so on. Just import the libraries you need, and let dart2js get rid of what you don’t need.
> This has led customers to wild assumptions around what is and what is not tree-shakeable, and without any clear guidance to patterns that allow or disallow tree-shaking. For example internally, many large applications chose to store configurable metadata in a hash-map:
I don't have a full answer for you, but I know a little. I've hacked on the Dart compiler some, but my relationship with Dart has mostly been as a creator of Flutter and briefly Eng Dir for the Dart project.
Dart has multiple layers where it does tree shaking.
The first one is when building the "dill" (Dart intermediate language) file, which is essentially the "front-end" processing step of the compiler: it takes .dart files and does a fair amount of processing. At that step, things like entire unused libraries and classes are removed, I believe.
When compiling to an ahead-of-time compiled binary (e.g. for releasing to iOS or Android), Dart does additional steps where it collects a set of roots and walks from those roots to related objects in the graph, discarding all the rest, not unlike a garbage collection. There are several passes of this for different parts of the compile; even as Dart is writing the binary it will drop things like class names for unused classes (but keep their ids in the snapshot so as not to re-number all the other classes).

I have no experience with tree shaking in the dart2js compiler, but there are experts on Discord who might be able to answer: https://github.com/flutter/flutter/blob/master/docs/contribu...

What exactly all this means as a dev using Dart, I don't know. In general I just assume the tree shaking works and ignore it. :)

The Dart tech lead has done some writings, but none seem to cover the exact details of tree shaking: https://mrale.ph/dartvm/ https://github.com/dart-lang/sdk/blob/main/runtime/docs/READ...
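A hedged illustration of that reachability walk, and of why the metadata-in-a-hash-map pattern quoted earlier defeats it (`featureA`/`featureB` are made-up names):

```dart
// AOT tree shaking walks from roots (main) to everything reachable,
// like a garbage collector over code. A map literal that names every
// handler makes all of them reachable, used or not:
final handlers = <String, void Function()>{
  'a': featureA,
  'b': featureB, // never invoked, but referenced, so never shaken
};

void featureA() => print('A');
void featureB() => print('B');

void main() {
  // Only 'a' is ever looked up, but the compiler can't prove a dynamic
  // key lookup won't hit 'b', so featureB ships in the binary anyway.
  handlers['a']!();
}
```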
Good. Dart already has good support for code generation. It would just encourage package authors and app developers to waste their time golfing.

It is incredibly slow though. I have a project with 40k lines of code which takes a minute to generate on an M1. It's a far cry from incremental compilation. It's enough that I generally avoid adding anything new that would require generation.
I agree, Dart's public-facing codegen system (build_runner) leaves a lot to be desired. (In part the problem is that Dart uses a separate system inside Google.)
However, this is a topic of active work for the Dart team: https://github.com/dart-lang/build/issues/3800. I'm sure they would welcome your feedback, particularly if you have examples you can share.
You're also always welcome to reach out to me if you have Flutter/Dart concerns. I founded the Flutter project (and briefly led the Dart team) and care a great deal about customer success with both. eric@shorebird.dev reaches me.
It takes a minute to build from scratch or to update when running "build_runner watch"? My app is over 40k lines and watch updates almost instantaneously.
Former Eng. Dir for Dart, co-founder of Flutter, here.

I'd like to believe this is a good thing for the Dart project, but only time will tell. My hot take here: https://shorebird.dev/blog/dart-macros/

Thanks, good take. Especially if some of the pieces are still being added.
In my humble opinion, you can handle many cases like serialization better with 'compileTime' or comptime features, though I'm partial to macros. Especially with core compile-time constructs like 'fields' [1, 2]. Though those require some abilities Dart's compiler may not have or be able to do efficiently. That'd be a bummer, as even C++ is finally getting compile-time reflection.

1: https://nim-lang.org/docs/iterators.html#fieldPairs.i%2CT 2: https://www.openmymind.net/Basic-MetaProgramming-in-Zig/

Thanks!
Sounds like a good thing overall. My biggest annoyance when I was writing a Flutter app was the codegen for annotations (which, sure, is better iteratively, but the first run was taking minutes), but if you move these seconds that happen once in a while to seconds during "hot" reload, you're just losing. Honestly, I think they should try to come up with a faster codegen, maybe write it in C++ or Rust, and fix these problems, because macros aren't a silver bullet. They introduce complexity, a new "thing to learn", and sometimes lead to Turing-complete machinery.
> I think they should try to come up with a faster codegen, maybe write it in C++ or Rust, and fix these problems
Language execution speed isn't the fundamental blocker for code generation in Dart. Dart isn't quite as fast as C++ or Rust, but it's in roughly the same ballpark as other statically typed GC languages like C#, Java, and Go.
The performance challenges in code generation are more architectural: cache invalidation, modularity, and other tricky stuff like that.
I believe there is a lot of low-hanging fruit to improve the speed. In my project the code generation took more than 3 minutes, which is rather annoying when you have just renamed a field in a single freezed class. We could speed up this process by 3x by hacking together a script which greps all .dart files for the relevant annotations like @freezed and then only feeds those to build_runner via an on-the-fly generated config file.

https://github.com/dart-lang/build/issues/3800 is where the "low hanging fruit search" is happening :)
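A minimal sketch of the hack described above, assuming build_runner's `--config` flag (which reads `build.<name>.yaml`) and a `freezed` builder key; the exact build.yaml shape here is from memory, so verify it against the build_config docs:

```dart
// tool/freezed_only.dart: grep lib/ for @freezed and emit a config that
// restricts code generation to just those files.
import 'dart:io';

void main() {
  final annotated = Directory('lib')
      .listSync(recursive: true)
      .whereType<File>()
      .where((f) => f.path.endsWith('.dart'))
      .where((f) => f.readAsStringSync().contains('@freezed'))
      .map((f) => f.path.replaceAll('\\', '/'))
      .toList();

  final config = StringBuffer()
    ..writeln('targets:')
    ..writeln('  \$default:')
    ..writeln('    builders:')
    ..writeln('      freezed:')
    ..writeln('        generate_for:');
  for (final path in annotated) {
    config.writeln('          - $path');
  }

  File('build.freezed.yaml').writeAsStringSync(config.toString());
  // Then run: dart run build_runner build --config freezed
}
```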
Agreed, there is a lot of opportunity to improve build_runner. I hope now that we've freed up a lot of engineering resources from macros that we can dig into that some.
If the trade-off for macros is speed, size, and usability, I am glad they didn't merge macros just to tick a box and leave everyone saddled with a bad decision going forward.
FWIW, I enjoyed the hundreds of hours I spent with dart:mirrors to automate serialization, and the code-generation heavy approach always felt like kind of a bummer. But I feel like AI-assisted programming solves the majority of use cases this feature was meant for.
They should have just used C# for Flutter.

Without investing significant time, like they did with Dart, they would have a language with a much bigger ecosystem that is faster, already has compile-time code generation, and has better support for data than Dart. It supports ahead-of-time compilation and hot reload. The only feature missing in C# is compilation to JS, but with WASM is that really needed? The biggest downside of C# is probably that it's not invented at Google.
[Flutter founder here.]

I'm pretty sure we did look at C# (and certainly a whole bunch of other languages). I don't actually recall why we didn't use C# at the time. I remember Go binaries were waaay too big, JS (what we originally wrote Flutter in) startup time was way too slow on iOS, Swift was too deeply tied to Apple (the standard library was closed source at that time), etc. It's possible that C# was too verbose or didn't have a path to hot reload? But that's just a guess. I'm not a C# expert, and Adam Barth drove most of the language evals at the time.
That said, I'm also not sure Miguel (creator of Xamarin) would agree. He's a Flutter fan now (and backer of Shorebird, my company).

Past discussions: https://x.com/migueldeicaza/status/1778759403451081159 https://x.com/migueldeicaza/status/1559898665350832128
.NET is huge compared to Dart and every hot reload I've seen was complete garbage compared to what you get in Flutter.
Anyone who worked with mobile .NET and Flutter would see the Dart/Flutter DX as something unreachable for .NET; it's a terrible experience, like any other .NET cross-compilation I've tried (Blazor, Silverlight).

I'm not a big fan of Dart as a language, but it really was a great choice that allowed amazing DX; Flutter hot reload felt better than JS/HTML.
Assuming you were looking at the language 2016ish I could see why you'd rule out C#.
That was the same year that MS first released .NET Core. For cross-platform support, Mono was really the only way to go, and it was second class.

MS was just starting to get out of the mindset of putting the universe into the .NET Framework and instead offering first-class support for a broader 3rd-party ecosystem.

How Microsoft operates today with open source software really started roughly around 2016. I could see why you'd be hesitant to trust them then.

It was 2014, so even earlier than that.
I'm pretty sure in that era Xamarin was closed source and a commercial offering. It wasn't until 2016, when MS acquired them, that it became free and open source. So using .NET and C# wasn't really possible in 2014 without either paying a third party or buying the Xamarin company.
I think folks forget how long ago we made Flutter.
I guess that at the time you started developing Flutter, C#'s hot reload, source generators, and NativeAOT compilation didn't yet exist or were just introduced and incomplete.

We started Flutter in 2014 and made the decision to switch to Dart in ~Jan 2015 iirc.
So before .NET Core 1.0. In 2015 C# still mainly targeted only MS platforms (ignoring Mono) and didn't have the mentioned features yet.
I know that Dart started as an alternative to JS, but it seems like the JS target is now (unnecessarily) limiting the Dart language in a way. It would be nice to be able to use lists of structs or a proper uint64 when needed. As a language it needs to expand in both directions to compete: high-level productivity features and low-level performance features. It has potential, but it's not there yet.
If Dart would find more use cases besides Flutter, it would make more sense to invest in its ecosystem.
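On the uint64 point above: the JS target is why Dart's `int` semantics are lowest-common-denominator. A small illustration, with the behavior as I recall it (the web compilers may phrase the diagnostics differently):

```dart
void main() {
  // On the VM, Dart's int is a true 64-bit integer, so this prints
  // 9007199254740993. Compiled to JS, ints are doubles with 53 bits of
  // integer precision, so the same code prints 9007199254740992.
  print(9007199254740992 + 1); // 2^53 + 1

  // dart2js also rejects integer literals it can't represent exactly
  // (e.g. 0x7FFFFFFFFFFFFFFF), which is why "a proper uint64" isn't
  // something the language can simply offer while JS is a target.
}
```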
I think you're not wrong about the JS target being unnecessarily limiting. The problem is that a 250B/year business is written on top of Dart's JS transpiler (Google Ads), so it seems unlikely to be removed from the language anytime soon (maybe Dart2Wasm could allow that?).
It's pretty neat that Dart's JS support means you can take your code (e.g. a Flutter app) to the web, but I think that whole aspect of the ecosystem is underexplored/underdeveloped as of yet.
I don't think C# would've been a good fit for flutter in its early days, but definitely Flutter should no longer be tied to a single language like Dart. I think you should definitely consider stronger cross-language support.
I'm not sure what that would look like. Flutter is mostly written in Dart itself (the framework, tooling, etc). The Flutter Engine (C++) is probably less than 1/3rd of Flutter, and could certainly be made portable to other languages if that were useful, but I suspect said languages would just fork it or write their own.

I am curious as well. Despite the "not invented at Google" swipe:

* TypeScript wasn't invented at Google either, but adopted heavily.

* Angular was the first major project ever to use TS.

* And interestingly enough, Anders Hejlsberg contributed heavily to both.
Typescript is great! But it doesn't get you away from running in a JS interpreter or JIT, which at least on iOS is very slow. We wrote the first 3 versions of Flutter in JS but eventually had to move off due to 10s+ startup times (we wrote a ton of JS). Once we moved to an ahead-of-time compiled language, we could write as much code as we wanted and the user didn't have to compile it during launch on their device. Typescript would still have that problem today, sadly.

https://www.youtube.com/watch?v=xqGAC5QCYuQ is a talk where we discuss what led to modern Flutter (including 3 attempts in JS).
My understanding is that Angular was originally in Dart, but it eventually forked into two projects: Angular Dart (which is really only still used internally at Google, mostly for Google AdWords, which makes all the money) and Angular JS, which is what has seen so much popularity more generally.

These days Hermes is the answer to the JS and iOS startup issues. Of course it's a decade too late for Flutter. :P

https://github.com/facebook/hermes

https://x.com/tmikov/status/1869945330638442651
Btw that Angular history is backwards. The first versions of angular were written in JS back in ~2009. jQuery was the most popular way to build web apps, and Angular provided a (very fancy) declarative data binding framework over it. Everything ran in your browser by inspecting DOM attributes.
Later, in ~2014, they built a new framework on similar principles in TypeScript, but with an AOT compiler, and called it "Angular 2". Then they retconned the original framework "AngularJS" and made the 2+ framework just "Angular". In that era some Ads folks forked the Angular 2 framework and rewrote it in Dart, and the two frameworks have evolved separately since.
So there's really 3 separate "angular" frameworks...

Sorry, I was unclear. I didn't mean to necessarily suggest TS as a candidate for Dart's goals. (Though there is STS...)

I meant to point out that you can't just assume a priori it was NIH syndrome, as Google's heavy adoption of TS is a counterexample.

Makes sense. Google is a very large and diverse place. (And sometimes a lot of unpleasant infighting and politicking around tech choices.)
There are lots of good examples of adoption despite non invention.
Ironically, TypeScript is not the best one. I can go into great detail (I was overseeing production programming languages at Google at the time), but getting to TypeScript (which was the right choice) took a lot.

For most things, even things that seem to be contentious in the broader developer world, internal developer infrastructure teams were often relatively agnostic on choice as long as we had the resources to do it right (i.e. deal with migrations, etc). You'd have to push people to be meaningfully objective in evaluations, but once they realized you were not going to let them get away with nonsense, you got reasonable evaluations and options. Not always (can't avoid zealots at this scale), but a lot of the time.
But Typescript vs Dart vs Closure (GWT and a few other things were in there somewhere, too) was just particularly contentious for $reasons.
I'm not going to log in to Xitter to read the rest of his tweets, but it doesn't seem like Miguel regrets using C# or is particularly in love with Dart.
He's very complimentary of the goals and governance of Flutter, which is certainly more important than a language choice between two respectable languages.
I do think C# is by far the best mainstream language, but good IDE support and library ecosystem are the dealmakers/breakers for me when choosing a stack for a project.
> I do think C# is by far the best mainstream language
C# is a hugely underrated language that I feel often gets overlooked when teams look to move beyond JS/TS. The language has a pretty tight syntactic congruency to JS/TS[0], Entity Framework is pretty amazing in terms of DX/perf/maturity, and it seems like we should see more C#/.NET in the wild than we actually do.
My sense is that there are some legitimate reasons to pick something like Kotlin (JVM ecosystem), but a lot of folks that might have worked with C# in passing in the .NET Framework days simply haven't given the ecosystem another look. It's productive, stable, performant, and secure.
VS Code support is really good and Rider has a community license available.
JVM probably has a better ecosystem, so Kotlin wins in that regard. I would also say there's a huge and painful difference between C# and TypeScript, which is that C# has nominal typing (no equivalency between types unless they're literally the same) and TS has structural/duck typing (if the members of two objects are the same, they're equivalent).
That makes sense because C# can be much lower level and has its own set of priorities during compilation, so I'm not really complaining. But ergonomically, you really miss the TS type system when you don't have it.
I think C# is just too associated with .NET, like how Ruby is tied to Rails.
Plus it's a Microsoft managed language, and if you're looking to move out of JS / TS, then you probably don't trust Microsoft's management of languages...
I mean, .NET is what makes C# so good. You also get to use F#, you get to target all these platforms and a very flexible deployment/compilation model for your applications - something that e.g. neither Go nor Java offer to the same extent.
There is also a greater selection of IDEs and LS's.
>It supports ahead-of-time compilation and hot-reload.
In name only. It doesn't work really well in practice. Go and just look for "C# hot reload not working" in any search engine and look at the variety of contexts where it simply does not work, with no resolution.
C# is a fantastic language that has, in recent years, evolved very quickly for the better. Nice mix of object and functional drawing a lot of influence from F#.
It shares a lot of language constructs with TypeScript (and by extension, JS) and has been converging with each release so I'm often surprised that people hate on it or that more startups don't reach for it if they are on Node with TS.
Same syntax for key language constructs like async-await, try-catch-finally, generics, etc.
Hot reload works pretty well (at least in the contexts that I use .NET (backend APIs)); a lot of the issues were from the early days. `dotnet watch` has been very much usable for the last few years.
The improvements are great for people who have to or want to use C# for whatever reason? But how does that move the needle versus other tech stacks that are far more capable, especially in non-Windows environments (and please don't imply that C# is truly cross-platform; it's fine for web APIs, it's not fine when dealing with actual system calls).
If you're within the Windows garden, those tools certainly make sense to use. But if you're not, there just simply isn't a reason to burden your app/platform with them.
To be clear, there's nothing wrong with C#, but the advocacy for it tends to be quite loud and passionate without much technical clarity in what it brings to the table that's lacking in other ecosystems. And again, you might be in for a world of hurt depending on how complex your needs are.
I'm not advocating it for every use case, I'm advocating it for use cases where it's a good fit (e.g. web APIs, backends where there's a need for multi-threaded code). For teams that need to move up from Node/JS/TS or augment Node/JS/TS, it's likely a better choice than say Rust or Go (nothing wrong with Go, but C# is going to be an easier ramp than Go for JS/TS devs IMO)
I would not, for example, advocate it for web UIs or any UIs except for Windows desktop UIs (and even there, I might advocate for JS based options).
> and please don't imply that C# is truly cross-platform; it's fine for web APIs, it's not fine when dealing with actual system calls
What exactly do you mean by this? How are syscalls worse in C# than other languages?
My understanding is other "cross-compiled" languages have cumbersome ergonomics with syscalls. They all use System or OS libraries that hide complexity and OS differences to varying degrees of success.
They did. They called it J++ and gave it extensions, leading to a schoolyard scuffle with Sun. When the school principal said Sun were right, they went off sulking and made Dot Net.
Genuine question, is this comparison really apples to apples? Microsoft wanted to compete with sun right? Does google want to compete with programming languages like this? My gut tells me this is NIH not wanting to compete.
Microsoft didn't want to compete with Sun so much as have an application development language with a garbage collector that wasn't owned by Sun.
You don't make much money off programming languages inherently.
This also elides an obvious riposte (so you mean they should have just used Mono? how did all that work out?) and a metric ton of differences between what C# targets and what Dart targets.
MS wanted to fracture the Java ecosystem. The Microsoft Java VM was an attempt to lock developers in to MS Java and not Sun Java. They created J# and C# because of the Sun lawsuit they lost.
They still wanted a Java like ecosystem but they would be sure it only ran on Windows servers.
MS spent years being hostile to open source software. It's only in about the past decade that they've turned a corner.
Here's a famous email from Bill Gates about Java and how to stop it.
It's much, much more complicated than that. Sun refused to add many language features that Microsoft (then a cautious but also genuine user of Java) wanted, such as delegates/closures:

J++, which was Microsoft's Java implementation in the 90's, added a few language extensions that were clearly not Sun-approved, but driven by internal engineering feedback at MS. C# having struct and class keywords, allowing you to define your own value types, is clearly a result of that being missing in Java, which still has no equivalent in 2025.

Also, Java's then native code interop solution, JNI, was and still remains complete garbage, and its flaws were a huge guide for Microsoft when they developed .NET and its native interop equivalent, P/Invoke (platform invoke).
The key point is that C# was happening regardless of whatever technical upsides people wanted to see out of it. C# would still exist today and still be just as popular in the Windows ecosystem even if it made all the same exact mistakes as Java.
Microsoft had an underlying operating system that they wanted to rewrite substantially in C# on the .NET VM. They had a decent motivation for not having a core piece of Windows dependent on a product from another vendor that was competing in some of the same markets as their core product.
Google by contrast isn't nearly as invested in Dart as Microsoft was (and still is) in C#/.NET. Perhaps a better objection is that they should have just used Go — or a Go-binary-compatible language built on some of the same toolchain. (See also: Vala and Guile still don't play nice together as well as they should for two languages from the same project.)
I've been unlucky enough to have many years in on both iOS and Android, and Dart is a fantastic language, far better than both incumbents.
I worry about judging it as a whole based solely on their ability to launch pre-compile-time code generation that is faster than their current approach.
Macros seemed really cool + really difficult to improve past the current codegen.
I have a 35K LOC "main" code base that generates 670K lines of code under the current approach. It takes 52 seconds for a cold generation of all 670K. Seconds for warm. shrugs (sounds great to me)
Yeah. Dart's over-rotation on generated code is a googlism. They have a fancy build setup internally which is very good at generated code and caching it.
I know that the build_runner authors are looking into perf as we speak, and I'd be happy to put you in touch with them if you'd like to speak with them about debugging your case:
https://github.com/dart-lang/build/issues/3800
eric@shorebird.dev reaches me (for this or any other Flutter/Shorebird issue).
We considered Go! At the time it was much more designed for servers than mobile devices. If I recall correctly the minimum binary size was like 30mb or something.
That seems more like an argument against Go. Dart is the language more familiar to the average dev, and you're gonna have an easier time translating that to Java/C# than you are Go.
> Semantic introspection, unfortunately, turned out to introduce large compile-time costs which made it difficult to keep stateful hot reload hot.
They must have done something wrong. Macros are expanded when you ahead-of-time compile your code, which doesn't take place in the run-time environment where you hot load, but in the build environment. It doesn't matter whether the macros are simple, or whether they can inspect lexical environments and look up type info and whatnot.
Compile-time costs should never factor into hot reload, because the stuff being loaded should already be compiled.
Maybe they aren't explaining it well; there could be certain semantic problems preventing existing state from being re-used on what should be a hot reload.
Macros create certain issues in reloading. If you change a macro such that the expansion requires different run-time support which is incompatible with existing expansions, you have problems. One option may be to reload all the code which depends on those macros, so that everything cuts over to the new run-time support. If you need to support a mixture, hot-reloaded modules using the new versions of the macros side by side with code made using the old versions, then the old version of the run-time support has to coexist with the new.
If the run-time support for the macros is something which manages state that needs to be preserved on reloads, then that can cause difficulties. The old and new macro expansions want to appear to be sharing the same state, not different silos.
> Macros are expanded when you ahead-of-time compile your code, which doesn't take place in the run-time environment where you hot load, but in the build environment.
The user experience with hot reload is:
1. They hit "run".
2. The compiler compiles the app.
3. The app starts running on their device.
4. They change some code in their IDE.
5. They click "hot reload".
6. The compiler compiles the changed code.
7. The IDE sends the updated code to the running app.
8. The runtime loads the changed code.
9. They see the changed behavior in their running app.
Steps 6-8 determine the total time between "user requests a hot reload" and "user sees their updated app". Compilation doesn't happen on the device, but it still takes time and is in the critical path for that experience.
Making the compiler slower makes hot reload slower. We measure hot reload time in milliseconds, so it doesn't take much for us to consider it an unacceptable performance regression.
Firstly, that's a developer using the language, not the end user of the application.
Secondly, any compilation delay they experience affects all their iterative development scenarios, including a complete application restart for each run.
If they wrote the macros themselves that are slowing down compilation that much, it is their self-inflicted problem.
Even if macros are slowing down compilation noticeably, unless you change the macros such that everything that uses them has to be recompiled, you still have the benefit of incremental compilation and hot reloading: e.g. recompiling just one file-with-macros out of the hundreds that don't get recompiled.
> We measure hot reload time in milliseconds
It takes seconds to minutes to make the code change, but when you hit the hot-key to deploy it to the target, it's gotta compile and upload in milliseconds?
That's just a silly requirement that will leave your compiler development hamstrung.
I can't even type this comment without at times experiencing character delays that are certainly more than single digit milliseconds. :)
A conclusion like "our users require hot reloads to be milliseconds, end-to-end including compilation" deserves to be researched among the user base, because I don't suspect most devs need the times to be quite that low. They are building a program, not trying to avoid getting fragged in a multi-player shooter!
> Firstly, that's a developing user of the language, not the end user of the application.
Yes, hot reload is a developer feature in Dart, not an end user feature.
> If they wrote the macros themselves that are slowing down compilation that much, it is their self-inflicted problem.
The compile-time impact we saw, unfortunately, wasn't entirely linear in the number of macro applications a user had. If macro application time were entirely pay-as-you-go, then, yes, it would be feasible. But it impacted compiler performance worse than that.
> It takes seconds to minutes to make the code change, but when you hit the hot-key to deploy it to the target, it's gotta compile and upload in milliseconds?
Yup! Those seconds to minutes are meaningful time well spent by the user thinking about their program and the problem. Those milliseconds are just them sitting on their thumb getting mad at the machine.
> deserves to be researched among the user base, because I don't suspect most devs need the times to be quite that low.
I would suggest to you that after working on Flutter for nearly a decade, conducting user surveys every single quarter, gathering metrics from our tool usage (opt in) and lots of other UX research, that we do have a pretty good idea of what our user base wants in regards to performance. :)
You literally cannot get your thumb under your ass in milliseconds to sit on it, unless you're the Olympic record holder for that sporting event.
You can't change your focus from the window where you are editing the code to the window where you are interacting with the app in milliseconds. Maybe triple digit milliseconds at best, not double, let alone single. Well, double may be within reach, if it's hot-keyed.
After reading through the blog post, I think the reasons they gave make a whole lot of sense and sound like those of a mature engineering team to me.
There are a bunch of other interesting approaches here they can look at. Improving the code generation story more generally, shipping the augmentations feature (basically C#'s partial classes), and getting more serious about serialization all feel like sensible directions from here.
There is a really interesting community proposal at the moment on the serialization front that I think would solve a lot of the issues that got people so excited about macros in the first place here: https://github.com/schultek/codable/blob/main/docs/rfc.md
This sounds like they were going for a Roslyn analogue (using Dart to generate Dart the same way Roslyn uses C# to generate C#). Definitely a big time investment.
It's a big bite to chew, but I think Roslyn has paid big dividends.
Lisp is dynamically typed and macros are syntactic. The macro gets in an AST and spits out an AST with little in the way of semantic information involved beyond maybe some identifier resolution.
Dart is a statically typed language, and we wanted macros to be able to introspect over the semantics of the code the macro is applied to, not just the syntax. For example, if a macro is generating code for serialization, we wanted the macro to be able to ask "Does the type of this field implement JsonSerializable?". Answering that means being able to look up the type of the field, possibly walk its inheritance hierarchy, etc.
It's a very different problem from just "give me a way to add pretty loop syntax".
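To make the contrast concrete: today's codegen answers exactly that kind of question through the analyzer's resolved element model. A sketch using package:analyzer / package:source_gen, written from memory (the `JsonSerializable` URL is a made-up placeholder; check current API names before relying on this):

```dart
import 'package:analyzer/dart/element/element.dart';
import 'package:source_gen/source_gen.dart';

// Hypothetical marker interface the generator looks for:
const serializable =
    TypeChecker.fromUrl('package:my_models/json.dart#JsonSerializable');

String describeFields(ClassElement cls) {
  final out = StringBuffer();
  for (final field in cls.fields) {
    // The "semantic introspection" step: this resolves the field's type
    // and walks its supertype hierarchy, which requires an up-to-date
    // resolved model of the program -- cheap here, expensive when the
    // program is still being modified by other macros.
    final ok = serializable.isAssignableFromType(field.type);
    out.writeln('${field.name}: ${ok ? "serializable" : "needs handling"}');
  }
  return out.toString();
}
```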
This is just my opinion but I believe it's because the syntax tree is the syntax. In a Lisp macro you are working with lists, just like you are for any other Lisp code. Almost every other language I've used (I've been programming since the late 1980s) has, at best, a special data structure to manipulate ASTs. So it ends up being quite unnatural. Lisp macros are just Lisp.
> This is just my opinion but I believe it's because the syntax tree is the syntax.
This holds for "old" Lisps. There are other options. Racket and Scheme use "syntax objects". Syntax objects contain, besides the plain syntax tree, source location information and lexical information.
Are Lisp macros easy to use? My understanding was that Lisp code is notoriously difficult to understand if you didn't write it, largely because of the obscenely powerful macro system that makes it too easy to be too clever. Which is essentially the same complaint that everyone has about every macro system.
It's possible to do that, but in practice it's quite uncommon. Especially since Lisps offer great tools for programmers to learn what the macros they are using actually do.
In Scheme you can redefine `define` to be the number 5. Easy to implement, but a nightmare in real-world scenarios [0]. That's why languages like Go became popular: they're trashy, boring, and dumb, but that's exactly what's needed in big projects.
[0]: imagine your colleague wrote a macro that redefines for loops because at the time, it made life easier for him.
> in Scheme you can redefine `define` to be number 5.
This is like asking "what if your coworker named all errs as `ok`" so everything was `if ok { return errors.New("Not ok!!"); }`. It's possible but no one does it.
This is why `defmacro` and `gensym` in common lisp are awesome, and similarly why Go's warts don't matter. Much of programming language ugliness is an "impact x frequency" calculation, rather than one or the other.
It's also why JavaScript is so terrible: you run into its warts constantly, all day long.
"No one does it" is extremely relative. Take your closing remark about JavaScript: I don't run into JS warts very often at all, and I'm a professional web developer who works in it day in and day out. I guess my team just doesn't do dumb JS stuff?
But apparently lots of other people do run into them regularly, so I believe that such things do exist.
By the same token, I've heard countless reports of people struggling with the flexibility that Lisp offers, with co-workers who abuse it to create nightmarish situations. That you haven't experienced that doesn't mean no one does.
I don't mean "do dumb stuff", I mean I've literally never seen anyone redefine the `define` keyword in any code.
With JavaScript, I do see people use `===` frequently. It's a wart of the language that the operator even exists. It's not "dumb" to use it; it's about how frequently you are assaulted with the bugs of the language (not bugs in your code).
You might have been misled by a CS professor's enthusiasm about what they thought was neat, or was a good way to communicate something.
But I don't recall seeing someone re-define `define` in real life.
Nor do I recall seeing any problematic redefinitions in Scheme in real life.
That said, if you wanted to make a language variant in which `define` did something different than normal (say, for instrumentation, or for different semantics for variable syntax), then you'd probably use Racket, and you'd probably define your own `#lang`, so that the first line of files using your language would be something like `#lang myfunkylang`.
You can randomly sample code in <https://pkgs.racket-lang.org/>, and look for people doing anything other than `#lang racket/base` or `#lang typed/racket/base`.
Maybe it's time to just recognize that lisp-style macros-as-language-syntax features just aren't worth the struggle and grief?
The big metaprogramming feature traditionally implemented in macros, type generation, is already provided in some form by all major languages already.
And an awful lot (and I mean an awful lot) of good work can be done at the string replacement level with cpp. And generating code upstream of the compiler entirely via e.g. python scripts or templating engines is a very reasonable alternative too. And at lower levels generating code programmatically via LLVM and GPU shaders is well-trodden and mature.
Basically, do "macros" really have a home as a first class language feature anymore?
Oh how I enjoy trying to compile and use projects where they use some complex home-brew codegen system, often written in a different language entirely [1]. Luckily they often use Python as part of some core build step, which never breaks compatibility in their regex library [2]. </sarcasm>
Yes, macros can be a pain and should be limited, but in my experience a couple hundred lines of macros replaces many thousands of lines of code generators with complicated, baroque build-system integrations (ahem, ROS2). The tradeoff is even worse when the language supports templates and compile-time operations, which can usually replace macros with even less code and are easier to understand. Though at least Go supports codegen properly, with support in its official tooling.
Bad code is bad code, but FWIW every time I've seen something that actually needs a "complex home brew codegen system", it's absolutely best expressed in a dedicated piece of software engineering and not within a macro environment. Zephyr leans heavily on that kind of engineering, actually.
The point is really that "macros" is a weird sandwich between "complicated metaprogramming you need to do from first principles" and "you really didn't need metaprogramming, did you?". And that over the years that sandwich has been getting thinner.
Lisp macros in the 60's were a revelation. They don't really have a home anymore.
> FWIW every time I've seen something that actually needs a "complex home brew codegen system", it's absolutely best expressed in a dedicated piece of software engineering and not within a macro environment. Zephyr leans heavily on that kind of engineering, actually.
I'd definitely agree with you about Zephyr's "I really want C macros to be real macros" system for its devicetree support. It's a huge pain to debug because you don't find issues until the linking step. However, I'd counter with the fact that C macros _aren't_ real macros. Hence the pain.

Actually, C's "macros" are literally just a separate codegen tool (the C preprocessor, historically a standalone program), just like what you're suggesting. It's a large part of what makes C code so hard to programmatically parse or automate.
Essentially Zephyr is doing meta-programming in an external codegen tool. Totally agree that a separate codegen tool specifically for device trees or whatnot could be better than that. However if C had a proper macro system (and compile time types), you could readily express the device tree system in the language at compile time and produce helpful errors.
> The point is really that "macros" is a weird sandwich between "complicated metaprogramming you need to do from first principles" and "you really didn't need metaprogramming, did you?". And that over the years that sandwich has been getting thinner.
They aren't really though. Homebrew codegen systems are much more of the weird sandwich between "you needed metaprogramming" and "you didn't have metaprogramming". Instead you build a system which is even more complicated in total, e.g. source and generated code gets out of sync, you have name clashes, the core API logic is spread out over different languages and code bases, etc.
Though I do agree the use cases for macros are shrinking, but more due to meta-programming via templates and compile time expressions becoming more powerful and usually preferable method to doing meta-programming.
What will the Dart team focus on instead? I wish the cross-compilation issue were taken to a higher priority; I mean, Flutter already kind of solved it.
They talk about it some in the post, but my understanding is they're going to see if they can solve some of the motivating problems (e.g. JSON serialization) with simpler one-off solutions rather than a big general language feature.
Hi, I work on Dart and was one of the people working on this feature. Reposting my Reddit comment to provide a little context:
I'm bummed that it's canceled because of the lost time, but also relieved that we decided to cancel it. I feel it was the right decision.
We knew the macros feature was a big risky gamble when we took a shot at it. But looking at other languages, I saw that most started out with some simple metaprogramming feature (preprocessor macros in C/C++, declarative macros in Rust, etc.) and then later outgrew them and added more complex features (C++ template metaprogramming, procedural macros in Rust). I was hoping we could leapfrog that whole process and get to One Metaprogramming Feature to Rule Them All.
Alas, it is really hard to be able to introspect on the semantics of a program while it is still being modified in a coherent way without also seriously regressing compiler performance. It's probably not impossible, but it increasingly felt like the amount of work to get there was unbounded.
I'm sad we weren't able to pull it off but I'm glad that we gave it a shot. We learned a lot about the problem space and some of the hidden sharp edges.
I'm looking forward to working on a few smaller more targeted features to deal with the pain points we hoped to address with macros (data classes, serialization, stateful widget class verbosity, code generation UX, etc.).
Sounds like performance is the biggest issue. I'm guessing the macros need to run after every keystroke. That creates big time constraint that while nice it's not really needed.
Due to AOT compilation, some form of (pre)compile time code generation is needed, but it doesn't need to be macros. It doesn't need to be instantaneous, but it also shouldn't take minutes.
Adding features directly into the language removes the need for some code generation.
Augmentations will already make code generation much nicer to use.
build_runner needs to become more integrated so that IDEs would read build_runner's config and run it automatically.
That's what you really believe huh?
I always feel better about the stewardship of a project when you see a thoughtfully written reason for saying no to a feature, especially when there’s already sunk cost. Props to the team.
This is good news. The dart language has been getting more complicated without corresponding quality of life improvements. A first class record object without messing around with macros would be a great start.
I might be missing something here, don't records already exist? https://dart.dev/language/records
They exist but they're still missing a lot of things you'd want in a data class like copyWith, serialization etc.
In practice in a Dart app you usually use freezed or something similar: https://pub.dev/packages/freezed
Why is it that many languages, at the start, don't have support for records/plain structs?
Because implementing them is tedious, and you can always simulate them with simpler aggregation methods, or possibly lexical closures.
When the language implementors start making larger programs, it will soon become apparent how the program organization is hampered without named, defined data structures.
I didn't add structs to TXR Lisp until August 2015, a full six years from the start of the project. I don't remember it being all that much fun, except when I changed my mind about single inheritance and went multiple. The first commit for that was in December 2019.
Another fun thing was inventing a macro system for defstruct, allowing new kinds of clauses to be written that can be used inside defstruct. Then using them to write a delegation mechanism in the form of :delegate and :mass-delegate clauses, whereby you can declare individual methods, or a swath of them, to delegate through another object.
Because it's arguably syntactic sugar and, IMHO, it's worked out better for developers for Dart to model it as a 3rd party library problem. i.e. have a JSONSerializable protocol, and enable easy code generation by libraries.
i.e. I annotate my models with @freezed, which also affords you config of the N different things people differ on with json (are fields snake case? camel case? pascal case?) and if a new N+1 became critical, I could hack it in myself in 2-4 hours.
I'm interested to see how this'd integrate with the language while affording the same functionality. Or maybe that's the point: it won't, but you can always continue using the 3rd party libraries. But now it's built into the language, so it is easier to get from 0 to 1.
Macros give their own kind of power, and it's a tough call to give that up for runtime hot-reloading. Languages like Haxe have macros, but also have hot reloading capabilities that typically are supported in certain game frameworks. You probably don't want to mix them together, but it's also a good development process to have simpler compilation targets that enable more rapid R&D, and then save macros for larger/more comprehensive builds.
https://haxe.org/manual/macro.html
https://github.com/RblSb/KhaHotReload
> Runtime introspection (e.g., reflection) makes it difficult to perform the tree-shaking optimizations that allow us to generate smaller binaries.
Does anyone have any more information on How Dart actually does Tree Shaking? And what is "Tree Shakeable"? This issue is still open on Github https://github.com/Dart-lang/sdk/issues/33920.
I think this quote accurately sums things up
> In fact the only references I can find anywhere to this feature is on the Dart2JS page:
> Don’t worry about the size of your app’s included libraries. The dart2js tool performs tree shaking to omit unused classes, functions, methods, and so on. Just import the libraries you need, and let dart2js get rid of what you don’t need.
> This has led customers to wild assumptions around what is and what is not tree-shakeable, and without any clear guidance to patterns that allow or disallow tree-shaking. For example internally, many large applications chose to store configurable metadata in a hash-map:
I don't have a full answer for you, but I know a little. I've hacked on the Dart compiler some, but my relationship with Dart has mostly been as a creator of Flutter and briefly Eng Dir for the Dart project.
Dart has multiple layers where it does tree shaking.
The first one is when building the "dill" (dart intermediate language) file, which is essentially the "front-end" processing step of the compiler which takes .dart files and does amount of processing. At that step things like entire unused libraries and classes are removed I believe.
When compiling to an ahead of time compiled binary (e.g. for releasing to iOS or Android) Dart does additional steps where it collects a set of roots and walks from those roots to related objects in the graph and discards all the rest. Not unlike a garbage collection. There are several passes of this for different parts of the compile, including as Dart is even writing the binary it will drop things like class names for unused classes (but keep their id in the snapshot so as not to re-number all the other classes).
I have no experience with tree shaking in the dart2js compiler, but there are experts on Discord who might be able to answer: https://github.com/flutter/flutter/blob/master/docs/contribu...
What exactly all this means as a dev using Dart, I don't know. In general I just assume the tree shaking works and ignore it. :)
The Dart tech lead has done some writings, but none seem to cover the exact details of treeshaking: https://mrale.ph/dartvm/ https://github.com/dart-lang/sdk/blob/main/runtime/docs/READ...
Good. Dart already has good support for code-generation. It would just encourage package authors and app developers to waste their time golfing.
It is incredibly slow though. I have a project with 40k lines of code which takes a minute to generate on an m1. It's a far cry from incremental compilation. It's enough that I generally avoid adding anything new that would require generation.
I agree, Dart's public-facing codegen system (build_runner) leaves a lot to be desired. (In part the problem is that Dart uses a separate system inside Google.)
However, this is a topic of active work for the Dart team: https://github.com/dart-lang/build/issues/3800. I'm sure they would welcome your feedback, particularly if you have examples you can share.
You're also always welcome to reach out to me if you have Flutter/Dart concerns. I founded the Flutter project (and briefly led the Dart team) and care a great deal about customer success with both. eric@shorebird.dev reaches me.
It takes a minute to build from scratch or to update when running "build_runner watch"? My app is over 40k lines and watch updates almost instantaneously.
Former Eng. Dir for Dart, co-founder of Flutter, here.
I'd like to believe this is a good thing for the Dart project, but only time will tell. My hot take here: https://shorebird.dev/blog/dart-macros/
Thanks, good take. Especially if some of the pieces are still being added.
In my humble opinion, you can handle many cases like serialization better with 'compileTime' or comptime features though I'm partial to macros. Especially with core compile time constructs like 'fields' [1, 2]. Though those require some abilities dart's compiler may not have or be able to do efficiently. That'd be a bummer, as even C++ is finally getting compile time reflection.
1: https://nim-lang.org/docs/iterators.html#fieldPairs.i%2CT 2: https://www.openmymind.net/Basic-MetaProgramming-in-Zig/
Thanks!
Sounds like a good thing overall, my biggest annoyance when I was writing a flutter app was the codegen for annotations (which sure it's better iteratively, but the first one was taking minutes), but if you move these seconds that happen once in a while to seconds during "hot" reload, you're just losing. Honestly, I think they should try to come with a faster codegen, maybe write it in c++ or rust and fix these problems, because macros aren't a silver bullet. They introduce complexity, a new "thing to learn" and sometimes lead to Turing complete machines.
> I think they should try to come with a faster codegen, maybe write it in c++ or rust and fix these problems
Language execution speed isn't the fundamental blocker for code generation in Dart. Dart isn't quite as fast as C++ or Rust, but it's in roughly the same ballpark as other statically typed GC languages like C#, Java, and Go.
The performance challenges in code generation are more architectural and are around cache invalidation, modularity, and other tricky stuff like that.
I belive there are many low hanging fruits to improve the speed. In my project the code generation took more than 3 minutes. This is rather annoying if you have just renamed a field in a single “freezed“ class. We could speed up this process by 3x by hacking together a script which greps all .dart files for the relevant annotations like “@freezed“ and then only feeds those to the build_runner via a on-the-fly-generated config file.
https://github.com/dart-lang/build/issues/3800 is where the "low hanging fruit search" is happening :)
Agreed, there is a lot of opportunity to improve build_runner. I hope now that we've freed up a lot of engineering resources from macros that we can dig into that some.
If the trade off for macros is speed, size and usability. I am glad they didn't merge macros just to tick a box and leave everyone saddled with a bad decision going forward.
FWIW, I enjoyed the hundreds of hours I spent with dart:mirrors to automate serialization, and the code-generation heavy approach always felt like kind of a bummer. But I feel like AI-assisted programming solves the majority of use cases this feature was meant for.
They should have just use C# for Flutter.
Without investing significant time, like they did with Dart, they would have a language with a much bigger ecosystem that is faster, already has compile time code generation and better support for data than Dart. It supports ahead-of-time compilation and hot-reload. The only feature missing in C# is compilation to JS, but with WASM is that really needed? Biggest downside of C# is probably that it's not invented at Google.
[Flutter founder here.]
I'm pretty sure we did look at C# (and certainly a whole bunch of other languages). I don't actually recall why we didn't use C# at the time. I remember Go binaries were waaay to big, JS (what we originally wrote Flutter in) startup time was way too slow on iOS, Swift was too deeply tied to Apple (the standard library was closed source at that time), etc. It's possible that C# was too verbose or didn't have a path to hot reload? But that's just a guess. I'm not a C# expert, and Adam Barth drove most of the language evals at the time.
That said, I'm also not sure Miguel (creator of Xamarin) would agree. He's a Flutter fan now (and backer of Shorebird, my company).
Past discussions: https://x.com/migueldeicaza/status/1778759403451081159 https://x.com/migueldeicaza/status/1559898665350832128
.NET is huge compared to Dart and every hot reload I've seen was complete garbage compared to what you get in Flutter.
Anyone who worked with the mobile .NET and Flutter would see Dart/Flutter DX as something unreachable for .NET, it's a terrible experience like any other .NET cross compilation I've tried (Blazor, Silverlight).
I'm not a big fan of Dart as a language but it really was a great choice that allowed amazing DX, Flutter hot reload feelt better than JS/HTML.
Assuming you were looking at the language 2016ish I could see why you'd rule out C#.
That was the same year that MS first released C# core. For cross platform support mono was really the only way to go and it was second class.
MS was just starting to get out of the mindset of putting the universe into the .Net framework and instead offering first class support for a broader 3rd party ecosystem.
How Microsoft operates today with open source software really started roughly around 2016. I could see why you'd be hesitant to trust them then.
It was 2014, so even earlier than that.
I'm pretty sure in that era Xamarin was closed source and a commerical offering. It wasn't until 2016 when MS acquired them that it became free and open source. So using .NET and C# wasn't really possible in 2014 without either paying a third party or buying the Xamarin company.
I think folks forget how long ago we made Flutter.
I guess that at the time you started developing Flutter C#'s hot reload, source generators and NativeAOT compilation didn't yet exist or were just introduced and incomplete.
We started Flutter in 2014 and made the decision to switch to Dart in ~Jan 2015 iirc.
So before .NET Core 1.0. In 2015 C# still mainly targeted only MS platforms (ignoring Mono) and didn't have mentioned features yet.
I know that Dart started as an alternative to JS, but it seems like JS target is now (unnecessarily) limiting Dart language in a way. I would be nice to be able to use lists of structs or a proper uint64 when needed. As a language it needs to expand in both directions to compete: high level productivity features and low level performance features. It has potential, but it's not there yet.
If Dart would find more use cases besides Flutter, it would make more sense to invest in its ecosystem.
I think you're not wrong about JS target being unnecessarily limiting. The problem is that a 250B/year business is written on top of Dart's JS transpiler (Google Ads) so it seems unlikely to be removed from the language anytime soon (maybe Dart2Wasm could allow that?).
It's pretty neat that Dart's JS support means you can take your code (e.g Flutter app) to the web, but I think that whole aspect of the ecosystem is underexplored/underdeveloped as of yet.
I don't think C# would've been a good fit for Flutter in its early days, but Flutter should definitely no longer be tied to a single language like Dart. I think you should consider stronger cross-language support.
I'm not sure what that would look like. Flutter is mostly written in Dart itself (the framework, tooling, etc.). The Flutter Engine (C++) is probably less than a third of Flutter and could certainly be made portable to other languages if that were useful, but I suspect said languages would just fork it or write their own.
I am curious as well.
Despite the "not invented at Google" swipe:
* TypeScript wasn't invented at Google either, but adopted heavily.
* Angular was the first major project ever to use TS.
* And interestingly enough, Anders Hejlsberg contributed heavily to both.
TypeScript is great! But it doesn't get you away from running in a JS interpreter or JIT, which at least on iOS is very slow. We wrote the first 3 versions of Flutter in JS but eventually had to move off due to 10s+ startup times (we wrote a ton of JS). Once we moved to an ahead-of-time compiled language, we could write as much code as we wanted and the user didn't have to compile it during launch on their device. TypeScript would still have that problem today, sadly.
https://www.youtube.com/watch?v=xqGAC5QCYuQ is a talk where we discuss what led to modern Flutter (including 3 attempts in JS).
My understanding is that Angular was originally in Dart, but it eventually forked into two projects: Angular Dart (which is really only still used internally at Google, mostly for Google AdWords, which makes all the money) and Angular JS, which is what has seen so much popularity more generally.
These days Hermes is the answer to the JS and iOS startup issues. Of course it's a decade too late for flutter. :P
https://github.com/facebook/hermes
https://x.com/tmikov/status/1869945330638442651
Btw, that Angular history is backwards. The first versions of Angular were written in JS back in ~2009. jQuery was the most popular way to build web apps, and Angular provided a (very fancy) declarative data binding framework over it. Everything ran in your browser by inspecting DOM attributes.
Later, in ~2014, they built a new framework on similar principles in TypeScript, but with an AOT compiler, and called it "Angular 2". Then they retconned the original framework as "AngularJS" and made the 2+ framework just "Angular". In that era some Ads folks forked the Angular 2 framework and rewrote it in Dart, and the two frameworks have evolved separately since.
So there's really 3 separate "angular" frameworks...
Sorry, I was unclear. I didn't mean to necessarily suggest TS as a candidate for Dart's goals. (Though there is STS...)
I meant to point out that you can't just assume a priori it was NIH syndrome, as Google's heavy adoption of TS is a counterexample.
Makes sense. Google is a very large and diverse place. (And sometimes a lot of unpleasant infighting and politicking around tech choices.)
There are lots of good examples of adoption despite non invention.
Ironically, TypeScript is not the best one. I can go into great detail (I was overseeing production programming languages at Google at the time), but getting to TypeScript (which was the right choice) took a lot.
For most things, even things that seem to be contentious in the broader developer world, internal developer infrastructure teams were often relatively agnostic on choice as long as we had the resources to do it right (i.e., deal with migrations, etc.). You'd have to push people to be meaningfully objective in evaluations, but once they realized you were not going to let them get away with nonsense, you got reasonable evaluations and options. Not always (you can't avoid zealots at this scale), but a lot of the time.
But Typescript vs Dart vs Closure (GWT and a few other things were in there somewhere, too) was just particularly contentious for $reasons.
I'm not going to log in to Xitter to read the rest of his tweets, but it doesn't seem like Miguel regrets using C# or is particularly in love with Dart.
He's very complimentary of the goals and governance of Flutter, which is certainly more important than a language choice between two respectable languages.
I do think C# is by far the best mainstream language, but good IDE support and library ecosystem are the dealmakers/breakers for me when choosing a stack for a project.
My sense is that there are some legitimate reasons to pick something like Kotlin (JVM ecosystem), but a lot of folks that might have worked with C# in passing in the .NET Framework days simply haven't given the ecosystem another look. It's productive, stable, performant, and secure.
VS Code support is really good and Rider has a community license available.
[0] https://github.com/CharlieDigital/js-ts-csharp
JVM probably has a better ecosystem, so Kotlin wins in that regard. I would also say there's a huge and painful difference between C# and TypeScript, which is that C# has nominal typing (no equivalency between types unless they're literally the same) and TS has structural/duck typing (if the members of two objects are the same, they're equivalent).
That makes sense because C# can be much lower level and has its own set of priorities during compilation, so I'm not really complaining. But ergonomically, you really miss the TS type system when you don't have it.
We're not there yet, but if you squint, named tuples are kinda interesting because they're also shape-based: https://www.reddit.com/r/csharp/comments/164w8l1/how_do_yall...
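Interestingly, Dart itself now has one shape-typed corner: since Dart 3, records are structurally typed while classes remain nominal. A small sketch of the nominal/structural split, for illustration only:

    // Classes are nominal: same members, still unrelated types.
    class PointA {
      final int x, y;
      PointA(this.x, this.y);
    }

    class PointB {
      final int x, y;
      PointB(this.x, this.y);
    }

    void main() {
      // Records are structural: two records with the same shape have the
      // same type and compare equal by value.
      ({int x, int y}) a = (x: 1, y: 2);
      ({int x, int y}) b = (x: 1, y: 2);
      print(a == b); // true

      // PointA p = PointB(1, 2); // compile-time error: nominal typing
    }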
I think C# is just too associated with .NET, like how Ruby is tied to Rails. Plus it's a Microsoft-managed language, and if you're looking to move out of JS/TS, then you probably don't trust Microsoft's management of languages...
I mean, .NET is what makes C# so good. You also get to use F#, you get to target all these platforms and a very flexible deployment/compilation model for your applications - something that e.g. neither Go nor Java offer to the same extent.
There is also a greater selection of IDEs and language servers.
>They should have just use C# for Flutter.
Dear lord no. We don't need more C# in the world.
>It supports ahead-of-time compilation and hot-reload.
In name only. It doesn't work well in practice. Go and search for "C# hot reload not working" in any search engine and look at the variety of contexts in which it simply does not work, with no resolution.
C# is a fantastic language that has, in recent years, evolved very quickly for the better. Nice mix of object and functional drawing a lot of influence from F#.
It shares a lot of language constructs with TypeScript (and by extension, JS) and has been converging with each release so I'm often surprised that people hate on it or that more startups don't reach for it if they are on Node with TS.
Same syntax for key language constructs like async-await, try-catch-finally, generics, etc.
Hot reload works pretty well (at least in the contexts that I use .NET (backend APIs)); a lot of the issues were from the early days. `dotnet watch` has been very much usable for the last few years.
The improvements are great for people who have to or want to use C# for whatever reason. But how does that move the needle against other tech stacks that are far more capable, especially in non-Windows environments? (And please don't imply that C# is truly cross-platform; it's fine for web APIs, but it's not fine when dealing with actual system calls.)
If you're within the Windows garden, those tools certainly make sense to use. But if you're not, there just simply isn't a reason to burden your app/platform with them.
To be clear, there's nothing wrong with C#, but the advocacy for it tends to be quite loud and passionate without much technical clarity in what it brings to the table that's lacking in other ecosystems. And again, you might be in for a world of hurt depending on how complex your needs are.
What do you mean that it's not truly cross-platform?
https://developers.redhat.com/blog/2016/09/14/pinvoke-in-net...
https://developers.redhat.com/blog/2019/03/25/using-net-pinv...
> without much technical clarity in what it brings to the table that's lacking in other ecosystems
What other GC language offers such levels of both high level expressiveness and low level control and also has a big ecosystem?
I'm not advocating it for every use case, I'm advocating it for use cases where it's a good fit (e.g. web APIs, backends where there's a need for multi-threaded code). For teams that need to move up from Node/JS/TS or augment Node/JS/TS, it's likely a better choice than say Rust or Go (nothing wrong with Go, but C# is going to be an easier ramp than Go for JS/TS devs IMO)
I would not, for example, advocate it for web UIs or any UIs except for Windows desktop UIs (and even there, I might advocate for JS based options).
>I'm advocating it for use cases where it's a good fit
>I would not, for example, advocate it for web UIs or any UIs except for Windows desktop UIs
Well, the GP was talking about using C# for Flutter, a cross-platform product from desktop to web lol.
> and please don't imply that C# is truly cross-platform, it's fine for web API's, it's not fine when dealing with actual system calls
What exactly do you mean by this? How are syscalls worse in C# than other languages?
My understanding is other "cross-compiled" languages have cumbersome ergonomics with syscalls. They all use System or OS libraries that hide complexity and OS differences to varying degrees of success.
And Microsoft should have just used Java.
They did. They called it J++ and gave it extensions, leading to a schoolyard scuffle with Sun. When the school principal said Sun were right, they went off sulking and made .NET.
Genuine question: is this comparison really apples to apples? Microsoft wanted to compete with Sun, right? Does Google want to compete with programming languages like this? My gut tells me this is NIH, not wanting to compete.
It is apples to apples.
Microsoft didn't want to compete with Sun so much as have an application development language with a garbage collector that wasn't owned by Sun.
You don't make much money off programming languages inherently.
This also elides an obvious riposte (so you mean they should have just used Mono? how did all that work out?) and a metric ton of differences between what C# targets and what Dart targets.
MS wanted to fracture the Java ecosystem. The Microsoft Java VM was an attempt to lock developers into MS Java and not Sun Java. They created J# and C# because of the Sun lawsuit, which they lost.
They still wanted a Java-like ecosystem, but they would be sure it only ran on Windows servers.
MS spent years being hostile to open source software. It's only in about the past decade that they've turned a corner.
Here's a famous email from Bill Gates about Java and how to stop it.
https://web.archive.org/web/20220630223035/https://www.teche...
Unrelated to the discussion, but wow, the Nathan Myhrvold email seems prescient on so many levels.
C# was Microsoft's attempt to learn from Java's mistakes, which they very much succeeded at doing.
That's not even remotely historically accurate. Sun vs Microsoft drove MS into creating C#.
Please don't fan boy to the point of lying.
It's much, much more complicated than that. Sun refused to add many language features that Microsoft (then a cautious but also genuine user of Java) wanted, such as delegates/closures:
https://benhutchison.wordpress.com/2009/02/14/suns-rejection... https://stackoverflow.com/questions/1973579/why-doesnt-java-...
J++, which was Microsoft's Java implementation in the 90s, added a few language extensions that were clearly not Sun-approved but driven by internal engineering feedback at MS. C# having struct and class keywords, allowing you to define your own value types, is clearly a result of that missing in Java, which as of 2025 still has no equivalent.
Also, Java's then native code interop solution, JNI, was and still remains complete garbage, and its flaws were a huge guide for Microsoft when they developed .NET and its native interop equivalent, P/Invoke (platform invoke).
Thankfully, Java now has the FFM (foreign function and memory) APIs (https://docs.oracle.com/en/java/javase/21/core/foreign-funct...), and also the community-driven JNA, which are much better than JNI.
The key point is that C# was happening regardless of whatever technical upsides people wanted to see out of it. C# would still exist today and still be just as popular in the Windows ecosystem even if it made all the same exact mistakes as Java.
That doesn't change that C# was designed with that philosophy in mind. The two statements aren't mutually exclusive.
> Please don't fan boy to the point of lying.
You've made 5 negative replies about C# in this comment section alone.
As they say, haters are fans too :)
Where do you see 5 negative comments? Please link them?
Also, why are you talking like a cliquey high-school girl about a programming language, complete with the emoji, no less?
It's a tool, not a religion.
Microsoft had an underlying operating system that they wanted to rewrite substantially in C# on the .NET VM. They had a decent motivation for not having a core piece of Windows dependent on a product from another vendor that was competing in some of the same markets as their core product.
Google by contrast isn't nearly as invested in Dart as Microsoft was (and still is) in C#/.NET. Perhaps a better objection is that they should have just used Go — or a Go-binary-compatible language built on some of the same toolchain. (See also: Vala and Guile still don't play nice together as well as they should for two languages from the same project.)
And Java should have just been C++ with a really nice garbage collector
And C++ should have just been Objective-C with saner invocation syntax.
And Objective-C should have just been Smalltalk without the C baggage
(is this the root NIH syndrome?! I'm guessing no, I'm only 36. maybe LISP enters the picture here?)
Dart's awesome.
I'm sure C# is too.
I've been unlucky enough to have many years in on both iOS and Android, and Dart is a fantastic language, far better than both incumbents.
I worry about judging it as a whole based solely on whether the team could ship pre-compile-time code generation that is faster than their current approach.
Macros seemed really cool + really difficult to improve past the current codegen.
I have a 35K LOC "main" code base that generates 670K lines of code under the current approach. It takes 52 seconds for a cold generation of all 670K. Seconds for warm. shrugs (sounds great to me)
Yeah. Dart's over-rotation on generated code is a googlism. They have a fancy build setup internally which is very good at generated code and caching it.
I know that the build_runner authors are looking into perf as we speak, and I'd be happy to put you in touch with them if you'd like to speak with them about debugging your case: https://github.com/dart-lang/build/issues/3800
eric@shorebird.dev reaches me (for this or any other Flutter/Shorebird issue).
I think Go would have been the most logical choice, given that it's Google.
We considered Go! At the time it was much more designed for servers than mobile devices. If I recall correctly, the minimum binary size was like 30 MB or something.
I think you are missing the fact that Dart is actually an incredibly nice language to work with in a way that Go absolutely is not.
The problem is that it’s yet another language. It’s the cognitive load and the inability to easily reuse code across a project.
That seems more like an argument against Go. Dart is the language more familiar to the average dev, and you're gonna have an easier time translating that to Java/C# than you are Go.
> Semantic introspection, unfortunately, turned out to introduce large compile-time costs which made it difficult to keep stateful hot reload hot.
They must have done something wrong. Macros are expanded when you ahead-of-time compile your code, which doesn't take place in the run-time environment where you hot load, but in the build environment. It doesn't matter whether the macros are simple, or whether they can inspect lexical environments and look up type info and whatnot.
Compile-time costs should never factor into hot reload, because the stuff being loaded should already be compiled.
Maybe they aren't explaining it; there could be certain semantic problems preventing existing state from being re-used on what should be a hot reload.
Macros create certain issues in reloading. If you change a macro such that the expansion requires different run-time support which is incompatible with existing expansions, you have problems. One option may be to reload all the code which depends on those macros, so that everything cuts over to the new run-time support. If you need to support a mixture (hot-reloaded modules using the new versions of the macros, side by side with code made using the old versions), then the old version of the run-time support has to coexist with the new.
If the run-time support for the macros is something which manages state that needs to be preserved on reloads, then that can cause difficulties. The old and new macro expansions want to appear to be sharing the same state, not different silos.
> Macros are expanded when you ahead-of-time compile your code, which doesn't take place in the run-time environment where you hot load, but in the build environment.
The user experience with hot reload is:
1. They hit "run".
2. The compiler compiles the app.
3. The app starts running on their device.
4. They change some code in their IDE.
5. They click "hot reload".
6. The compiler compiles the changed code.
7. The IDE sends the updated code to the running app.
8. The runtime loads the changed code.
9. They see the changed behavior in their running app.
Steps 6-8 determine the total time between "user requests a hot reload" and "user sees their updated app". Compilation doesn't happen on the device, but it still takes time and is in the critical path for that experience.
Making the compiler slower makes hot reload slower. We measure hot reload time in milliseconds, so it doesn't take much for us to consider it an unacceptable performance regression.
Firstly, that's a developing user of the language, not the end user of the application.
Secondly, any compilation delay they experience affects all their iterative development scenarios, including a complete application restart for each run.
If the macros they wrote themselves are what's slowing down compilation that much, it is a self-inflicted problem.
Even if macros are slowing down compilation noticeably, unless you change the macros such that everything that uses them has to be recompiled, you still have the benefit of incremental compilation and hot reloading, e.g. recompiling just one file with macros out of hundreds that don't get recompiled.
> We measure hot reload time in milliseconds
It takes seconds to minutes to make the code change, but when you hit the hot-key to deploy it to the target, it's gotta compile and upload in milliseconds?
That's just a silly requirement that will leave your compiler development hamstrung.
I can't even type this comment without at times experiencing character delays that are certainly more than single digit milliseconds. :)
A conclusion like "our users require hot reloads to be milliseconds, end-to-end including compilation" deserves to be researched among the user base, because I don't suspect most devs need the times to be quite that low. They are building a program, not trying to avoid getting fragged in a multi-player shooter!
> Firstly, that's a developing user of the language, not the end user of the application.
Yes, hot reload is a developer feature in Dart, not an end user feature.
> If they wrote the macros themselves that are slowing down compilation that much, it is their self-inflicted problem.
The compile-time impact we saw, unfortunately, wasn't entirely linear in the number of macro applications a user had. If macro application time were entirely pay-as-you-go, then, yes, it would be feasible. But it impacted compiler performance worse than that.
> It takes seconds to minutes to make the code change, but when you hit the hot-key to deploy it to the target, it's gotta compile and upload in milliseconds?
Yup! Those seconds to minutes are meaningful time well spent by the user thinking about their program and the problem. Those milliseconds are just them sitting on their thumb getting mad at the machine.
> deserves to be researched among the user base, because I don't suspect most devs need the times to be quite that low.
I would suggest to you that after working on Flutter for nearly a decade, conducting user surveys every single quarter, gathering metrics from our tool usage (opt in) and lots of other UX research, that we do have a pretty good idea of what our user base wants in regards to performance. :)
You literally cannot get your thumb under your ass in milliseconds to sit on it, unless you're the Olympic record holder for that sporting event.
You can't change your focus from the window where you are editing the code to the window where you are interacting with the app in milliseconds. Maybe triple digit milliseconds at best, not double, let alone single. Well, double may be within reach, if it's hot-keyed.
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away
"You have discovered ENGINEERING"
I think, after reading through the blog post, the reasons they gave make a whole lot of sense and sound like those of a mature engineering team to me.
There are a bunch of other interesting approaches they can look at from here. Improving the code generation story more generally, shipping the augmentations feature (basically C#'s partial classes), and getting more serious about serialization all feel like sensible directions.
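To make the augmentations direction concrete, here's a rough sketch based on the in-progress proposal; the `augment` keyword and part-file mechanics are still in flux, so treat the syntax as illustrative:

    // user.dart -- the hand-written half (sketch; proposal syntax may change).
    part 'user.g.dart';

    class User {
      final String name;
      User(this.name);
    }

    // user.g.dart -- the generated half augments the same class, so the
    // generated members land on User itself rather than in helper functions.
    part of 'user.dart';

    augment class User {
      Map<String, Object?> toJson() => {'name': name};
    }

The appeal is that generated members show up on the class itself, which is a big part of what people wanted macros for in the first place.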
There is a really interesting community proposal at the moment on the serialization front that I think would solve a lot of the issues that got people so excited about macros in the first place here: https://github.com/schultek/codable/blob/main/docs/rfc.md
This sounds like they were going for a Roslyn analogue (using Dart to generate Dart the same way Roslyn uses C# to generate C#). Definitely a big time investment.
It's a big bite to chew, but I think Roslyn has paid big dividends.
Why is it that Lisp family macros are easy to implement and use, but not so in other languages?
Lisp is dynamically typed and macros are syntactic. The macro takes in an AST and spits out an AST, with little in the way of semantic information involved beyond maybe some identifier resolution.
Dart is a statically typed language, and we wanted macros to be able to introspect over the semantics of the code the macro is applied to, not just the syntax. For example, if a macro is generating code for serialization, we wanted the macro to be able to ask "Does the type of this field implement JsonSerializable?". Answering that means being able to look up the type of the field, possibly walk its inheritance hierarchy, etc.
It's a very different problem from just "give me a way to add pretty loop syntax".
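To make that concrete, here's a hand-written sketch of what such a macro would have to reason about and then generate; `JsonSerializable` is an illustrative interface here, not a real Dart API:

    // JsonSerializable is an illustrative interface, not a real Dart API.
    abstract interface class JsonSerializable {
      Map<String, Object?> toJson();
    }

    class Address implements JsonSerializable {
      final String city;
      Address(this.city);

      @override
      Map<String, Object?> toJson() => {'city': city};
    }

    class Person implements JsonSerializable {
      final String name;     // primitive: can be emitted directly
      final Address address; // macro must discover that Address implements
                             // JsonSerializable and delegate to its toJson()
      Person(this.name, this.address);

      // The code a macro would generate after that semantic lookup:
      @override
      Map<String, Object?> toJson() => {
            'name': name,
            'address': address.toJson(),
          };
    }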
This is just my opinion but I believe it's because the syntax tree is the syntax. In a Lisp macro you are working with lists, just like you are for any other Lisp code. Almost every other language I've used (I've been programming since the late 1980s) has, at best, a special data structure to manipulate ASTs. So it ends up being quite unnatural. Lisp macros are just Lisp.
> This is just my opinion but I believe it's because the syntax tree is the syntax.
This holds for "old" lisps. There are other options. Racket and Scheme use "syntax objects". Syntax objects contain, besides the plain syntax tree, source location information and lexical information.
See for example the last part of:
https://parentheticallyspeaking.org/articles/bicameral-not-h...
Are Lisp macros easy to use? My understanding was that Lisp code is notoriously difficult to understand if you didn't write it, largely because of the obscenely powerful macro system that makes it too easy to be too clever. Which is essentially the same complaint that everyone has about every macro system.
I’ve been working with Common Lisp for about ten years now and I’ve never found that this criticism matches the reality of working on lisp codebases.
It's possible to do that, but in practice it's quite uncommon, especially since Lisps offer great tools for programmers to learn what the macros they are using actually do.
Why is it that square pegs are easy to fit in a square hole, but not so in a round hole?
In Scheme you can redefine `define` to be the number 5. Easy to implement, but a nightmare in real-world scenarios [0]. That's why languages like Go became popular: they're trashy, boring, and dumb, but that's exactly what's needed in big projects.
[0]: imagine your colleague wrote a macro that redefines for loops because at the time, it made life easier for him.
> in Scheme you can redefine `define` to be number 5.
This is like asking "what if your coworker named all errs as `ok`" so everything was `if ok { return errors.New("Not ok!!"); }`. It's possible but no one does it.
This is why `defmacro` and `gensym` in Common Lisp are awesome, and similarly why Go's warts don't matter. Much of programming language ugliness is an "impact x frequency" calculation, rather than one or the other.
It's also why JavaScript is so terrible: you run into its warts constantly, all day long.
"No one does it" is extremely relative. Take your closing remark about JavaScript: I don't run into JS warts very often at all, and I'm a professional web developer who works in it day in and day out. I guess my team just doesn't do dumb JS stuff?
But apparently lots of other people do run into them regularly, so I believe that such things do exist.
By the same token, I've heard countless reports of people struggling with the flexibility that Lisp offers, with co-workers who abuse it to create nightmarish situations. That you haven't experienced that doesn't mean no one does.
ah you misunderstand me.
I don't mean "do dumb stuff", I mean I've literally never seen anyone redefine the `define` keyword in any code.
With JavaScript, I do see people use `===` frequently. It's a wart of the language that the operator even exists. It's not "dumb" to use it; it's about how frequently you are assaulted with the bugs of the language (not bugs in your code).
That’s true but I think if a major issue is having to type === instead of == then you’re probably doing okay.
I would definitely recommend mature teams for powerful languages, and the other way around.
You wouldn't let a child handle a chainsaw.
You might have been misled by a CS professor's enthusiasm about what they thought was neat, or was a good way to communicate something.
But I don't recall seeing someone re-define `define` in real life.
Nor do I recall seeing any problematic redefinitions in Scheme in real life.
That said, if you wanted to make a language variant in which `define` did something different than normal (say, for instrumentation, or for different semantics for variable syntax), then you'd probably use Racket, and you'd probably define your own `#lang`, so that the first line of files using your language would be something like `#lang myfunkylang`.
You can randomly sample code in <https://pkgs.racket-lang.org/>, and look for people doing anything other than `#lang racket/base` or `#lang typed/racket/base`.
Maybe it's time to just recognize that lisp-style macros-as-language-syntax features just aren't worth the struggle and grief?
The big metaprogramming feature traditionally implemented in macros, type generation, is already provided in some form by all major languages already.
And an awful lot (and I mean an awful lot) of good work can be done at the string replacement level with cpp. And generating code upstream of the compiler entirely via e.g. python scripts or templating engines is a very reasonable alternative too. And at lower levels generating code programmatically via LLVM and GPU shaders is well-trodden and mature.
Basically, do "macros" really have a home as a first class language feature anymore?
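For the "generate upstream of the compiler" route, the tooling really can be tiny. A minimal sketch in Dart (the file names and schema are invented for illustration):

    // tool/codegen.dart -- run before compilation: dart run tool/codegen.dart
    import 'dart:io';

    void main() {
      const fields = ['name', 'email']; // in practice, parsed from a schema
      final out = StringBuffer('// GENERATED CODE - do not modify by hand.\n')
        ..writeln('class User {')
        ..writeln(fields.map((f) => '  final String $f;').join('\n'))
        ..writeln('  User({${fields.map((f) => 'required this.$f').join(', ')}});')
        ..writeln('}');
      File('lib/user.g.dart').writeAsStringSync(out.toString());
    }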
Oh, how I enjoy trying to compile and use projects where they use some complex homebrew codegen system, often written in a different language entirely [1]. Luckily they often use Python as part of some core build step, which never breaks compatibility in its regex library [2]. </sarcasm>
Yes, macros can be a pain and should be limited, but in my experience a couple hundred lines of macros replaces many thousands of lines of code generators with complicated, baroque build-system integrations (ahem, ROS2). The tradeoff is even worse when the language supports templates and compile-time operations, which can usually replace macros with even less code and are easier to understand. Though at least Go supports codegen properly, with support in its official tooling.
1: https://github.com/google/flatbuffers/blob/master/src/idl_ge... 2: https://github.com/python/cpython/issues/94675
Bad code is bad code, but FWIW every time I've seen something that actually needs a "complex home brew codegen system", it's absolutely best expressed in a dedicated piece of software engineering and not within a macro environment. Zephyr leans heavily on that kind of engineering, actually.
The point is really that "macros" is a weird sandwich between "complicated metaprogramming you need to do from first principles" and "you really didn't need metaprogramming, did you?". And that over the years that sandwich has been getting thinner.
Lisp macros in the 60's were a revelation. They don't really have a home anymore.
> FWIW every time I've seen something that actually needs a "complex home brew codegen system", it's absolutely best expressed in a dedicated piece of software engineering and not within a macro environment. Zephyr leans heavily on that kind of engineering, actually.
I'd definitely agree with you about Zephyr's "I really want C macros to be real macros" approach to its device tree system. It's a huge pain to debug because you don't find issues until the linking step. However, I'd counter with the fact that C macros _aren't_ real macros. Hence the pain.
Actually, C's "macros" are literally just a separate textual codegen pass (the C preprocessor, a token-pasting tool in the same spirit as m4) bolted onto the front of the compiler. It's a large part of what makes C code so hard to programmatically parse or automate.
Essentially Zephyr is doing meta-programming in an external codegen tool. Totally agree that a separate codegen tool specifically for device trees or whatnot could be better than that. However if C had a proper macro system (and compile time types), you could readily express the device tree system in the language at compile time and produce helpful errors.
> The point is really that "macros" is a weird sandwich between "complicated metaprogramming you need to do from first principles" and "you really didn't need metaprogramming, did you?". And that over the years that sandwich has been getting thinner.
They aren't really though. Homebrew codegen systems are much more of the weird sandwich between "you needed metaprogramming" and "you didn't have metaprogramming". Instead you build a system which is even more complicated in total, e.g. source and generated code gets out of sync, you have name clashes, the core API logic is spread out over different languages and code bases, etc.
Though I do agree the use cases for macros are shrinking, mostly because meta-programming via templates and compile-time expressions has become more powerful and is usually the preferable approach.
They seem to work well in Rust.
In another world, macros could have filled the role that programmable yaml fills today.
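As a sketch of that idea, with all names invented: configuration as type-checked code rather than templated YAML:

    // Illustrative only: Job and pipeline are invented for this sketch.
    class Job {
      final String name;
      final List<String> steps;
      const Job(this.name, {required this.steps});
    }

    const pipeline = <Job>[
      Job('build', steps: ['dart pub get', 'dart compile exe bin/main.dart']),
      Job('test', steps: ['dart test']),
    ];

    void main() {
      // A real tool would emit whatever the CI engine consumes; the point is
      // the config itself is type-checked, loopable, and refactorable.
      for (final job in pipeline) {
        print('${job.name}: ${job.steps.join(' && ')}');
      }
    }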
What will the Dart team focus on instead? I wish the cross-compilation issue were given higher priority; I mean, Flutter already kind of solved it.
They talk about it some in the post, but my understanding is they're going to see if they can solve some of the motivating problems (e.g. JSON serialization) with simpler one-off solutions rather than a big general language feature.
Oh, okay.