These days I just use a few languages:
1. Go, when I first saw that code I wrote almost a decade ago still compiles and runs, I decided to use Go for everything. There were some initial troubles when I started using it a decade ago, but now it's painless.
2. Haskell, I use it for DSL and state machines.
3. Bash for all deployment scripts and everything.
4. TypeScript, well for the frontend.
Lately, I’ve been using Go and SQLite for nearly everything.
I don't think I’ve any motivation to look at any other language.
I gave up on Java, Python, Ruby, Rust, C++, and C# long ago.
Fun fact:
Same thing for cloud: I just don't use managed cloud services anymore, only VMs or dedicated servers. I've found that when you want to run a service for a decade or more, you've got to run it yourself if you don't want it to cost a lot in the long run.
I manage a few MongoDB and PostgreSQL clusters. Most of the apps, like the email list marketer (which sends thousands of emails each day), are simple Go apps + SQLite using less than 512MB RAM.
Same for SaaS billing, the solution is entirely written in Go and uses Postgres. (I didn’t feel safe here using SQLite for this for a multi-tenant setup.)
Our chat/ticketing system is SQLite + Go. Deployment is easy: just upload the cross-compiled Go binary + a systemd service file; Alloy picks up the logs and ships them to Grafana, which has all the alerts.
I don't need to worry about "speed" for anything I do in Go, unlike Ruby/Python.
When something has to be correct, I model it in Haskell, as its rich type system helps you write correct code. The setup is not as painless as Go's, but performance is decent.
I write good documentation and deployment instructions right into the monorepo. For a small team this is more than enough imho.
No Docker, no Kubernetes; just simple scripts + Grafana + Prometheus + Loki + Alloy/node_exporter. Life couldn't be any simpler than this.
I am in a similar place, especially regarding Bash. I used to be at a few companies where most developers just couldn't/wouldn't write in more than one language, and it was always a pain to maintain the different runtimes, languages, packages, and internal dependencies of things that could have been a 20-line Bash script, and that had to be maintained and updated from time to time.
I understand people have their own limitations and reasons, but having to constantly deal with “wrong tool for the job” for the thousandth time gets frustrating.
Especially in cases where four different languages were used across the company because different people had different preferences. Worst case was Python/Ruby/C#/Javascript.
I get that Bash is not perfect, but I enjoy the simplicity and directness, and the multitude of problems caused by not using it has shown me it's the better tradeoff.
Funny, I have also converged on shell scripts for simple scripting or configuration, but I use /bin/sh for portability. Many of the machines I use do not even have bash installed.
> 1. Go, when I first saw code I wrote almost a decade ago still compiles and runs in Go, I decided to use Go for everything. There were some initial troubles when I started using it a decade ago, but now it's painless.
And fewer dependencies, and fewer vulnerabilities (if any at all, depending on your few dependencies).
Go is "only" a pain when you want to use your own copy of packages (because `replace` directives are always ignored everywhere except on the "root" package), and whenever you want to work with private Git repositories outside of the forges that have hardcoded config in the Go code (like GitHub) (because Go assumes there's an HTTPS server, and the only way to force it to use only SSH is with ugly workarounds AFAIK).
But despite this I still prefer it for personal projects because I can come back after not touching it for years, and the most I need to do is maybe update `golang.org/x/net` or something like that.
Note that Java makes breaking changes all the time, which is why it publishes a compatibility guide with each major release. These are usually judged to be minor breakages, but if you have a codebase on the order of millions of lines, there's a very good chance that at least one thing will break and require a little bit of work to upgrade. And Java's not unique here, every stable language makes changes all the time that have the potential to break some user in some edge case.
I'm in the same boat. I started using Go only a year ago, but now I don't really want to use anything else for apps or data processing. I wrote an app that loaded a lot of data for reporting into DuckDB. I've been doing so much Java and JavaScript that I feel like Go was just much simpler to deal with overall.
Shell for the scripts. I haven't tried to work through much DSL stuff, as I'm really not a fan of DSLs. Maybe I'll give Haskell a shot again to see if it sticks.
The funny thing is how ubiquitous TypeScript/JavaScript is. There is no escape. I also only use four languages: C#, F# (for DSL), Powershell (for deployment) and... TypeScript.
Despite our different tastes in languages and completely different ecosystems, TypeScript is still the lingua franca lol.
Whether there is any escape from JS/TS is a matter of what you are building and who is around you. If you are building SPAs all day, then sure, you will probably have to deal with the JS/TS ecosystem. If you are just building websites, then basically any traditional web framework would do. It then depends on whether the majority of the people you work with don't know web basics, or want to use JS web frameworks even when there is no need; in that case you get no choice but to go along as a team.
In theory most websites could be done statically with rendered HTML and CSS, maybe a little bit of JS (but not mandatory), and noscript fallback flows. MPAs are fine for most things, and noscript fallback flows can be done fairly systematically; in many cases it isn't that difficult. It's just that these days not many people bother or care.
IME Ruby is really good for working alone on tiny projects without an IDE (trying to get more than syntax highlighting causes problems). Sometimes I write single-file scripts or even just use interactive Ruby.
Good for you? I’m glad you have languages that fit your needs.
In the realtime/high-assurance systems world, where garbage collection can be a huge source of non-determinism and overhead, we don't have great options.
Zig is really the only language (idk about Odin?) trying to take the same approach C did in giving you absolute control over a minimally abstracted CPU model. Those of us who need/want maximum control/performance should be allowed to have nice things too.
I'm with you on Go and SQLite; I dropped Postgres for many of my projects. I might add: HTMX instead of a TS frontend. Very few apps need a TS/React/... frontend; it doubles development effort with minimal gain (except for games etc.).
I dabbled with Rust some years ago. I think it is an excellent choice for sudo-rs and such, but for GUI and web apps I (perhaps I'm too stupid) end up with Arc<Mutex<...>> soup.
Yeah, after writing some semi-production Haskell apps (I ported an old service at a previous company to Haskell and tried to productionize it enough for our staging environments), that's the conclusion I came to about using Haskell.
Curious if you've tried to use agents to read / write Haskell and how the experience has been?
Would love to use Go for SaaS, but things like OmniAuth (RoR) make me stay with Ruby. I had actually never used Ruby before, but I think it's a swell language to do SaaS in.
I went the same way, but only using Lisp dialects like Elisp and Clojure, plus Nix. Although I would ditch Nix too if another Lisp could supplant it.
I'm curious about your Clojure setup. Like Go, I think Clojure has very strong backwards compatibility.
If trying to avoid the cloud, like OP, which hosting option is suitable for Clojure? What do you use? I believe Clojure (on the JVM) has higher RAM requirements?
And Go has pocketbase.io, which looks quite interesting. Do you know whether something similar exists for Clojure, or maybe it's straightforward enough to compose your own using various Clojure libs?
I also LOVE Go, but recently rewrote a small tool in Lisette [1]. It was the most fun I've had in a long time while programming.
I can highly recommend it, especially since you have Haskell experience: in Lisette you get all the usual suspects, like ADTs, exhaustive pattern matching, etc. It has a fast compiler too, and produces human-readable Go code. It also comes with great tooling out of the box (formatter/LSP etc.).
Java is a resource hog when you use patterns and libraries popular in Java land.
When you are working in the Java ecosystem, you just assume the app needs that many resources! But when you code the same thing in Go using the same methods, you'll find resource usage is really very low.
We have a 1:1 copy of the app: on the JVM with Spring Boot it uses 2GB RAM, while the Go version runs in 512MB RAM and is blazingly fast.
Of course it's possible to tune a Java app, but why bother, when we get the same low resource usage and better performance in Go from the get-go while still writing naive, dumb code?
Deployment is super simple in Go: upload a single cross-compiled binary and it's done. Very simple and easy.
Rust needs a lot more effort to write correct code than Go in my experience. We get the same performance out of Go, with much less effort. At some point, it's just cheaper to start one extra instance than perform some low-level optimisation; modern hardware is fast enough that Rust-level optimisation is rarely needed for what we do.
I can't really agree on Rust. It does take a bit more time to write the same code in Rust vs Go. But in my experience the code is much more likely to be incorrect in Go than it is in Rust, which over longer periods means Rust is easier to maintain.
If you have unmotivated employees, then using Go will only exacerbate the shortcomings it has. Cutting corners is much easier in Go than it is in Rust. But in general it's true: if you want a piece of code released a bit faster but will spend more developer hours maintaining it later, then Go is the better fit. And there are definitely use cases for that.
You can write exploratory code in Rust fairly quickly, it's just obvious when you've done so due to the heavy boilerplate involved. Keep in mind that the earliest versions of Rust were actually very Golang-like, the language iteratively evolved towards what it is today.
I'm not sure the effort part makes sense now that we have LLMs? LLMs basically liberate language choice, which has made Rust incredibly attractive to me since I basically get good performance out of the box, while any possibly annoying pedantic obsession with correctness can be easily handed over to the LLM.
If I use a JVM language, running my test suite takes 10 to 30 seconds. With Rust it spends 3 seconds compiling and half a second to run 250 tests.
The irritating parts of Rust are more related to bloated libraries like serde that insist on generating code, which massively slows down compilation for not much benefit.
I don't understand why Zig's `Io` is a "monad". In fact I discussed that with the author of this article and the author of Zig here, but no conclusion was reached (https://news.ycombinator.com/item?id=46129568).
But, flipping the script, if you want to see something like Zig's `Io` interface in Haskell then have a look at my capability system Bluefin, particularly Bluefin.IO. The equivalent of Zig's `Io` is called `IOE` and you can't do IO without it!
Regarding custom allocators and such, well, that could fit into the same pattern, in principle, since capabilities/regions/lifetimes are pretty much the same pattern. I don't know how one would plug that into Haskell's RTS.
Agreed, Zig's IO is closer to the effect handler / capability passing model. And by closer, I mean exactly the same [1]. However, it's related to monads by duality. A comonadic program is a program that depends on context, which captures the notion of passing capabilities around.
[1] Languages designed around capability passing often have other features, like capture checking to ensure capabilities aren't used outside the scope where they are active. There are only two such languages I know of. Effekt (see https://effekt-lang.org/tour/captures) and Scala 3 (see https://docs.scala-lang.org/scala3/reference/experimental/cc...) However, this is not core to the idea of capability passing.
I am going to look at Zig after 1.0 is released. In the current state you are playing catch-up with the language if you have any reasonably sized project in Zig; a new release might mean that you need to rewrite a significant portion of your code.
I don't think it's generally a good idea to make complex type generators like this in Zig. Just write the type out.
The annoyingness of the thing you tried to do in Zig is a feature: it's a "don't do this, you will confuse the reader" signal. As for optional, it's a pattern so common that it's worth having built-in optimizations, for example @sizeOf(*T) == @sizeOf(usize) but @sizeOf(?*T) != @sizeOf(?usize). If optional were a general sum type you wouldn't be able to make these optimizations easily without extra information.
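The same size behavior exists in Rust's Option, so it can be checked directly. A quick sketch (Rust guarantees the null-pointer niche for Option<&T>; the Option<usize> size is what current rustc produces in practice):

```rust
use std::mem::size_of;

fn main() {
    // A plain reference is pointer-sized, like Zig's *T.
    assert_eq!(size_of::<&u32>(), size_of::<usize>());

    // Option<&u32> reuses the forbidden null pattern as the None case,
    // so the optional costs no extra space (cf. Zig's ?*T).
    assert_eq!(size_of::<Option<&u32>>(), size_of::<usize>());

    // usize has no spare bit pattern, so Option<usize> needs a
    // separate discriminant (cf. Zig's ?usize).
    assert!(size_of::<Option<usize>>() > size_of::<usize>());

    println!("ok"); // prints "ok"
}
```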
The point is that algebraic data types are common in functional languages. "Maybe" is just an example of an algebraic data type, there's tons more.
If the article says "functional programmers should take a look at Zig", and Zig makes algebraic data types hard, then maybe they shouldn't use it.
If you even say "the annoyingness is a feature, use zig the way it is intended to be used" then that's another signal for functional programmers that they won't be able to use zig the same way they use functional languages.
> if optional were a general sum type you wouldn't be able to make these optimizations easily without extra information
Rust has these optimizations (called "niche optimizations") for all sum types. If a type has any unused or invalid bit patterns, then those can be used for enum discriminants, e.g.:
- References cannot be null, so the zero value is a niche
- References must be aligned properly for the target type, so a reference to a type with alignment 4 has a niche in the bottom 2 bits
- bool only uses two values of the 256 in a byte, so the other 254 form a niche
There are limitations though, in that you must still be able to create and pass around pointers to values contained within the enum, and so the representation of a type cannot change just because it's placed within an enum. So, for example, the following enum is one byte in size:
enum Foo {
A(bool),
B
}
Variant A uses the valid bool values 0 and 1, whereas variant B uses some other bit pattern (maybe 2).
But this enum must be two bytes in size:
enum Foo {
A(bool),
B(bool)
}
...because bool always has bit patterns 0 and 1, so it's not possible for an invalid value for A's fields to hold a valid value for B's fields.
You also can't stuff niches in padding bytes between struct fields, because code that operates on the struct is allowed to clobber the padding.
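The two layouts above can be checked with `size_of`. The exact sizes are what current rustc produces (niche layout is an optimization, not a language-spec guarantee), so treat this as a sketch:

```rust
use std::mem::size_of;

// Variant B can live in one of bool's 254 invalid bit patterns,
// so no separate tag is needed.
#[allow(dead_code)]
enum WithNiche {
    A(bool),
    B,
}

// Two bool payloads exhaust the niche: every valid value for A's field
// is also a valid value for B's field, so a separate tag byte is added.
#[allow(dead_code)]
enum NoNiche {
    A(bool),
    B(bool),
}

fn main() {
    assert_eq!(size_of::<WithNiche>(), 1); // one byte, as claimed above
    assert_eq!(size_of::<NoNiche>(), 2);   // tag byte + payload byte
    println!("ok"); // prints "ok"
}
```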
Yes, the care that Rust goes through to ensure that niches work properly, especially when composing arbitrary types from arbitrary sources, shows why you absolutely don't want to be implementing these optimizations by hand.
Came to say this. Early in my career I really thought implementing Maybe in any language was necessary, but now I know better. Use the idioms and don't try to make every language something it's not.
This looks like an example of a low level language vs a high level language (relatively speaking). The low level language makes a lot more of what is going on underneath explicit compared to the higher level language which abstracts that away for a common pattern. Presumably that explicitness allows for more control and/or flexibility. So apples to oranges?
Low-level doesn’t mean more information, it means more explicit.
In Zig, that means being able to use the language itself to express type-level computations, instead of Rust's angle brackets, trait constraints, and derive syntax, or C++ templates.
Sure, it won’t beat a language with sugar for the exact thing you’re doing, but the whole point is that you’re a layer below the sugar and can do more.
Option<T> is trivial. But Tuple<N>? Parameterizing a struct by layout, AoS vs SoA? Compile time state machines? Parser generators? Serialization? These are likely where Zig would shine compared to the others.
I don't think there is a standardized meaning of 'low-level'. I think a useful definition is that a low-level language controls more/is explicit about more properties of execution.
So zig/c/c++/rust all have ways to specify when and where should allocations happen, as well as memory layout of objects.
Expressivity is a completely different axis on which these low-level languages separate. C has ultra-low expressivity; you can barely create any meaningful abstraction there. Zig is much better, at the price of a remarkably small amount of extra language complexity. C++ and Rust have a huge amount of extra language complexity for the high expressivity they provide (having to be expressive even about low-level details makes e.g. Rust more complex as a language than a similar GC'd language would be, but this is a necessity).
As for this particular case, I don't really see a level difference here, both languages can express the same memory layout here.
> Option<T> is trivial. But Tuple<N>? Parameterizing a struct by layout, AoS vs SoA? Compile time state machines? Parser generators? Serialization? These are likely where Zig would shine compared to the others.
I don't see how any of that becomes easier in the Zig case. It's just extra syntactic ceremony. The Rust version conveys the exact same information.
I found this funny. I am not sure if it was intended that way!
> Monads are not some kind of obscure math-y thing that only the big brains think are necessary. No, instead monads are a fundamental abstract algebraic description of imperative programming as a computational context.
Yep, as a non-big-brainer, I definitely get it now. :)
Io is not a monad. There's nothing stopping you from stashing a global Io "object" and just passing the global wherever you interface with the stdlib.
It's dependency injection. And yes, you can model dependencies like a monad, but most people, even in less pure FP langs, don't.
I don't really say this just to be a pedant, but if you're an FP enjoyer, you will be disappointed if you get the impression that Zig is FP-like, outside of a few squint-and-it-looks-like things.
My reading of the article was that the author seems to be in search of a new paradigm that moves beyond what he sees as the limitations of "fp-like" languages as they exist today. His point appears to be that Zig provides the benefits of today's "fp-like" languages while avoiding at least some of the downsides.
And he does admit you may have to squint to appreciate the FP capabilities provided by Zig.
It is worth noting that some rather "enlightened" type system features are common in other imperative languages, and are not particularly novel ideas in Zig.
For example Swift enums, while in some ways clunky, can do a decent job both as newtypes and as sum types (unlike Java enums, which are a fixed collection of instances of the same class).
Sigh. I meant that the Zig authors did not make it a general pattern and just slapped the DI pattern on specifically for Io, instead of generalising the abstraction so people can DI stuff.
While using "monads" in functional languages is a neat trick, I do not like them.
In my opinion, the concept of automaton is fundamental and it deserves equal standing with the concept of function (even if it is a higher level concept that is built upon that of function).
I believe that functional programming is preferable wherever it is naturally applicable, and most programs have components of this kind, but most complete application programs, i.e. those which do input and output actions, are automata, not functions, and it is better not to disguise this with tricks that provide no benefit.
Therefore, I prefer a programming language that has a pure functional subset, allowing the use of that subset where desirable, but which also has standard imperative features (e.g. assignment), to be used where appropriate.
For me, monads are similar to inheritance. There are areas where one piece of functionality is dominant, and there it can really help to define a base class in a library, or to define a monad, like for async. The moment you start to mix/compose things, things get ugly pretty fast.
You can't just put assignment in a functional language though: you lose the ability to fearlessly refactor, which is the whole point. You either need something like a stratified language (which I've never seen actually implemented, much less production-ready, as much as I like the design of Noether), or you use, well, monads.
> look at the era of software that garbage collectors have ushered in. Programs are bloated, slow, and wasteful compared to the literal super-computers that are running them.
Of course you can: you just have to define it in your type.
The output set becomes a union type of the normal output and whatever you want as an exception.
If you write this as a monad, you get very similar syntax to procedural code.
An exception is different to an Either result type. Exceptions short circuit execution and walk up the call tree to the nearest handler. They also have very different optimization in practice (eg in C++)
In what way is that different?
You return early and the call cascades up the call chain until you handle it (otherwise it's always an "either" result).
In practice you use something like an exception monad, which makes this a lot more ergonomic since you don't need to carry a case distinction around for every unwrap: an exception monad essentially has an implicit passthrough that says "if it's a value, apply the function, if it's an exception just keep that".
You only need to "catch" the exception if you actually need the value.
In this case the exception monad is not that different from annotating a function with "throws": your calling function either needs its own throws (= error monad wrapper), in which case exceptions just roll through, or you remove the throws but now need to handle the exception explicitly (= unwrap the monad).
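A small Rust sketch of that passthrough (the `parse_positive`/`double_positive` helpers are hypothetical, my own example): `Result` plus the `?` operator gives exactly this behavior, where intermediate callers never write the case distinction themselves.

```rust
// `?` unwraps an Ok or returns the Err early, so errors "roll through"
// any caller whose signature carries the error type, like "throws".

fn parse_positive(s: &str) -> Result<i64, String> {
    let n: i64 = s.parse().map_err(|e| format!("not a number: {e}"))?;
    if n > 0 { Ok(n) } else { Err(format!("{n} is not positive")) }
}

// This caller is "annotated with throws": it never inspects the error,
// the `?` passthrough forwards it unchanged.
fn double_positive(s: &str) -> Result<i64, String> {
    Ok(parse_positive(s)? * 2)
}

fn main() {
    // Only here do we "catch", because we finally need the value.
    assert_eq!(double_positive("21"), Ok(42));
    assert!(double_positive("-3").is_err());
    assert!(double_positive("abc").is_err());
    println!("ok"); // prints "ok"
}
```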
Then allow partial functions too. Maybe even require them to be tagged as such. (Is that within the capabilities of Zig's programmable type system?)
I don't mind escape hatches - as long as they're visible/greppable in the source code. You can always write undefined/error/panic/trace directives while you're coding, then come back and remove them later.
I would love a language that distinguishes functions (pure mathematical constructs) from procedures (imperative constructs that map in a predictable way to the instruction set).
This feels like the direction Algebraic Effects might take us.
> Well, I’ve been radicalized. I’ve learned enough performance-oriented programming to be dissatisfied with the common functional languages (Haskell, OCaml, Common Lisp/Clojure, Scheme) because each of these languages are predicated on the existence of garbage collection and heaps.
I would take another look at Common Lisp if I were the author. Manual memory management is very much an option where you need it.
No? I don't agree. The domain can be strongly modelled in the types; for instance, declaring kilometers, seconds, etc. instead of using primitive floats/reals everywhere, to statically prevent dimensional analysis issues.
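A minimal sketch of that idea in Rust (the `Kilometers`/`Hours` newtypes are hypothetical, my own example): swapping the arguments becomes a type error rather than a silent dimensional-analysis bug.

```rust
// Newtypes over f64: each unit gets its own distinct type.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Kilometers(f64);
#[derive(Debug, Clone, Copy, PartialEq)]
struct Hours(f64);
#[derive(Debug, Clone, Copy, PartialEq)]
struct KmPerHour(f64);

// The signature documents and enforces the dimensions.
fn speed(d: Kilometers, t: Hours) -> KmPerHour {
    KmPerHour(d.0 / t.0)
}

fn main() {
    let v = speed(Kilometers(120.0), Hours(2.0));
    assert_eq!(v, KmPerHour(60.0));
    // speed(Hours(2.0), Kilometers(120.0)); // would not compile: units swapped
    println!("ok"); // prints "ok"
}
```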
I keep hearing about monads, but isn't it the case that they have well-known flaws? And isn't that the reason algebraic effects are interesting, because they don't have those flaws?
You can do functional programming without strict typing. It's not common, because strict types work just so well with the FP paradigm, but it's definitely possible; it's not in itself a contradiction.
I think the Lisp situation is peculiar, for three main reasons:
- most Lisps are dynamically typed (and thus don't need sum types, as there are no static types). The ones that do have gradual type systems likely implement some form of them (off the top of my head I can only remember Typed Racket, which I think implements them through union types)
- not all Lisps lean functional: I believe that's mostly a prerogative of Scheme and Clojure (and their descendants); something like CL is a lot more procedural, IIRC
- in most Lisps, thanks to macros, you probably don't need the language to support some sort of match construct out of the box: just implement it as a macro [1]
In general, the "proper sum types" side of functional programming is just the statically typed one, but even dynamically typed FP languages end up adopting sum-type-esque patterns, like Elixir's error handling (which closely resembles the usual Either/Result type, just built out of tuples and atoms rather than a predefined type), and I assume many Lisps adopt similar patterns as well.
It’s possible (even true in my opinion) that garbage collected functional languages and low level languages like Zig are both great, and serve different purposes.
I actually ship stuff in Haskell, believe it or not. I also think Zig is very cool and have played around with it quite a bit. Yes, garbage collection hurts performance, but the reality is that the overwhelming majority of software does not suffer from the performance difference between well-written code in a reasonably performant functional GC language and a highly performant language with manual memory management. It's just not important. But not having to deal with the cognitive overhead of managing memory, and being able to deal only in domain-specific abstractions, is a massive win for developer productivity and codebase simplicity and correctness.
I think OxCaml's approach of opting in to more direct control of performance is interesting. I also think it's great that many functional patterns are making their way into imperative-first languages. Language selection is always about trade-offs for your specific use case. My team writes Haskell instead of Rust because Haskell is plenty fast for our use case and we don't have to write lifetime annotations everywhere and think about borrowing. If we needed more performance we would have no choice but to explore other languages and sacrifice some developer experience and productivity; that's very reasonable. I'm also not saying performance doesn't matter (if you're writing for loops in Python, stop). But this read to me like "because better performance exists with manual memory management, all garbage collectors are bad, so I'll force Zig to be something it's not in order to gain performance I probably don't need". Which to me is an odd take. A more measured way of thinking about this might be: it can be useful to leverage functional patterns where appropriate in low-level languages, if you find yourself needing to write code in one.
Anyone preferring functional programming will be extremely disappointed with Zig. And I'm saying this as a big user of Zig. It's a language for imperative code. And Io is not a monad, just a bunch of virtual methods doing the actual I/O.
From article " Where the next Programming Language will come from? that beautifully described the sad state of things. His main point is that the incentives for programming language innovation are at best misaligned and at worst non-existent"
OK, Zig is great. But won't it still suffer from the same headwinds as every other 'better' language, namely that industry won't adopt it? They have too much installed base and just want to hire Java/C#/etc.
You can think of comptime (as of zig 0.16) as an interpreter that evaluates code with very limited optimization. So yes, naive use of comptime can definitely grind compilation to a halt.
Zig tackles the halting problem a bit differently by putting the evaluation cutoff in userspace through the compiler builtin function `@setEvalBranchQuota`. You bump up the quota as you see fit.
I've recently been trying to port my simple program to Mojo to find out how the language looks and feels. The comptime feature (which I think was inspired by Zig) is an absolute joy to use. It helps a lot that the syntax looks like Python. Excited to see how the language develops, particularly its memory safety paradigm.
I would encourage everyone remotely interested in Zig to have a look at Odin [1]. If, like me, you read that article and found yourself muttering "what the hell," then you might appreciate Odin's simplicity and design consistency.
I am definitely in the minority here, but I am not a fan of the kind of meta-programming that Zig and Rust offer, with Rust being especially atrocious. In the two decades I've been programming I can count on one hand the number of times meta-programming was an appropriate solution to a problem I had. Every time I reached for it, I got bit. There's a reason "when in doubt, use brute force" is sage advice, it may not be fast and glamorous, but it'll be a hell of a lot less opaque.
> I see http_client as existing in a Reader monad that contains an allocator and an IO interface. This is exactly how the IO monad (and for that matter IO#) works in Haskell. The fact that the Zig people came up with this independantly speaks not just to the universal nature of monads (and the algebraic structures of programming languages)
Honestly this sounds like monad bullshit. That's a struct/class/ADT/whatever you want to call it, they existed since forever. The only idea Zig had was that maybe we shouldn't make them global instances.
Isn't the whole point of abstraction to not care about what's underneath unless you really have to? And ideally you don't, because the abstraction is "good enough"?
I haven't heard anyone writing code in Elixir complain about performance issues.
I believe EqPoint allows you to pass around a bag of functions (aka an interface, which Zig does not have as a concept) to functions which can be written in terms of "I need these functions" rather than in terms of a concrete type.
116 comments:
These days I just use a few languages:
1. Go, when I first saw code I wrote almost a decade ago still compiles and runs in Go, I decided to use Go for everything. There were some initial troubles when I started using it a decade ago, but now it's painless.
2. Haskell, I use it for DSL and state machines.
3. Bash for all deployment scripts and everything.
4. TypeScript, well for the frontend.
Lately, I’ve been using Go and SQLite for nearly everything.
I don't think I’ve any motivation to look at any other language.
I gave up on Java, Python, Ruby, Rust, C++, and C# long ago.
Fun fact:
Same thing for cloud, I just don't use managed cloud services anymore. I only use VMs or dedicated servers. I've found when you want to run a service for decades+, you’ve got to run your own service if you want it not to cost a lot in the long run.
I manage a few MongoDB, PostgreSQL clusters. Most of the apps like email lists marketer (for marketing, sending thousands of email each day) are simple Go app + SQLite using less than 512MB RAM.
Same for SaaS billing, the solution is entirely written in Go and uses Postgres. (I didn’t feel safe here using SQLite for this for a multi-tenant setup.)
Our chat/ticketing system is SQLite + Go. Deployment is easy, just upload Go cross-compiled binary + systemd service file, alloy picks up log and drops it graphana which has all alerts there.
I don't need to worry about "speed" for anything I do in Go, unlike Ruby/Python.
When something has to be correct I define it model it in Haskell as its rich type system helps you write correct code. Though setup is not painless as Go, decent performance.
I write good documentation, deployment instructions right into mono repo. For a small team this is more than enough imho.
No Docker, no Kubernetes, just simple scripts + Grafana + Prometheus + Loki + Alloy/node_exporter. Life couldn't be any simpler than this.
I am in a similar place.
Especially regarding Bash.
I used to be at a few companies where most developers just couldn't/wouldn't write in more than one language, and it was always a pain to maintain the different runtimes, languages, packages, and internal dependencies of things that could have been a 20-line Bash script, and that had to be maintained and updated from time to time.
I understand people have their own limitations and reasons, but having to constantly deal with “wrong tool for the job” for the thousandth time gets frustrating.
Especially in cases where four different languages were used across the company because different people had different preferences. Worst case was Python/Ruby/C#/Javascript.
I get that Bash is not perfect, but I enjoy its simplicity and directness, and the multitude of problems caused by not using it has shown me it's the better tradeoff.
Funny, I have also converged on shell scripts for simple scripting or configuration, but I use /bin/sh for portability. Many of the machines I use do not even have bash installed.
> 1. Go, when I first saw code I wrote almost a decade ago still compiles and runs in Go, I decided to use Go for everything. There were some initial troubles when I started using it a decade ago, but now it's painless.
And fewer dependencies, and fewer vulnerabilities (if any at all, depending on your few dependencies).
Go is "only" a pain when you want to use your own copy of packages (because `replace` directives are ignored everywhere except in the "root" package), and whenever you want to work with private Git repositories outside of the forges that have hardcoded config in the Go tooling (like GitHub), because Go assumes there's an HTTPS server, and the only way to force it to use only SSH is ugly workarounds, AFAIK.
But despite this I still prefer it for personal projects because I can come back after not touching it for years, and the most I need to do is maybe update `golang.org/x/net` or something like that.
Well, Java would compile and work for three decades straight. If anything, Go did have an actual breaking language change (for-loop variable capture).
Note that Java makes breaking changes all the time, which is why it publishes a compatibility guide with each major release. These are usually judged to be minor breakages, but if you have a codebase on the order of millions of lines, there's a very good chance that at least one thing will break and require a little bit of work to upgrade. And Java's not unique here, every stable language makes changes all the time that have the potential to break some user in some edge case.
Not that I need to tell you of all people, but I do find that Rust's editions system is one of the better ways to minimise this issue.
I'm in the same boat. I started using Go only a year ago, but now I don't really want to use anything else for apps or data processing. I wrote an app that loaded a lot of data into DuckDB for reporting. I've been doing so much Java and JavaScript that Go felt much simpler to deal with overall.
Shell for the scripts. I haven't tried to work through much DSL, as I'm really not a fan of DSLs. Maybe I'll give Haskell a shot again to see if it sticks.
The funny thing is how ubiquitous TypeScript/JavaScript is. There is no escape. I also only use four languages: C#, F# (for DSL), Powershell (for deployment) and... TypeScript.
Even though we have different tastes in languages and are in completely different ecosystems, TypeScript is still the lingua franca, lol.
Whether there is any escape from JS/TS is a matter of what you are building and who is around you. If you are building SPAs all day, then sure, you will probably have to deal with the JS/TS ecosystem. If you are just building websites, basically any traditional web framework will do. It then mostly depends on whether you have to work, as a team, with people who don't know web basics, or people who want to use JS web frameworks even when there is no need, leaving you no choice.
In theory most websites could be done statically with rendered HTML and CSS, and maybe a little bit of JS, though none of it mandatory, with noscript fallback flows. MPAs are fine for most things, and noscript fallback flows can be handled fairly systematically; in many cases it isn't that difficult. It's just that these days not many people bother or care.
IME Ruby is really good for working alone on tiny projects without an IDE (trying to get more than syntax highlighting causes problems). Sometimes I write single-file scripts or even just use interactive Ruby.
Good for you? I’m glad you have languages that fit your needs.
In the realtime/high-assurance systems world, where garbage collection can be a huge source of non-determinism and overhead, we don't have great options.
Zig is really the only language (idk about Odin?) trying to take the same approach that C did in giving you absolute control over a minimally abstracted CPU model. Us folks who need/want maximum control/performance should be allowed to have nice things too.
I'm with you on Go and SQLite; I dropped Postgres for many of my projects. I might add: HTMX instead of a TS frontend. Very few apps need a TS/React/... frontend; it doubles the development effort for minimal gain (except for games etc.).
I dabbled with Rust some years ago. I think it is an excellent choice for sudo-rs and the like, but for GUI and web apps I (perhaps I'm too stupid) end up with Arc<Mutex> soup.
https://www.radicalsimpli.city
Yeah, after writing some semi-production Haskell apps (I ported an old service at a previous company to Haskell and tried to productionize it enough for our staging environments), that's the conclusion I came to about using Haskell.
Curious if you've tried to use agents to read / write Haskell and how the experience has been?
Would love to use Go for SaaS, but things like OmniAuth (RoR) make me stay with Ruby. I had actually never used Ruby before, but I think it's a swell language to do SaaS in.
I went the same way, but using only Lisp dialects like Elisp and Clojure, plus Nix. Although I would ditch Nix too if another Lisp could supplant it.
Obvious follow-up that's begging to be asked -- if you like nix and want a lisp, have you tried guix/guile?
I'm curious about your Clojure setup. Same as Go, I think Clojure has very strong backwards compatibility.
If trying to avoid the cloud, like OP, which hosting option is suitable for Clojure? What do you use? I believe Clojure (the JVM) has higher RAM requirements?
And Go has pocketbase.io, which looks quite interesting. Do you know whether something similar exists for Clojure, or maybe it's straightforward enough to compose your own using various Clojure libs?
Elisp and Common Lisp for me, although I still use bash in the terminal.
I also LOVE Go, but recently rewrote a small tool in Lisette [1]. It was the most fun I've had in a long time while programming.
I can highly recommend it, especially since you have Haskell experience (you get all the usual suspects in Lisette code: ADTs, exhaustive pattern matching, etc.). It has a fast compiler and produces human-readable Go code. It also comes with great tooling out of the box (formatter/LSP etc.).
1. http://lisette.run
Which Sqlite library are you using? With or without cgo?
Why did you give up on Java and Rust?
Java is a resource hog when you use the patterns and libraries popular in Java land. When you are working in the Java ecosystem, you just assume that this much resource is needed by the app! But when you code the same thing in Go using the same methods, you'll find resource usage is really very low.
We have a 1:1 copy of the app: on the JVM with Spring Boot it uses 2GB RAM; in Go it runs in 512MB RAM and is blazingly fast.
Of course it's possible to tune a Java app, but why bother, when we get the same low resource usage and better performance in Go from the get-go, while still writing naive and dumb code?
Deployment is super simple in Go: upload a single cross-compiled binary and it's done. Very simple and easy.
Rust needs a lot more effort to write correct code than Go in my experience. We get the same performance out of Go, with much less effort. At some point, it's just cheaper to start one extra instance than perform some low-level optimisation; modern hardware is fast enough that Rust-level optimisation is rarely needed for what we do.
You are comparing a (the most?) featureful web framework to a vanilla HTTP server... of course one will be significantly more resource-heavy.
> using SpringBoot
Well, there's your answer, isn't it?
I can't really agree on Rust. It does take a bit more time to write the same code in Rust vs. Go, but in my experience the code is much more likely to be incorrect in Go than it is in Rust, which over longer periods means Rust is easier to maintain.
On the other hand, most pieces of software in this world are kind of mediocre code written by unmotivated employees within tight timelines.
In such context, I think Go might be a better or at least, more realistic, compromise in most cases.
If you have unmotivated employees, then using Go will only exacerbate the shortcomings it has. Cutting corners is much easier in Go than it is in Rust. But in general it's true: if you want a piece of code released a bit faster but will spend more developer hours maintaining it later, then Go is the better fit. And there are definitely use cases for that.
You can write exploratory code in Rust fairly quickly, it's just obvious when you've done so due to the heavy boilerplate involved. Keep in mind that the earliest versions of Rust were actually very Golang-like, the language iteratively evolved towards what it is today.
This article convinced me to switch from Go to Rust: https://discord.com/blog/why-discord-is-switching-from-go-to...
The issues with Go in that article only surfaced at Discord scale.
I'm not sure the effort part makes sense now that we have LLMs? LLMs basically liberate language choice, which has made Rust incredibly attractive to me since I basically get good performance out of the box, while any possibly annoying pedantic obsession with correctness can be easily handed over to the LLM.
If I use a JVM language, running my test suite takes 10 to 30 seconds. With Rust it spends 3 seconds compiling and half a second to run 250 tests.
The irritating parts of Rust are more related with bloated libraries like serde that insist on generating code which massively slows down compilation for not much benefit.
> If I use a JVM language, running my test suite takes 10
Sounds like a bad build tool.
I don't understand why Zig's `Io` is a "monad". In fact I discussed that with the author of this article and the author of Zig here, but no conclusion was reached (https://news.ycombinator.com/item?id=46129568).
But, flipping the script, if you want to see something like Zig's `Io` interface in Haskell then have a look at my capability system Bluefin, particularly Bluefin.IO. The equivalent of Zig's `Io` is called `IOE` and you can't do IO without it!
https://hackage-content.haskell.org/package/bluefin-0.5.1.0/...
Regarding custom allocators and such, well, that could fit into the same pattern, in principle, since capabilities/regions/lifetimes are pretty much the same pattern. I don't know how one would plug that into Haskell's RTS.
Agreed, Zig's IO is closer to the effect handler / capability passing model. And by closer, I mean exactly the same [1]. However, it's related to monads by duality. A comonadic program is a program that depends on context, which captures the notion of passing capabilities around.
[1] Languages designed around capability passing often have other features, like capture checking to ensure capabilities aren't used outside the scope where they are active. There are only two such languages I know of. Effekt (see https://effekt-lang.org/tour/captures) and Scala 3 (see https://docs.scala-lang.org/scala3/reference/experimental/cc...) However, this is not core to the idea of capability passing.
> I don't understand why Zig's `Io` is a "monad".
I don't see how it's true in any meaningful sense. It seems about the same as stating that any function is an example of the reader monad.
The whole point of monads in programming languages is as an _abstraction_ that allows one to ignore internals like how the IO token is passed around.
Maybe Zig is a language for people who are scared of abstraction. Otherwise they'd presumably be using something more powerful like Rust.
I guess that if a burrito can illustrate what a monad is, anything can be cast as a projection of a monad from some perspective.
https://i.imgflip.com/65gu3j.jpg
I am going to look at Zig after 1.0 is released. The current state is that you are playing catch-up with the language if you have any reasonably sized project in Zig; a new release might mean that you need to rewrite a significant portion of your code.
Do you really prefer this:
Over this?

Optionals handle this in Zig. Write: … Read: …

Sure, but this is an example from the article, and it pertains to sum types in general, not just Maybe.
I don't think it's generally a good idea to be making complex type generators like this in Zig. Just write the type out.
The annoyingness of the thing you tried to do in Zig is a feature. It's a "don't do this, you will confuse the reader" signal. As for optionals, it's a pattern so common that it's worth having built-in optimizations, for example @sizeOf(*T) == @sizeOf(usize) but @sizeOf(?*T) != @sizeOf(?usize). If optionals were a general sum type you wouldn't be able to make these optimizations easily without extra information.
The point is that algebraic data types are common in functional languages. "Maybe" is just an example of an algebraic data type, there's tons more.
If the article says "functional programmers should take a look at Zig", and Zig makes algebraic data types hard, then maybe they shouldn't use it.
If you even say "the annoyingness is a feature, use zig the way it is intended to be used" then that's another signal for functional programmers that they won't be able to use zig the same way they use functional languages.
> if optional were a general sum type you wouldn't be able to make these optimizations easily without extra information
Rust has these optimizations (called "niche optimizations") for all sum types. If a type has any unused or invalid bit patterns, then those can be used for enum discriminants, e.g.:
- References cannot be null, so the zero value is a niche
- References must be aligned properly for the target type, so a reference to a type with alignment 4 has a niche in the bottom 2 bits
- bool only uses two values of the 256 in a byte, so the other 254 form a niche
There's limitations though, in that you still must be able to create and pass around pointers to values contained within enum, and so the representation of a type cannot change just because it's placed within an enum. So, for example, the following enum is one byte in size:
Variant A uses the valid bool values 0 and 1, whereas variant B uses some other bit pattern (maybe 2). But this enum must be two bytes in size:
...because bool always has bit patterns 0 and 1, so it's not possible for an invalid value for A's fields to hold a valid value for B's fields.You also can't stuff niches in padding bytes between struct fields, because code that operates on the struct is allowed to clobber the padding.
Yes, the care that Rust goes through to ensure that niches work properly, especially when composing arbitrary types from arbitrary sources, shows why you absolutely don't want to be implementing these optimizations by hand.
Came to say this. Early in my career I really thought implementing Maybe in any language was necessary, but now I know better. Use the idioms, and don't try to make every language something it's not.
This looks like an example of a low level language vs a high level language (relatively speaking). The low level language makes a lot more of what is going on underneath explicit compared to the higher level language which abstracts that away for a common pattern. Presumably that explicitness allows for more control and/or flexibility. So apples to oranges?
I don't think so, where's the extra information in the Zig example?
In Rust, which is arguably also a low level language, it looks like this:
Low-level doesn’t mean more information, it means more explicit.
In Zig, that means being able to use the language itself to express type-level computations, instead of Rust's angle brackets, trait constraints, and derive syntax, or C++ templates.
Sure, it won’t beat a language with sugar for the exact thing you’re doing, but the whole point is that you’re a layer below the sugar and can do more.
Option<T> is trivial. But Tuple<N>? Parameterizing a struct by layout, AoS vs SoA? Compile time state machines? Parser generators? Serialization? These are likely where Zig would shine compared to the others.
I don't think there is a standardized meaning of 'low-level'. I think a useful definition is that a low-level language controls more/is explicit about more properties of execution.
So Zig/C/C++/Rust all have ways to specify when and where allocations should happen, as well as the memory layout of objects.
Expressivity is a completely different axis on which these low-level languages separate. C has ultra-low expressivity: you can barely create any meaningful abstraction there. Zig is much better at the price of a remarkably small amount of extra language complexity. And C++ and Rust have a huge amount of extra language complexity for the high expressivity they provide (having to be expressive even about low-level details makes, e.g., Rust more complex as a language than a similar GC'd language would be, but this is a necessity).
As for this particular case, I don't really see a level difference here, both languages can express the same memory layout here.
> Option<T> is trivial. But Tuple<N>? Parameterizing a struct by layout, AoS vs SoA? Compile time state machines? Parser generators? Serialization? These are likely where Zig would shine compared to the others.
I don't see how any of that becomes easier in the Zig case. It's just extra syntactic ceremony. The Rust version conveys the exact same information.
My old memories of Guava in Java 6 have been triggered.
I found this funny. I am not sure if it was intended that way!
> Monads are not some kind of obscure math-y thing that only the big brains think are necessary. No, instead monads are a fundamental abstract algebraic description of imperative programming as a computational context.
Yep, as a non-big-brainer, I definitely get it now. :)
You need to write a monad tutorial to really get it.
https://news.ycombinator.com/item?id=47958106
Io is not a monad. There's nothing stopping you from stashing a global Io "object" and just passing the global wherever you interface with the stdlib.
It's dependency injection. And yes, you can model dependencies like a monad, but most people, even in less pure FP langs, don't.
I don't really say this just to be a pedant, but if you're an FP enjoyer, you will be disappointed if you get the impression that Zig is FP-like, outside of a few squint-and-it-looks-like things.
My reading of the article was that the author seems to be in search of a new paradigm that moves beyond what he sees as the limitations of "fp-like" languages as they exist today. His point appears to be that Zig provides the benefits of today's "fp-like" languages while avoiding at least some of the downsides.
And he does admit you may have to squint to appreciate the FP capabilities provided by Zig.
It is worth noting that some rather "enlightened" type system features are common in other imperative languages, and are not particularly novel ideas in Zig.
For example Swift enums, while in some ways clunky, can do a decent job both as newtypes and as sum types (unlike Java enums, which are a fixed collection of instances of the same class).
I am not even sure if it's a general pattern (inject any dependency?) or a specific pattern they added to Zig.
Idk, in Elixir we basically do exactly what's happening with Io parameters when mocking or swapping implementations that all satisfy the same behaviour.
Here, I am not the only one who refers to it as dependency injection:
https://daily.dev/blog/zig-async-io-io-uring-zig-0-16-rethin...
"Zig 0.16 introduces std.Io, a flexible I/O abstraction that uses dependency injection, similar to the Allocator interface"
Sigh. I meant that the Zig authors did not make it a general pattern and just slapped on the DI pattern specifically for Io, instead of generalising the abstraction so people can DI stuff.
While using "monads" in functional languages is a neat trick, I do not like them.
In my opinion, the concept of automaton is fundamental and it deserves equal standing with the concept of function (even if it is a higher level concept that is built upon that of function).
I believe that functional programming is preferable wherever it is naturally applicable, and most programs have components of this kind, but most complete application programs, i.e. those which do input and output actions, are automata, not functions, and it is better not to masquerade this with tricks that provide no benefits.
Therefore, I prefer a programming language that has a pure functional subset, allowing the use of that subset where desirable, but which also has standard imperative features (e.g. assignment), to be used where appropriate.
For me monads are similar to inheritance. There are areas where one topic/functionality is dominant and it can really help to define a base class in a library or define a monad like for async. The moment you start to mix/compose things, things get ugly pretty fast.
You can't just put assignment in a functional language though - you lose the ability to fearlessly refactor that's the whole point. You either need something like a stratified language (which I've never seen actually implemented, much less production-ready, as much as I like the design of Noether), or you use, well, monads.
> look at the era of software that garbage collectors have ushered in. Programs are bloated, slow, and wasteful compared to the literal super-computers that are running them.
I don't think this even qualifies as correlation.
My stack today is kinda nice but perhaps a bit odd:
- Go - backend + CLIs
- TypeScript - frontend, occasionally zx for more complex scripts
- Nushell as my scripting language (I’ve been relentlessly using it everywhere I can instead of bash/zsh and man it is such an improvement)
I heard so much good stuff about both Zig and Rust and would love to eventually get to know one of them.
Nushell +1. After ~20 years of bash+zsh, I'm translating all my scripts to nu.
Yesterday I noticed I still don't know how to write
in zsh after all these years.

Nushell, from their website, looks a lot like PowerShell's idea of a shell, but less verbose.
Yup. Which is kinda funny, because back when I was a young dev using Windows I never liked nor understood PS.
> when I was a young dev … never understood PowerShell
This makes me feel old.
Maybe procedural programmers should take a look instead. I don't see functions.
[https://en.wikipedia.org/wiki/Function_(mathematics)]

Under this strict definition you can't even throw exceptions!
Of course you can: you just have to define it in your type. The output set becomes a union type of the normal output and whatever you want as an exception.
If you write this as a monad, you get very similar syntax to procedural code.
I get what you are saying, but…
An exception is different from an Either result type. Exceptions short-circuit execution and walk up the call tree to the nearest handler. They also have very different optimizations in practice (e.g. in C++).
In what way is that different? You return early and the call cascades up the call chain until you handle it (otherwise it's always an Either result).
In practice you use something like an exception monad, which makes this a lot more ergonomic, since you don't need to carry a case distinction around for every unwrap: an exception monad essentially has an implicit passthrough that says "if it's a value, apply the function; if it's an exception, just keep that". You only need to "catch" the exception if you actually need the value. In this case the exception monad is not that different from annotating a function with "throws": your calling function either needs its own throws (= error monad wrapper), in which case exceptions just roll through, or you remove the throws but now need to handle the exception explicitly (= unwrap the monad).
Then allow partial functions too. Maybe even require them to be tagged as such. (Is that within the capabilities of Zig's programmable type system?)
I don't mind escape hatches - as long as they're visible/greppable in the source code. You can always write undefined/error/panic/trace directives while you're coding, then come back and remove them later.
I would love a language that distinguishes functions (pure mathematical constructs) from procedures (imperative constructs that map in a predictable way to the instruction set).
This feels like the direction Algebraic Effects might take us.
> Well, I’ve been radicalized. I’ve learned enough performance-oriented programming to be dissatisfied with the common functional languages (Haskell, OCaml, Common Lisp/Clojure, Scheme) because each of these languages are predicated on the existence of garbage collection and heaps.
I would take another look at Common Lisp if I were the author. Manual memory management is very much an option where you need it.
> Noise is anything that must be written for the program to function that is not relevant to the domain.
> ...
> What facilities does the language provide me to create correct-by-construction systems and how easily can I program the type-system.
Isn't programming the type-system orthogonal to the program's domain in the same way that manual memory management is?
No? I don't agree. The domain can be strongly modelled in the types; for instance, declaring kilometers, seconds, etc. instead of using primitive floats/reals everywhere, to statically prevent dimensional analysis issues.
Unrelated, but I was pleased at how fast the page opened: it felt pretty much instantaneous!
I opened the network log, disabled cache and reloaded to see it only transferred 8kb.
Keep up the good work!
I keep hearing about monads, but is it not the case that they have well-known flaws? And is that not the reason algebraic effects are interesting, because they don't have those flaws?
Monads are a math/organization pattern. What flaws do you mean?
A functional programmer who casts away proper sum types and pattern matching is no functional programmer at all
You can do functional programming without strict typing. Not common, because strict types work just so well with the FP paradigm but definitely possible, it’s not in itself a contradiction
I thought lisps were all functional programming, and lack sum types and pattern matching?
In which case, what's the term for the "proper sum types and pattern matching" flavour of things?
Those are covered in Common Lisp, Scheme/Racket, and Clojure, which are the Lisps most folks would be using, not Lisp 1.5 from McCarthy's days.
I think the lisp situation is peculiar, for 3 main reasons:
- most of them are dynamically typed (thus don't need sum types, as there are no static types). The ones that do have gradual type systems likely implement some form of them (off the top of my head I can only remember Typed Racket, and I think it implements them through union types)
- not all lisps lean functional: I believe that's mostly a prerogative of scheme and clojure (and their descendants); something like CL is a lot more procedural, iirc
- in most lisps, thanks to macros, you probably don't need the language to support some sort of match construct out of the box: just implement it as a macro [1]
In general the "proper sum types" side of functional programming is just the statically typed one, but even in dynamically typed FP languages you end up adopting sum type-esque patterns, like elixir's error handling (which closely resembles the usual Either/Result type, just built out of tuples and atoms rather than a predefined type), and I assume many lisps adopt similar patterns as well
[1] https://github.com/clojure/core.match
Most Lisps have some sort of pattern matching in their standard library. Common Lisp has sum types with deftype.
(Pure) expression orientation is the true marker of FP
It’s possible (even true in my opinion) that garbage collected functional languages and low level languages like Zig are both great, and serve different purposes.
I actually ship stuff in Haskell believe it or not. I also think Zig is very cool and have played around with it quite a bit. Yes, garbage collection hurts performance, but the reality is that the overwhelming majority of all software does not suffer from the performance loss between well written code in a reasonably performant functional gc language and a highly performant language with manual memory management. It’s just not important. But not having to deal with the cognitive overhead of managing memory and being able to deal in domain specific abstractions only is a massive win for developer productivity and code base simplicity and correctness.
I think OxCaml's approach of opting in to more direct control of performance is interesting. I also think it's great that many functional patterns are making their way into imperative-first languages. Language selection is always about trade-offs for your specific use case. My team writes Haskell instead of Rust because Haskell is plenty fast for our use case and we don't have to write lifetime annotations everywhere and think about borrowing. If we needed more performance we would have no choice but to explore other languages and sacrifice some developer experience and productivity; that's very reasonable. I'm also not saying performance doesn't matter (if you're writing for loops in Python, stop). But this read to me like "because better performance exists with manual memory management, all garbage collectors are bad, so I'll force Zig to be something it's not in order to gain performance I probably don't need". Which to me is an odd take. A more measured way of thinking about this might be: it can be useful to leverage functional patterns where appropriate in low-level languages, if you find yourself needing to write code in one.
Anyone preferring functional programming will be extremely disappointed with Zig. And I'm saying this as a big user of Zig. It's a language for imperative code. And Io is not a monad, just a bunch of virtual methods doing the actual I/O.
Do Zig's algebraic types seem clunky? Or is this a false impression, and I'm just not getting it?
From the article: "Where the next Programming Language will come from?", which beautifully described the sad state of things. His main point is that the incentives for programming language innovation are at best misaligned and at worst non-existent.
OK, Zig is great. But won't it still suffer from the same headwinds as every other 'better' language, namely that industry won't adopt it? They have too much installed base and just want to hire Java/C#/etc.
Question for Zig users:
Can comptime blow up compile times? Does it have arbitrary cutoffs like C++ template depth?
You can think of comptime (as of zig 0.16) as an interpreter that evaluates code with very limited optimization. So yes, naive use of comptime can definitely grind compilation to a halt.
Zig tackles the halting problem a bit differently by putting the evaluation cutoff in userspace through the compiler builtin function `@setEvalBranchQuota`. You bump up the quota as you see fit.
I've recently been trying to port a simple program of mine to Mojo, to get a feel for the language. And the comptime feature (which I think was inspired by Zig) is an absolute joy to use. It helps a lot that the syntax looks like Python. Excited to see how the language evolves, particularly its memory-safety paradigm.
I would encourage everyone remotely interested in Zig to have a look at Odin[1]. If, like me, you read that article and found yourself muttering "what the hell," then you might appreciate Odin's simplicity and design consistency.
I am definitely in the minority here, but I am not a fan of the kind of meta-programming that Zig and Rust offer, with Rust being especially atrocious. In the two decades I've been programming I can count on one hand the number of times meta-programming was an appropriate solution to a problem I had. Every time I reached for it, I got bit. There's a reason "when in doubt, use brute force" is sage advice, it may not be fast and glamorous, but it'll be a hell of a lot less opaque.
[1] https://odin-lang.org/
Same. Meta programming is nice when it fits the problem, but most meta programming I’ve seen has been a net negative.
Odin is also my favorite language in its class. It’s genuinely a gem.
I'm still fighting with Elixir and losing - for some reason I can't get my head around all the slightly different ways to initialise stuff.
Do you mean config and runtime variables etc (i.e. in Phoenix)?
"slightly different ways to initialise stuff."
Can you elaborate? There's only, what, 11 data types in Elixir?
Perhaps they are referring to the syntactic sugar around keyword lists?
[a: 1, b: 2] == [{:a, 1}, {:b, 2}]
Or maybe atom vs string keys in maps?
%{a: 1} vs %{"b" => 1}
Or keyword lists always needing to come last in lists?
[some: :value, :another] # error
[:another, some: :value] # valid
Or maybe something else entirely. Those are just things I remember having to lookup repeatedly when I was first learning elixir.
These are the ones. I just can't remember them.
> I see http_client as existing in a Reader monad that contains an allocator and an IO interface.

This is exactly how the IO monad (and, for that matter, IO#) works in Haskell. The fact that the Zig people came up with this independently speaks to the universal nature of monads (and of the algebraic structures underlying programming languages).
Honestly this sounds like monad bullshit. That's a struct/class/ADT/whatever you want to call it; they've existed forever. The only idea Zig had was that maybe we shouldn't make them global instances.
This is the approach Jank is taking, which is ironic because Zig is decoupling from LLVM.
Programmers? What are they?
comptime is a restricted form of dependent typing.
In addition to the usual value-to-value, type-to-type, and type-to-value functions, comptime lets you write static value-to-type functions.
With full dependent types you can additionally write dynamic (runtime) value-to-type functions, completing the value-to-type corner.
So in terms of typing strength: plain Haskell < Zig < dependently typed languages.
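A small sketch of what "static value to type" means here (the names `Vec`/`Vec3f` are mine, not from the thread): a function takes a compile-time-known value and returns a brand-new type. The restriction, versus full dependent types, is that `n` must be known at compile time.

```zig
// A value-to-type function: `n` (a value) and `T` (a type) flow into
// a freshly constructed type. Both arguments must be comptime-known.
fn Vec(comptime n: usize, comptime T: type) type {
    return struct {
        data: [n]T,

        fn zero() @This() {
            return .{ .data = [_]T{0} ** n };
        }
    };
}

// The value 3 participates in the type, statically:
const Vec3f = Vec(3, f32);
```

In a dependently typed language, the analogue of `n` could be a runtime value (say, read from user input), and the type system would still track it; in Zig the branch from values to types only opens at compile time.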
Isn't the whole point of abstraction to not care about what's underneath unless you really have to? But ideally you don't, because the abstraction is "good enough"?
I haven't heard anyone writing code in Elixir complain about performance issues.
What’s up with the last paragraph? Nobody is complaining because the BEAM is good enough for the typical use case?
because you're not reaching for elixir when you need performance.
btw we do sometimes bitch about performance :)
But then we have ports/NIFs etc that shell out to Rust & Zig ...
Not a silver bullet. There are also C nodes but they’re used even less.
I asked because I do it sometimes too.
I don’t get it
Why write:
EqPoint.eql(a, c)
When you can write:
Point.eql(a, c)
I believe EqPoint allows you to pass around a bag of functions (aka an interface, which Zig does not have as a concept) to functions which can be written in terms of "I need these functions" rather than in terms of a concrete type.
the autistic instinct to re-write every wheel in the latest shiny thing