Am I the only one who never tried Elixir just because it has no strict typing?
Seems very hard for me to go back to a language with dynamic typing. Maybe I'm just wrong and I should give it a try.
It is alleviated quite a bit by its pattern matching capabilities combined with the "let it crash" ethos.
They have a success typing system (which isn't very good) and are working on a fuller system (which isn't very mature).
If typing is the only thing keeping you out, have a look at Gleam.
Having worked with Elixir professionally for the last six years now, it is a very mature platform, very performant and offers many things that are hard in other languages right out of the box.
I see this phrase around a lot and I wish I could understand it better, having not worked with Erlang and only a teeny tiny bit with Elixir.
If I ship a feature that has a type error on some code path and it errors in production, I've now shipped a bug to my customer who was relying on that code path.
How is "let it crash" helpful to my customer who now needs to wait for the issue to be noticed, resolved, a fix deployed, etc.?
>How is "let it crash" helpful to my customer who now needs to wait for the issue to be noticed, resolved, a fix deployed, etc.?
Let it crash is more about autorestarting and less about type bugs. If you have a predictable bug in your codepath that always breaks something, it just means you never tested it and restarting will not fix it. But this kind of straightforward easy to reproduce bugs are also easy to test the hell out of.
But if you have a weird bug in a finite state machine that gets itself into a corner, but can be restarted -- "let it crash" helps you out.
Consider hot reload -- a field exists in a new version of a record, but doesn't exist in an old one. You can write a migration in the gen server to take care of it, but if you didn't and it errored out, it's not the end of the world: it will restart and the problem will go away.
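A minimal sketch of that migration hook, assuming a hypothetical `Counter` gen server whose new version adds a `:last_seen` field to its state (module and field names are made up):

```elixir
defmodule Counter do
  use GenServer

  def init(n), do: {:ok, %{count: n}}

  # Hypothetical hot-upgrade migration: the old state was %{count: n},
  # the new version also carries :last_seen. If this callback were
  # missing and the old state crashed the new code, the supervisor
  # would simply restart the process with a fresh state.
  def code_change(_old_vsn, %{count: _} = state, _extra) do
    {:ok, Map.put_new(state, :last_seen, nil)}
  end
end
```

The callback can be exercised directly, e.g. `Counter.code_change(1, %{count: 5}, nil)` returns the old state with the new field added.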
I wonder what production monitoring is like for that. Does Elixir have a good monitoring platform like NewRelic for tracking the state of these processes?
The core of a properly built, resilient/robust system is that you have compartmentalized code into different small erlang processes. They work together to solve a problem. A bug in one is isolated to that particular process and can't take the whole system down. Rather, the rest of the system detects the problem, then restarts the faulty process.
The reason this is a sound strategy is that in larger systems, there will be bugs. And some of those bugs will have to do with concurrency. This means a retry is very likely to solve the bug if it only occurs relatively rarely. In a sense, it's the observation that it is easier to detect a concurrency bug than it is to fix it. Any larger system is safe because there's this onion-layered protection approach in place so a single error won't always become fatal to your system.
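A small runnable sketch of that isolation (the `:feature_a` / `:feature_b` names are made up): two Agents under one supervisor, where killing one restarts only that one and leaves the other untouched.

```elixir
children = [
  %{id: :a, start: {Agent, :start_link, [fn -> 0 end, [name: :feature_a]]}},
  %{id: :b, start: {Agent, :start_link, [fn -> 0 end, [name: :feature_b]]}}
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)

pid_a = Process.whereis(:feature_a)
pid_b = Process.whereis(:feature_b)

# Simulate a bug taking down :feature_a
Process.exit(pid_a, :kill)
Process.sleep(50)

# :feature_a was restarted under a new pid; :feature_b never noticed
new_a = Process.whereis(:feature_a)
```

With `:one_for_one`, the supervisor restarts only the crashed child; the sibling keeps its original pid and state.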
It's not really about types. It's about concurrency and also distribution. Type systems help eradicate bugs, but it's a different class of bugs those systems tend to be great at mitigating.
However, if you do ship a bug to a customer, it's often the case you don't have to fix said bug right away, because it doesn't let the rest of the application crash, so no other customer is affected by this. And you can wait until the weekend is over in many cases. Then triage the worst bugs top-down when you have time to do so.
In addition to the other fine replies, it helps to remember that Erlang/BEAM, and by extension Elixir, comes from the telephony world. If something crashes in that world, you'd far rather terminate a phone call spuriously than bring the whole system down. (And in the 1990s, when the main alternative was C, and not just C as it may be today but 1990s C specifically, that was a reasonable concern.) Erlang is optimized for a world where that's a reasonable response to a failure. I've also used it in a context where a system makes a persistent connection up to a controller, and if either side crashes they automatically reconnect. "Let it crash" is a reasonable response to a lot of issues that can arise.
The farther you get from that being an issue, the less useful the "let it crash" philosophy becomes, e.g., if I hit "bold" in my word processor and it fails for some reason, "let it crash" is probably not going to be all that helpful overall.
I have seen systems that "should" have been failures in the field be held together by Erlang's restart methodology. We still had to fix the bugs, but it bought us time to do it and prevented the bad deployments from being immediate problems. But it doesn't apply to everything equally by any means.
Okay but “static typing avoids the crash to begin with”, is what everybody from the static typing world is thinking reading all these very long responses that don’t address this very basic idea.
I like to measure languages by "what do I miss when I leave them".
I used Erlang professionally for about 5-6 years.
I have not missed "let it crash".
It's an interesting idea that should be grappled with, but Erlang users have this tendency to very badly strawman their opposition and spout their propaganda like it's still 2005 and the rest of the world has just been twiddling their thumbs, gormlessly ramming into brick walls as they scale up and are just flabbergasted about how to handle the problems of scale, when in fact, to a first approximation all of the large systems in the world that are reliable are also not written in BEAM. There are other solutions to the problem. Strong types are a big component of how I deal with this in my current code, yes. It's not 100% a solution, but then, I can have both anyhow so I generally do.
I was just providing context. When trying to understand BEAM it is always helpful to have in the back of your head that it was written for telephony, with all that implies.
"Crashing is loud" below is a phrase to combine with "remote error recovery" from the link above. Erlang/OTP wants application structure that is peculiar to it, and makes that structure feel ergonomic.
> If I ship a feature that has a type error on some code path ... How is "let it crash" helpful to my customer?
The crash can be confined to the feature instead of taking down the entire app or corrupting its state. With a well-designed supervision structure, you can also provide feedback that makes the error report easier to solve.
However, while a type error in some feature path is a place that makes type annotations make sense, type annotations can only capture a limited set of invariants. Stronger type systems encode more complex invariants, but have a cost. "Let it crash" means bringing a supervisor with simple behavior (usually restart) or human into the loop when you leave the happy path.
> "Let it crash" means bringing a supervisor with simple behavior (usually restart) or human into the loop when you leave the happy path.
If a "human" has to enter the loop when a crash occurs, this limits the kind of system you can write.
I had to work on a system where a gen server was responding to requests from a machine, sent frequently (not high frequency, but a few times per second.)
If for some reason the client misbehaves, or behaves properly but happens to use a code path that has a type error, the only option given by "let it crash" was to, well... crash the actor, restart the actor, then receive the same message again, crash the actor, restart the actor, etc... and eventually you crash the supervisor, which restarts and receives the same message, etc...
Crashing is loud. You will get crashes in your logs. And you can let it happen because those crashes won't disrupt anything else - that's what the message passing gets you.
So sure, the code with the error won't work (it wouldn't work in any language - you can make an error in all of them), but you will get a nice, full stack trace and the other processes in your VM won't be impacted at all. You won't bring down the service with a crash. Sometimes this is undesirable - you could deploy a service where the only endpoint that functions is the health check - but generally people don't do that.
I feel like the other comments here aren't expressing the deeper point, and you're asking to really understand something, so that's what you care about. So I'm sorry to pile on when you've got like a dozen good answers, but.
Let It Crash refers to a sort of middle ground between returning an error code and throwing an exception. It does not directly address your customer's need, and you are right that they are facing a bug.
So if you were to use Golang with Let It Crash ethos, say, you would write a lot of functions with the same template: they take an ID and a channel, they defer a call to recover from panics, and on panic or success they send a {pid int, success bool, error interface {}} to the channel -- and these are always ever run as goroutines.
Because this is how you write everything, you have some goroutines that supervise other goroutines -- for example, auto-restarting another goroutine with exponential backoff. But also, the default is to panic on every error rather than writing endless "if err != nil return nil, err" statements. You trust that you are always in the middle of such a supervision tree and that someone has already thought about how to handle uncaught errors, because supervision trees are just the style of program that you write.

Say you lose your connection to the database -- it goes down for maintenance or something. Well, the connection pool for the database was a separate goroutine in your application, and that goroutine is now in CrashLoopBackoff. But your application doesn't crash. Say it powers an HTTP server: while the database is down, it responds just fine to any requests that do not use the database, and returns HTTP 500 on all the requests that do. Why? Because your HTTP library allocates a new goroutine for every request it handles, and when one of those panics, it by default doesn't retry and closes the connection with HTTP 500. Similarly for your broken code path: it 500s the particular requests that hit the bad type assertion (an x.(X) on something that can't be asserted as an X), we log the error, but all other requests are peachy keen -- we didn't panic the whole server.
Now that is different from the first thing that your parent commenter said to you, which is that the default idiom is to do something like this:
type Message struct {
	MessageType string
	Args        interface{}
	Caller      chan<- Message
}

// ...

msg := <-myMailbox
switch msg.MessageType {
case "allocate":
	toAllocate := msg.Args.(int)
	if allocated[toAllocate] {
		msg.Caller <- Message{"fail", fmt.Errorf(...), myMailbox}
	} else {
		// Save this somewhere, then
		msg.Caller <- Message{"ok", nil, myMailbox}
	}
}
With a bit of discipline, this emulates Haskell algebraic data types, which can give you a sort of runtime guarantee that bad code looks bad (imagine switching on an enum: `case TypeFoo: foo := arg.(Foo)` -- if you put something wrong in there, it is very easy to spot during code review because the format is so formulaic).
So the idea is that your type assertions don't crash the program, and they are usually correct because you send everything like a sum type.
Thank you for this very detailed explanation! Between this and the other very welcome replies, I think I have discovered what I didn't understand: "let it crash" addresses a different set of problems than static types, but a dynamic type system can still be helpful in avoiding type errors in the first place, if it leads you to design data structures in a certain way.
Thanks for your answer!
I already checked Gleam several times and it looks amazing. The ecosystem just doesn't feel mature enough for me yet. But I can't wait for it to grow.
True. There is inter-op with both Elixir and Erlang, but that's like early TypeScript.
If you're at all interested, I'd suggest doing the basic and OTP tutorials on the Elixir website. Takes about two hours. Seeing what's included and how it works is probably the strongest sales pitch.
There is also pattern matching and guard clauses so you can write something like:
def add(a, b) when is_integer(a) and is_integer(b), do: a + b
def add(_, _), do: :error
It’s up to personal preference and the exact context if you want a fall through case like this. Could also have it raise an error if that is preferred. Not including the fallback case will cause an error if the conditions aren’t met for values passed to the function.
Writing typespecs (+ guards) feels really outdated and a drag, especially in a language that wants you to write a lot of functions.
It reminds me of the not-missed phpspec, in a worse way, because at least with PHP the IDE was mostly writing it itself and you didn't need to add the function name to them (easily missed when copy/pasting).
True but by using guards + pattern matching structs you can approximate type hinting, but it feels cumbersome and more of a workaround than a real solution.
I'm of the opinion that Erlang/Elixir are terrible for repeat tasks like a standard CRUD server over a SQL database. Because yes, it IS cumbersome! Behaviors and type hints only get so far, and it is exhaustingly slow to sit with epgsql in the REPL to figure out what a query actually returns.
I find them much better suited for specific tasks where there is little overlap or repetition.
I’d recommend it. I used to think I needed statically typed languages to write sound code, but after enough time with Elixir (10 years professionally and going) I really don’t believe that to be true… A huge portion of my change of heart is due to Elixir (and Erlang) being functional and only having a handful of types: integers, floats, atoms, binaries/strings, lists, tuples, maps, and functions.
There are a few others, but they are generally special cases of the ones above. Having so few data types tends to make it much more obvious what you’re working with and what operations are available. Additionally, because behavior is completely separate from data, it’s infinitely easier to know what you can and can’t do with a given value.
Ruby being dynamic drove me insane at times, but Elixir/Erlang being dynamic has been a boon to productivity and quality of life. I recently had to write some TypeScript and was losing my mind fighting the compiler, even though I knew at runtime everything would be fine. Eventually I slathered enough “any” on to make the burning stop… But! That’s something I haven’t had to do in years, and it was 100% due to type system chicanery and not preventing a bug or making the underlying code more sound.
There are still some occasions where having some static typing would be nice— but they’re pretty rare and often only for things that are extremely critical or expensive to fix. And IMHO even in those cases, Elixir’s clarity and lack of (implicit) state generally make up for it.
Here's my take as a TS dev. It sounds like you never learned to use types properly in the first place; in a codebase with close to no `any`s, there is very little ambiguity about any type anywhere. Sounds like you guys get high and mighty on your tunnel-visioned copium. I mean, I used to write JS for god's sake and thought it was okay, and I remember when I first tried TS I felt annoyingly hampered by having to actually use interfaces properly.
Sure, the nature of Elixir probably makes it easier but I find little joy in dynamic whack-a-mole and mental gymnastics to infer types instead of fricking actually being able to see them immediately.
I could go commando in TS as well and switch to JS and JSDoc, leaving everything gradually typed and probably be fine but I'd feel terribly sorry for anyone else reading that code afterwards. It'd be especially silly since I can now just infer my auto-generated Postgres zod schemas with little effort. Moreover, a good type system basically eliminates typing-related bugs which you guys apparently still have.
So please, don't over-generalize just because you think you got it figured out.
Lmao. This has to be the funniest comment I’ve ever read on here. I can assure you, the only copium being huffed is by you, mate. I have written Scala, Swift, Java, and more (including TypeScript). I have zero trouble with type systems, but I have wasted enough time fighting them for little real world benefit. In Elixir, and this is the best part, I don’t fight type bugs. You probably do because type erasure means you can still have them if someone has mistyped something or shit on something at runtime— the joys of JavaScript.
It sounds like you’re really trying to justify something and that’s great for you. I’m really happy for you. Keep it up. May you soar where no junior dev has dared to soar before. God speed.
Hah. Sure buddy, Mr. Sr. Dev. Sorry if I offended you and your vast experience. Without types there definitely aren't type bugs, I grant you that, and JVM languages are quite different from TS. I don't know if your argument on type erasure is any better than advocating for dynamic typing, but it definitely happens when you just start throwing anys around.
And look, I'm first to admit that TS type system isn't perfect (and it can cause some devs to go overboard) but I have read my share of Python scripts that were read-only from the minute they were born.
> It sounds like you’re really trying to justify something and that’s great for you. I’m really happy for you. Keep it up. May you soar where no junior dev has dared to soar before. God speed.
And please, your condescension just sounds like insecurity to me. It's highly amusing though that you try to play me down as a silly junior dev; I'm quite satisfied that my original assessment was correct.
I tried it for a job interview, and it was awful - because of no static typing. I spent most of my time tracking down dumb type errors, compounded by various language footguns (I can't remember exactly, but I think for example making a typo on a field name in a for loop condition is treated as "condition false" and so the for loop just doesn't do anything, no error).
It seems like the Elixir/Erlang community is aware of this, as is Ruby, but it's a rather large hole they have to dig themselves out of and I didn't feel particularly safe using the tools today.
I've heard a lot of good things about the Erlang runtime and I did really like Elixir's pipe operator, so it was unfortunate.
You can also peek inside objects and check for types and values of nested properties. In other languages you’d need to create composite types or even duplicate or extend types. In Elixir, all that is for free. I see that as much more powerful than types.
There's also tooling like dialyzer, and a good LSP catches a lot too. The language itself has some characteristics that catch errors too, like pattern matching and guard clauses.
With all that said, I'm still very keen for static typing. In the data world we mostly start with Python without mypy, and it's pretty hard to go back.
Type systems in this space seem to take a long time and never quite reach completeness. At least that's the experience I've taken away from occasionally glancing at Erlang's Dialyzer project every now and again which I don't think has ever reached any semblance of the maturity of something like TypeScript.
But pattern matching in Erlang does do a lot of the heavy lifting in terms of keeping the variable space limited per unit of code which tends to reduce nesting and amount of code to ingest to understand the behavior you care about at any moment.
Generally I find static typing to be overrated, even more so with elixir due to pattern matching and immutability. I think elixir’s set-theoretic types will be a nice addition and will provide some compile-time safety checks without needing to explicitly define types for everything. It remains to be seen how far they’ll take this approach.
Biggest benefit of typing in TS is just autocomplete that knows to filter down suggestions to field name of the object type before the dot. That and constants without typos. That, exhaustive maps and interface implementations are really good to not forget things that have to be done outside of the currently open file.
really you wind up making only a handful of type errors that make it into prod.
there are other things that contribute to this like pretty universal conventions on function names matching expected outputs and argument ordering.
it does suck hard when library authors fail to observe those conventions, or when llms try to pipe values into erlang functions, and yes, it WOULD be nice for the compiler to catch these but you'll usually catch those pretty quickly. you're writing tests (not for the specific reason of catching type errors), right? right?
I liked the article, but I found it a bit strange that it presented pure functions as a feature of Elixir rather than as a general good practice. Surely there's nothing particularly unique about Elixir in this regard compared to other languages?
> But honestly, I’ve always felt like this objection was overblown. I’ve got nothing against static typing; some of my best friends use statically-typed programming languages.
Elixir is a strongly typed language (as opposed to a weakly typed language like JavaScript or Perl). You cannot, for example, do `"4" - 1` in Elixir. Dynamic vs static typing is a mostly orthogonal scale, where Elixir is (mostly) dynamically typed.
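A quick illustration of that distinction (a sketch; exact error messages vary by Elixir version). Mixing types raises at runtime rather than silently coercing:

```elixir
2 = 1 + 1          # arithmetic on numbers works
"41" = "4" <> "1"  # string concatenation is explicit, via <>

# "4" - 1 raises ArithmeticError at runtime instead of coercing "4"
# to 4 -- that is what "strongly typed" means here. (Evaluated via
# Code.eval_string so the compiler doesn't flag the literal.)
result =
  try do
    Code.eval_string(~s("4" - 1))
    :no_error
  rescue
    ArithmeticError -> :badarith
  end
```

Contrast with JavaScript, where `"4" - 1` quietly evaluates to `3`.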
There are some very sharp Computer Scientists who believe static typing is unnecessary. Joe Armstrong (co-designer of Erlang) once said: "a type system wouldn't save your system if it were to get hit by lightning, but fault tolerance would"
I've never had a system crash from a lightning strike, fault-tolerant or otherwise. I have had systems crash from null pointer errors though, and fault-tolerance did nothing to fix that except turn a crash into a crashloop.
I have the same attitude toward overly permissive type systems that I do toward the lack of memory safety in C: People sometimes say, "if you do it right then it isn't a problem," but since it objectively IS a problem in practice, I would rather use the tool that eliminates that problem entirely than deal with the clean-up when someone inevitably "does it wrong."
We had a cluster of servers, dynamically scaling up and down in response to load, and one day started seeing errors where an enum string field had an impossible value. Imagine the field is supposed to be "FOO" or "BAR" but one day in the logs you start seeing "FOO", "BAR", "GOO", and "CAR". Impossible. "GOO" and "CAR" did not exist in the code, nothing manipulated these strings, yet there they were.
Long story short, a particular machine that joined the cluster that morning had some kind of CPU or memory flaw that flipped a bit sometimes. Our Elixir server was fine because we were matching on valid values. Imagine a typed language compiler that makes assumptions about things that "can't" happen because the code says it can't... yet it does.
I am not sure I want the system to continue operating in that scenario. You have corrupted hardware that could be trashing other records. What if it is flipping bits on financial transactions?
It's likely Armstrong conflated the two there because a significant part of the fault tolerance of Erlang comes from the loose coupling via message passing between components, which in no small part is tied to the dynamic typing. It's a Postel's law thing, being generous in what you accept and strict about you send.
Elixir's seamless pattern matching paradigm, IMO, largely negates the need for strict typing. If you write your function signatures to only accept data of the type / shape you need (which you are incentivized to do because it lets you unpack your data for easy processing), then you can write code just for the pretty path, where things are as expected, and do some generic coverage of the invalid state, where things aren't, rather than the norm in software development of "I covered all the individual failure states I could think of". This generic failure mode handling, too, greatly benefits from dynamic typing, since in my failure state, I by definition don't know exactly what the structure of my inputs are.
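A sketch of that style (the `Orders` module and its field names are made up): match only the shape you need in the happy-path clause, and collapse every invalid input into one generic failure clause.

```elixir
defmodule Orders do
  # Happy path: only accept a map with the shape we need,
  # unpacking it right in the function head.
  def total(%{items: items}) when is_list(items) do
    Enum.reduce(items, 0, fn %{price: p, qty: q}, acc -> acc + p * q end)
  end

  # Generic failure mode: we don't know (or care) what shape the
  # bad input has; we just report it as-is.
  def total(other), do: {:error, {:invalid_order, other}}
end
```

Usage: `Orders.total(%{items: [%{price: 5, qty: 2}]})` returns `10`, while any other shape, string, number, or malformed map, falls through to the error clause without per-case handling.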
Yep, I have written and maintained several large-scale applications in Elixir and love the computation model.
However, Elixir desperately needs proper types. IMHO the need for types is by no means negated by pattern matching, though I can see why you would say so.
> If you write your function signatures...
The point of types is worrying less when refactoring.
If you work at a place where you can define the architecture for the entire lifecycle of the application without ever needing to refactor, then sign me up! I want to work there.
Well, the people behind Elixir seem to have accepted that typing is necessary.
I see this story over and over: some hacker makes a language, and they hate types because they want to express themselves. The language gets traction. Now enterprise applications and applications with several devs use it, and typing becomes essential, so types get gradually added.
sure, part of the decision is that you can't really soundly type (in a developer-sanity-preserving way) across a cluster that runs multiple versions of the software (which is a real use case that the authors absolutely needed to support as part of the core business). but they let dialyzer into the core, so it's not like the team was actively hostile to typing.
Okay? But "the features you need would have taken effort to implement, so we didn't implement them" is not a very good sales pitch when you're trying to convince me to use your tool to do my job.
Any tool that justifies the choice not to implement an important and highly-requested feature with "effort is a finite resource" alone is not viable for serious development work. It's not prioritizing, it's a refusal to prioritize.
Seems a bit strange to posit fault tolerance as an alternative to a type system. Personally, I view type systems as a bonus to DX more than something strictly designed to prevent errors
It's in line with the erlang philosophy of letting things crash trivially and restart instantly. Due to the universal immutability, starting a new process from a given state in Erlang/Elixir is nearly instantaneous and has extremely little overhead as they are not OS threads, they are BEAM VM threads.
LOL. This only makes sense if you can know all the ways your code will fail... which you cannot.
Erlang/Elixir's approach is to simply say, "It's gonna fail no matter how many precautions we take, so let's optimize for recovery."
Turns out, this works fantastically in both cellphone signaling, which is where OTP originated, as well as with webserving, where it is perfectly suited.
whether that matters or not depends on whether the logic error occurs because of a rare combination of events or as a result of a certain state, and whether that state remains after recovery. if there is, for example, a logic error that causes the app to crash after say 10 minutes of runtime, or at a certain message size, then a recovery will reset the runtime and it will work again. it will of course invariably fail again after another 10 minutes or when the same message is resent, because it is a logic error, and logic dictates that the error won't go away no matter how often you restart, but it will work in the meantime.
in other words, any error that doesn't occur right at start can be recovered from at least for all those operations that do not depend on that error being fixed.
This sounds like a wild and very contrived argument.
Both because memory leaks are normal in typed languages - and usually do not matter in most serious applications - and because this class of errors is usually not what types catch.
Types have value when you 1) refactor and 2) have multiple people working on a code base.
The error you see when you don't have types is something like a BadArityError.
It doesn't prevent the error, but it also won't take down your server when malicious users (or just lots of normal users) start to bang on the input with the problem and your non-BEAM VM pool starts to run out of available preloaded stacks... You get a new Erlang process in well under a millisecond on modern hardware.
It WILL log the error, with a stacktrace, so you have that going for you
Note that even with typing, you cannot avoid all runtime errors
Also note that this tech was first battle-tested on cellphone networks, which are stellar on the reliability front
Nope, getting a TypeError on "1"+2 is not being "punished". That's way better than getting malformed data that you weren't expecting which keeps the system "working" until an error happens somewhere else, later, where it's more confusing.
OpenTelemetry.js did this. Being written in typescript they wrongly believed they didn’t need to check the data type for user-supplied numbers. Yeah so my production data started having dropouts because somewhere we incremented by “2” instead of 2, and then OTLP choked on 100% of the data from that process because one stat out of thousands was not to its liking and had a three thousand digit ‘number’ in it.
I never did track down the last spot where we screwed that up. This was a system we shifted from statsd, so the offending callers were either working by accident or only killing some data points for one stat and nobody noticed.
So then OpenTelemetry.js had to start sanitizing its inputs and not assuming the compiler should catch it. I still think it odd that something called “.js” was actually “.ts” under the hood.
how so? most dynamically typed languages are also strongly typed. that includes python, ruby, common lisp, smalltalk... if it was the worst option, this would not be the case.
sure, but when the majority of popular languages use that paradigm then it surely can't be the worst option either. what is it that makes it a bad combination?
At this point I'm reaching into the low percentages. I think it's pretty clear that Strongly + Statically typed languages are massively over-represented on the list.
Both Strongly and Weakly Dynamically typed languages are similarly represented.
Note: I'm open to editing the comment to move languages from and to various categories. I haven't used many of them, so correct me if I'm wrong.
smalltalk and common lisp don't qualify as popular languages
not anymore. they were very popular in the industry at one point. very few other languages have as many independent implementations as smalltalk and lisp, a testament to their widespread use in the past.
that doesn't make sense. typescript is javascript with types; it can't be both strong and weak at the same time.
but i believe we have a different definition of weak. just because javascript, php and perl have some implicit type conversions doesn't make them weakly typed. i believe it takes more than that. (in particular that you can't tell the type by looking at a value, and that you can use a value as one or another type without any conversion at all)
C is weakly typed, it was always a major criticism, C++ too i think (less sure about that).
once you correct for that you will notice that all languages in the strong and static category are less than 30 years old and many are much younger. (java being the oldest. but there are older ones like pike which is also strong and static and goes back to 1991)
the strong and dynamic category goes back at least a decade more. (if you include smalltalk and lisp)
what this does show is that static typing has experienced a renaissance in the last two decades, and also the worst is really using any form of weak typing.
i still don't get what makes strong and dynamic a bad combination, other than it's bad because it is not static.
I mean, if you define weak like that, then yeah. But also, what languages are you left with? Assembly? Then again, it does fit with your definition of popular being popular 45 years ago. Tbh, I'm not sure if smalltalk was ever even popular. It was certainly influential, but popular?
I know that there are discussion about what strong vs weak even means, but I think most people would place the weak distinction way above yours on a possible weak-strong spectrum.
C can certainly be argued to be weak. My understanding is that it's mostly due to pointers (and void* especially). C++ is much better in this regard. I mostly just did not want to add a Weak + Static category just for one language.
Well, now that you've defined Strong to also include all of the languages I consider Weak, then yeah, no issues at all.
definitions are hard. but i am mostly concerned to make sure we are talking about the same thing, regardless of what that thing is called. i do accept and agree that javascript, php and perl may be considered weaker than python or ruby. but to which degree may be a matter of taste. to me they are still strongly typed for the most part. (and let's not get into the definition of static, that's another can of worms)
but the interesting question is really the one posed at the beginning. what makes strong but dynamic a bad combination?
i think we agree that weak is bad. implicit type conversions like 1+"2" range from the annoying to problematic and dangerous. if we eliminate weak that only leaves strong and static vs strong and dynamic.
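for illustration, a minimal sketch (in python, a strongly but dynamically typed language) of that implicit conversion being rejected at runtime instead of silently applied:

```python
# python is dynamically typed but strongly typed: mixing an int and
# a str in `+` raises a TypeError instead of implicitly converting.
try:
    1 + "2"
except TypeError as e:
    print("rejected:", type(e).__name__)  # rejected: TypeError

# an explicit conversion is required instead:
print(1 + int("2"))  # 3
```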
i agree that strong and static is better in most cases, and type declarations help. pike, the language i use the most myself (somewhere between python and go), has powerful types that are a joy to work with. for comparison, in typescript types come across as more annoying, in part because they are somewhat optional, and because they get lost at runtime, so they don't help as much as in truly typed languages. but strong and dynamic has shown itself to be a solid combination, especially with python and ruby. so i don't feel the combination is as bad as you seem to suggest.
> smalltalk and common lisp don't qualify as popular languages.
Sure, Smalltalk isn't, but Lisp is a different story. In this context I assume we all mean to say "Lisp" and not "Common Lisp" specifically.
Lisp (as the entire family of PLs) is quite massively popular.
Standard rankings have major blind spots when measuring Lisp. For example, Emacs Lisp is everywhere: there's a ton of Elisp on GitHub alone, and remember, it's not a "general-purpose" language; its sole function is configuring a text editor, yet there's a mind-boggling amount of it out there. AutoLISP is heavily used in CAD/engineering but rarely discussed online. Many Lisp codebases are proprietary/internal. Also, dialect fragmentation artificially deflates the numbers: many rankings count the dialects as separate languages and measure them separately.
If you count all Lisp dialects together and include: Emacs Lisp codebases, AutoLISP scripts in engineering, Research/academic usage, Embedded Lisps in applications, Clojure on JVM and other platforms - babashka scripts, Flutter apps, Clojurescript web apps, etc;
...Lisp would likely rank much higher than typical surveys suggest - possibly in the top ten by actual lines of code in active use.
Yes you are. First of all there isn't such a thing as "strict typing", types are either static/dynamic and/or strong/weak. I suppose you meant Elixir has no static types. It is however a strongly typed language.
And as usually happens, static typing enthusiasts often miss several key insights when confronting dynamically typed languages like Clojure or Elixir (which was inspired by ideas implemented in Clojure).
It's not simply black and white, just like everything else in the natural world.
You have to address:
- Runtime flexibility vs. compile-time safety trade-offs. Like most things, each has its price; nothing is free.
- Different error handling philosophies. Sometimes, designing systems that gracefully handle and recover from runtime failures makes far more resilient software.
- Expressiveness benefits. Dynamic typing often enables more concise, polymorphic code.
- Testing culture differences. Dynamic languages often foster stronger testing practices as comprehensive test suites often provide confidence comparable to and even exceeding static type checking.
- Metaprogramming power. Macros and runtime introspection enable powerful abstractions that can be difficult in statically typed languages.
- Gradual typing possibilities. There are things you can do in Clojure spec that are far more difficult to achieve even in systems like Liquid Haskell or other advanced static type systems.
The bottom line: There are only two absolutely guaranteed ways to build bug-free, resilient, maintainable software. Two. And they are not static vs. dynamic typing. Two ways. Thing is - we humans have yet to discover either of those two.
You act like OP has never experienced dynamic type programming.
They clearly said they "can't go back to" it, meaning they've experienced both, are aware of the trade-offs, and have decided they prefer static types.
> Gradual typing possibilities. There are things you can do in Clojure spec that are far more difficult to achieve even in systems like Liquid Haskell or other advanced static type systems.
That's great for clojure and python and PHP, but we're not talking about them.
You act as if I said anything about anyone's experience. "they prefer static types" can mean a whole lot of things - there's type inference, soundness, turing-completeness, type classes, GADTs, higher-kinded types, dependent types, gradual typing, structural vs nominal typing, variance annotations, type-level programming, refinement types, linear types, effect systems, row polymorphism, and countless other dimensions along which type systems vary in their expressiveness, guarantees, and ergonomics.
Dynamic typing also varies - there's type introspection, runtime type modification aka monkey patching, different type checking strategies - duck typing & protocol checking, lazy & eager, contracts, guards and pattern matching; object models for single & multiple dispatch, method resolution order, delegation & inheritance, mixins, traits, inheritance chains, metaprogramming: reflection, code generation, proxies, metacircular evaluation, homoiconicity; there are memory and performance strategies: JIT, inline caching, hidden classes/maps; there are error handling ways, interoperability - FFI type marshaling, type hinting, etc. etc.
Like I said already - things aren't that simple, there isn't "yes" or "no" answer to this. "Preferring" only static typing or choosing solely dynamic typing is like insisting on using only a hammer or only a screwdriver to build a house. Different tasks call for different tools, and skilled developers know when to reach for each one. Static typing gives you the safety net and blueprints for large-scale construction, while dynamic typing offers the flexibility to quickly prototype and adapt on the fly. The best builders keep both in their toolbox and choose based on what they're building, not ideology.
In that sense, the OP is wrong - you can't judge pretty much any programming language solely based on one specific aspect of that PL, one has to try the "holistic" experience and decide if that PL is good for them, for their team and for the project(s) they're building.
Runtime flexibility is not restricted to dynamically typed languages, it just happens to be less available in some of the popular statically typed languages.
Error handling, expressiveness, testing culture, meta-programming and gradual typing have nothing to do with static vs dynamic typing.
The main "advantage" of dynamically typed languages is that you can start writing code now and not think about it too much. Then you discover all the problems at runtime... forever.
Statically typed languages force you to think a lot more about what you are doing in advance, which can help you avoid some structural issues. Then when you do refactor, the compiler helps you find all the places where you need to change things.
Dynamically typed languages force you to write tests that are not required in statically typed languages. That might prompt you to write other tests too, but it also increases the chance you just give up when you start refactoring.
Finally, after some time has passed and a few updates have been applied to the language and libraries, you may not have a working project anymore. With statically typed languages you can usually find and fix all the compile errors and have a fully working project again. With dynamically typed languages, you will never know until you exercise every line of code, which will usually happen at runtime and on the client's computer.
i see things like that written almost on the daily but i've never seen it come to pass.
this is anecdotal but i've worked professionally in ruby and clojure and it was a pretty good experience. java/kotlin/scala made me wish i could go back to dynamic land...
these days i'm in a rust shop but i keep clojure for all my scripts/small programs needs, one day i intend to go back to some kind of lisp full time though.
Sure, you're right on most of this, but allow me a slight pushback here. I'm sorry, I'm inclined to use Clojure/Lisp in my examples, but only because of their recency in my toolbelt; I could probably come up with similar Elixir examples, but I lack intimate familiarity with it.
- Dynamic languages can harbor bugs that only surface in production, sometimes in rarely-executed code paths, yes. However, some dynamically typed languages offer various tools to mitigate that. For example, take Clojurescript, a dynamically but strongly typed language, and compare it with Typescript. The type safety of compiled Typescript completely evaporates at runtime: type annotations are gone, leaving you open to potential type mismatches at API boundaries, and there's no protection against other JS code that doesn't respect your types. In comparison, Clojurescript retains its strong typing guarantees at runtime.
This is why many TS projects end up adding runtime validation libraries (like Zod or io-ts) to get back some of that runtime safety - essentially manually adding what CLJS provides more naturally.
If you add Malli or Spec to that, you can express constraints that make Typescript's type system look primitive. Even simple things like "the end-date must be after the start-date" would make you write boilerplate in TS; in CLJS it's a simple two-liner.
- Static type systems absolutely shine for refactoring assistance, that's true. However, structural editing in Lisp is a powerful refactoring tool that offers different advantages than static typing. I'm sorry once again for moving the goalposts; I just can't speak specifically for Elixir on this point. Structural editing guarantees syntactic correctness, gives you semantics-preserving transformations, and allows fearless large-scale restructuring. You can even easily write refactoring functions that manipulate your codebase programmatically.
- Yes, static typing does encourage (or require) more deliberate API design and data modeling early on, which can prevent architectural mistakes. On the other hand many dynamically typed systems allow you to prototype and build much more rapidly.
- Long-term maintenance: sure, I'll give a point to statically typed systems here, but honestly, some dynamically typed languages are really, really good in that respect. Not every dynamic language is doomed to the "write once, debug forever" characterization. Emacs is a great example: some code in it is from the 1980s and still runs perfectly today; its backward compatibility is almost legendary.
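To make the "end-date must be after start-date" constraint above concrete outside CLJS: in any dynamic language it can be an ordinary runtime predicate over plain data. A minimal Python sketch (the valid_booking name and map shape are hypothetical stand-ins for a Spec/Malli-style predicate):

```python
from datetime import date

# Hypothetical stand-in for a Spec/Malli-style predicate: the
# semantic rule is an ordinary function over plain data, checked
# at runtime, rather than something encoded in a type hierarchy.
def valid_booking(booking):
    return booking["end"] > booking["start"]

ok = {"start": date(2024, 1, 1), "end": date(2024, 1, 5)}
bad = {"start": date(2024, 1, 5), "end": date(2024, 1, 1)}

print(valid_booking(ok))   # True
print(valid_booking(bad))  # False
```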
Pragmatically speaking, from my long-term experience of writing code in various programming languages, the outcome often depends not on technical things but cultural factors. A team working with an incredibly flexible and sophisticated static type system can sometimes create horrifically complex, unmaintainable codebases and the opposite is equally true. There's just not enough irrefutable proof either way for granting any tactical or strategic advantage in a general sense. And I'm afraid there will never be any and we'll all be doomed to succumb to endless debates on this topic.
> The bottom line: There are only two absolutely guaranteed ways to build bug-free, resilient, maintainable software. Two. And they are not static vs. dynamic typing. Two ways. Thing is - we humans have yet to discover either of those two.
That's true but some languages don't let you ship code to prod that multiplies files by 9, or that subtracts squids from apricots
> that multiplies files by 9, or that subtracts squids from apricots
I don't understand why when someone mentions the word "dynamic", programmers automatically think javascript, php, bash or awk. Some dynamically typed PLs have advanced type systems. Please stop fetishizing over one-time 'uncaught NPE in production' PTSD and acting as if refusing to use a statically typed PL means we're all gonna die.
A funny thing is I once had a type bug while coding elixir, that bash or perl would've prevented, but rust or haskell wouldn't have caught. I forgot to convert some strings to numbers and sorted them, so they were wrongly sorted by string order rather than numerical order.
In haskell (typeclasses), rust (traits), and elixir comparison is polymorphic so code you write intending to work on numbers will run but give a wrong output when passed strings. In perl and bash < is just numeric comparison, you need to use a different operator to compare strings.
In the case of comparison elixir is more polymorphic than even python and ruby, as at least in those languages if you do 3 < "a" you get a runtime error, but in general elixir is less polymorphic, ie + just works on numbers, not also on strings and lists and Dates and other objects like python or js.
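The same wrong-sort is easy to reproduce in python, which also sorts digit-strings lexicographically without complaint; a minimal sketch of the bug and the fix:

```python
# the bug: forgetting to convert strings to numbers before sorting.
# a homogeneous list of digit-strings sorts lexicographically with
# no error, just like the elixir case above.
raw = ["10", "2", "1"]
print(sorted(raw))           # ['1', '10', '2']  -- wrong order

# the fix: sort with a numeric key (or convert up front).
print(sorted(raw, key=int))  # ['1', '2', '10']

# mixing types, on the other hand, fails fast in python 3:
try:
    3 < "a"
except TypeError:
    print("TypeError")
```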
I also experienced more type errors in clojure compared to common lisp, as clojure code is much more generic by default. Of course no one would want to code in rust without traits; obviously there are tradeoffs here, as you're one of the few in this thread to recognize. Along one axis, the more bugs a type system can catch, the less expressive and generic your code can be. Along another, advanced type systems with features like GADTs can type-check some of that expressive code, but at the cost of increased complexity. You can spend a lot more time trying to understand a codebase doing advanced type-system stuff than it would take to just fix the occasional runtime error without it.
A lot of people in this thread are promoting gleam as if it's strictly better than elixir because it's statically typed, when that just means they chose a different set of tradeoffs. Gleam can never have a web framework like Phoenix or Ash in elixir, as they've rejected metaprogramming and even traits/typeclasses.
>>> open('foo') * 3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for *: '_io.TextIOWrapper' and 'int'
You have to go to some length to get Python to mix types so badly.
Well, yes, Python can sure feel pretty fucking awkward from both perspectives. It started as fully dynamic, then added type hints, but that's not the main problem with it, in my opinion; the problem is that you're still passing around opaque objects, not transparent data.
Compare it with Clojure (and to certain extent Elixir as well). From their view, static typing often feels like wearing a suit of armor to do yoga - it protects against the wrong things while making the important movements harder.
- Most bugs aren't type errors - they're logic errors
- Data is just data - maps, vectors, sets - not opaque objects
- You can literally see your data structure: {:name "Alice" :age 30}
- The interesting constraints (like "end-date > start-date") are semantic, not structural. Most static type systems excel at structural checking but struggle with semantic rules; e.g., "Is this user authorized to perform this action?" is nearly impossible to verify with a static type system.
What static types typically struggle with:
- Business rules and invariants
- Relationships between values
- Runtime-dependent constraints
- The actual "correctness" that matters
Static type systems end up creating complex type hierarchies to model what Clojure does with simple predicates. You need dependent types or refinement types to express what clojure.spec handles naturally.
Elixir also uses transparent data structures: maps, tuples, lists, and structs are just maps. It favors powerful pattern-matching machinery over type hierarchies; you match on shape, not class. Elixir thinks in terms of messages between processes, and type safety matters less when processes are isolated.
Python's type hints do help with IDE support and catching basic errors, yet, they don't help with the semantic constraints that matter. They add ceremony without data transparency. You still need runtime validation for real invariants.
So Python gets some static analysis benefits but never gains a truly powerful type system like Haskell's, while also never getting "it's just data" simplicity. So yes, in that sense, Python kinda gets "meh" from both camps.
I'm not sure what you're saying, but just in case you're implying that choosing to use something like Clojure or Elixir adds an "unnecessary layer of problems" just because they are not statically typed, let me remind you that "simplicity" is basically Clojure's middle name. Rich Hickey gave a seminal talk on simplicity, https://www.youtube.com/watch?v=SxdOUGdseq4 - it's quite eye-opening and isn't really about Clojure. Every programmer should watch it, perhaps even multiple times throughout different stages of their career.
I don’t imply anything. I state (pun very much intended).
I cannot take seriously anybody who thinks that programmer convenience and maintainability are different things on any level, or that focusing on one doesn't serve the other.
Also, this speech is academic blabla. I understand it, but there is a reason why he had to build a completely new vocabulary: because it's not simple and easy at all. And there is a reason why there are exactly zero examples in almost the whole speech.
Because I can give you a very, very good example to contradict the whole speech: NAND gate.
Also, you can code in Haskell with basically the same logic as in imperative languages (I used Clojure rarely, so I cannot say the same about it). So the "simplicity" is there only if you want it. But that's also true the other way around: you can have the same "simplicity" in languages where mutability is the default. I agree that you should do immutability by default, but it's stupid to enforce it. And he said the same thing, just in a half sentence, because it contradicts everything he preached: there are cases when his simplicity toolkit is the worse option.
Ruthless immutability causes less readable and maintainable code in many cases. And I state it, and I can give you an example right away (not like Hickey did): constructing complex data.
Also, every time somebody brings up ORMs and how bad they are, I just realize that they are bad coders. Yes, a lot of people don't know how to use them. But just because a tool allows you to do bad things doesn't mean that it's bad; you can say the same thing about everything. Every high-level language is slower than code in Assembly. Does that mean every high-level language is terrible and we should avoid them? Obviously not. You need to know when to touch lower layers directly. This is especially funny because there is a thread about that part on Reddit, because he used some vocabulary which is basically nonexistent, and the thread is a clear example of how people don't even know what the problem with it is. It's "common knowledge" that it's bad, and they don't even know why. For example, whoever fucks up a query through an ORM would fuck it up the same way with a loop, just with different syntax, because they clearly don't know databases, and they definitely have never heard of normal forms.
And yes I state it again, using flexible type systems add unnecessary layer of problems. I also like that he mentioned Haskell, because it makes it clear, that his speech is completely orthogonal to the discussion here.
You seem to be making several interconnected points, but your writing style is making it somewhat difficult to follow.
It looks like you're mischaracterizing Hickey's talk. The distinction between "simple" (not intertwined) and "easy" (familiar) isn't "academic blabla" - it's fundamental to building maintainable systems.
Your NAND gate example actually supports his point: we build higher-level abstractions precisely because working at the lowest level isn't always optimal. That's what Clojure's immutable data structures do - they provide efficient abstractions over mutation.
As for "constructing complex data" being harder with immutability - have you ever heard of Clojure's transients, update-in, lenses, or Specter? They make complex data manipulation both readable and efficient.
The ORM critique isn't about "bad coders" - it's about impedance mismatch between relational and object models. Even expert users hit fundamental limitations that require workarounds.
Calling dynamic typing an "unnecessary layer of problems" at this point is pretty much just your opinion, and it is clear that you have one-sided experience that contradicts everything I said. The choice between static and dynamic typing involves real tradeoffs; there isn't a clear winner for every situation, domain, team, and project. You can "state" whatever you want, but that's just the fundamental truth.
It’s academic blabla, because as I stated it created a new vocabulary for no good reason. You can say the same thing without introducing it. You can say those things even better with examples.
I told you that I don't know Clojure; I know Haskell, which was also a topic of Hickey's presentation. I also know the builder pattern… But looking into it:
> Clojure data structures use mutation every time you call, e.g. assoc, creating one or more arrays and mutating them, before returning them for immutable use thereafter.
So the solution is mutability… when Hickey preached immutability. And it's basically the builder pattern, a pattern which exists in almost every programming language, most of the time as a single line. So… immutability is not the best option every time, just as I stated, and it causes worse code. We agree.
You hit limitations with everything. ORMs are no different in that regard. When I wrote s.isEmpty() in a condition in Java, I had to use a workaround… very painful. And yes, I state that if you think an entity is a simple class, then you are a bad coder, and if you cannot get over this "complexity", you will never stop being a bad coder. Same with sockets, ports, and pins.
Your last paragraph is totally worthless. You can say the same thing about everything.
I simply don’t preach. I also wouldn’t say such things without being considered an expert in ORMs, for example, and I wouldn’t say anything about which I don’t have first-hand experience.
And as I stated, no matter what your tooling is, bad coders will make bad code. On the other hand, you don’t need restrictive tooling to make good code. So my problem is that you don’t need Clojure or Haskell. It’s good to see something like that in your life, the reasoning behind it, etc., but it’s pointless to say that you must be “simple” when we can see that that is simply not true. Not even in Clojure, according to Clojure.
You seem frustrated with prescriptive programming advice and language evangelism, which is understandable. Yet it looks like you're conflating Hickey's design philosophy with language zealotry. The talk's core message about reducing accidental complexity remains valuable, even if the presentation style and some specifics can, sure, be characterized as debatable.
I can buy your vocabulary criticism - fair point, okay. Although vocabulary often can help crystallize concepts. Clojure's internal mutation - alright, accurate observation. Clojure uses mutation internally for performance while presenting an immutable interface. This does show pragmatism over purity. "Tooling doesn't fix bad coders" - true. No language or paradigm prevents bad code if developers don't understand the principles. Immutability isn't always best - correct. Even functional languages acknowledge this (IO monads, ST monad, etc.), but Clojure doesn't force immutability - it provides default immutability with escape hatches.
Now things that I can't agree with:
"Academic blabla" is outright dismissive. The talk addresses real architectural problems many developers face. The criticism of ORMs isn't just about "complexity" - it's about impedance mismatch, hidden performance costs, and leaky abstractions. These are well-documented issues. "Must be simple" is a misrepresentation of Hickey's point. He advocates for simplicity as a goal, not an absolute rule. Builder pattern equivalence is oversimplification. Persistent data structures offer different guarantees than builders (structural sharing, thread safety, etc.).
I think a lot of people feel repulsed by dynamic typing for the same reason I used to: they wrote a lot of JS, and now they write a lot of TS. The experience of working in a JS codebase vs working in a (well-typed and maintained) TS codebase is a wide, wide gulf. I love working in TS, and I absolutely despise working in JS.
For a while I extrapolated my experience to mean “static typing is awesome and dynamic typing is horrible”, but then I started learning clojure, and my opinions have changed a lot.
There are a ton of things that make a codebase nice to work with. I think static typing raises the quality floor significantly, but it isn’t a requirement. Some other things that contribute to a good experience are
- good tests, obviously. Especially property based tests using stuff like test.check
- lack of bad tests. At work we have a very well-typed codebase, but the tests are horrible. 100 lines of mocks per 20 lines of logic. They’re brittle and don’t catch any real bugs.
- a repl!!
- other developers and managers who actually care about quality
All four of these examples seem pretty easy to find in the clojure world. Most people don’t learn clojure just to get a job, which is maybe a hidden feature of building your company with a niche language.
At the same time, I recognize that most of those examples are “skill issues”. Static typing does a good job of erasing certain skill issues. Which is great, because none of us are perfect all the time!
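As a sketch of what that property-based style buys you, hand-rolled in Python with just the stdlib (clamp is a hypothetical function under test; test.check and friends generate inputs in a similar spirit, plus shrinking):

```python
import random

# hypothetical function under test
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

# property-based check in the spirit of test.check: generate random
# inputs and assert invariants that must hold for every one of them.
random.seed(0)
for _ in range(1000):
    lo, hi = sorted(random.uniform(-1e6, 1e6) for _ in range(2))
    x = random.uniform(-2e6, 2e6)
    y = clamp(x, lo, hi)
    assert lo <= y <= hi               # result always lands in range
    assert y == x or x < lo or x > hi  # untouched unless out of range
print("1000 random cases passed")
```

Real property-testing libraries add input shrinking and smarter generators, but the core idea is just this loop.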
Exactly this. When speaking about static or dynamic type systems, you can't discuss them in isolation, you need to specify because the holistic experience of types in JS, Python, Clojure, Elixir, Haskell, Rust, etc. will differ, sometimes drastically. Leaning too much to any side is perilous; it may create a false narrative in your head about what types fundamentally mean and how they should be used.
Camping too long on either side may create wrong assumptions like that all/most problems are type problems; make you conflate implementation with concepts and make you miss the real tradeoffs; you'd start ignoring context and scale.
Static types aren't always about "safety vs. flexibility" - sometimes they're about tooling, refactoring confidence, or documentation. Dynamic types aren't always about "rapid prototyping" - sometimes they enable architectural patterns that are genuinely difficult to express statically.
One really needs to see how Rust's ownership system or Haskell's type inference offers completely different experiences, or how Clojure's emphasis on immutability changes the dynamic typing game entirely.
> Over time you will probably feel drawn to both, for different reasons
I agree 100%. At first I liked C# and Java types, but then I moved to Python and I was happy. Learning some Typescript pulled me back into the static typing camp, yet then I discovered Clojure, which revealed to me how needlessly cumbersome and almost impractical the TS type system felt in comparison. Experimenting with Haskell and looking into Rust gave me a different perspective once again. If there's a lesson I've learned, it's that my preferences at any point in life are just that - preferences that seldom represent universal truths, particularly when no definitive, unambiguous answer even exists.
That's quite the list of languages. If you're interested in types I would suggest also looking into F#. Shouldn't be tricky for someone who has already experienced C# and is open to different approaches to programming. For me F# was a revelation into what programming language typing can do for the developer.
I do know and like F#, and just like the other commenter I think I personally prefer it over some of Haskell's ways. F# was my first FP language, and I was so excited about it, but it was a hard sell back then to my team of C# developers, so I sneaked it in by writing tests in it for an otherwise altogether C# codebase.
It's sad that the dotnet world still largely operates in csharpland and F# unfairly gets ignored even within the dotnet community. I've never found a team where F# is preferred over other options, and I'd absolutely love to see what that's like. Unfortunately, F#'s hiring story is far worse than that of other less popular languages, and even if there is an opening, you most likely end up supporting other projects in C#. Honestly, even though C# is a fine language by many measures, I see it as "past experience" that I personally am not too eager to make my daily job again.
I really like the way F# (and presumably ocaml) do typing, it truly feels like the best of both worlds. Rarely do you need to actually write a type anywhere, and when you do it's because otherwise there might have been a runtime error.
I wouldn’t say that static types remove the need for unit tests… but they do significantly reduce that need.
Static types and unit tests are not equivalent either. A static type check is a proof that the code is constructed in a valid way. Unit tests only verify certain input-output combinations.
If you're trying to hot swap a function for a version with a different type signature, how was the new code going to run successfully anyway? Unless the type signature changes aren't reflecting the runtime behaviour changes?
I mean, lots of extremely talented and successful engineers, e.g., DHH, think strict typing is actually a negative. I think if you think strict typing is an absolute disqualifier, you should steelman the opposing side.
His "argument" on the Lex Fridman podcast was also that strict typing mostly pays off when you have "hundreds or (of?) thousands of engineers collaborating", which "strictly" puts his opinion in the bin in my case. I have absolutely, positively become more productive as a single developer thanks to strict typing. Parsing various kinds of binary data was enough to convince me. A.k.a. to each their own. On the whole, Elixir and BEAM seem really cool though, and there's work being done on typing, as well as the new Gleam language.
Note: I've not yet done any serious web-development, mostly command line tools, which I realise is not DHH's main focus.
I tried both and I respect different opinions about this. I feel like TS' type system makes me crazy because it's going too far. But no typing makes me anxious. I guess there is a sweet spot to find for me personally.
Anxious about what? What's your issue with dynamic typing?
Thorough tests of the behavior of your system (which should be done whether the language is dynamic or not) catch the vast, vast majority of type errors. "More runtime errors" in a well designed codebase don't mean errors for the user - it means tests catch them
Seriously.. the secret to writing great dynamic code is getting very good at testing
I come from a PHP and Python background. I can see how the progressive addition of types has changed the way we deal with code. Working on a big old project without types is not even possible for me anymore. Refactoring things with types is already a tedious task; without them it's barely possible.
The thing is most useful for me about types is it serves as forced documentation. I look at function signature with types and know what kinds of values it accepts and what it returns.
For sure. The biggest advantages of static types I see are the better tooling + faster feedback on type errors. They're advantages, but not that big of a deal, and they're balanced by the tradeoffs you're making, to the point where it doesn't matter much.
I see dynamic codebases being written differently though. We all know those tradeoffs - so conventions pop up to deal with them. A method that ends in a ? in ruby - like user.admin? - always returns a boolean. You get better at naming to make types clearer
I definitely miss the perfect automated refactorings though
I knew TS's type system went too far when I saw the 8 queens problem solved in just the type system, not the actual language. Me, I just want to get stuff done, not fight my type system.
And that was even before someone wrote DOOM in TS's type system.