> > If I were to make my own programming language, it would look an awful lot like Python.
> I agree, Python allows anyone to write bad code, but makes up for it by running the code slow enough that it can't do real damage.
In the same sentence you agree with the implied beauty of Python's syntax and then go on sarcastically about the performance of CPython. Presumably you deliberately conflated language and implementation because you needed a soapbox, so hey, here's my comment to which you can reply and continue your rhetoric.
It is absolutely not the case that all problems worth solving are solved already. Programming language development isn't necessarily about being a genius but rather a willingness to put in a monumental amount of work. Writing a language that compiles is easy enough. Getting a language off the ground to an actually useful place is tedious, simply in terms of the sheer amount of work to be done. Specification, implementation, documentation, diagnostics, optimization, configuration, tooling support, and creating a standard library (especially a cross-platform one) are things that will mire you in many hundreds of hours of work.
Making your own language is easy. Creating the libraries that will actually solve problems without forcing developers to reinvent the wheel is the crux. There is a reason why C++ / Java / JavaScript etc. are established: it's the already proven libraries around those languages that allow them to be so successful.
I have only read the first part of the article, but I can't help thinking that a project like libriscv [0] would've/could've worked for their game project too. Fun fact: the creator of libriscv, the legendary fwsgonzo, is also making a game. I highly recommend checking out their Discord server.
But my main point is that libriscv is one of the fastest RISC-V emulators, so something like C/C++/Lua could then have been used, sandboxed, for the purposes of the game.
Am I missing something? Although, making a programming language is one kind of project of its own, and that's really cool as well :-D
I would also love to hear the author's opinion on libriscv, as from my understanding it ticks all the boxes.
57 comments:
Anyone trying to do this... the first thing you do is avoid lex/yacc/bison/antlr. You do not need all this ceremony. A recursive descent parser that uses Pratt parsing will work for a vast majority of cases.
The lexer/parser is never the bottleneck. In fact, you can write those two by hand over a single weekend for a largish language. With LLMs, it takes 15 minutes if you have an unambiguous spec.
The biggest time sink, and the reason you will fail for sure, is the inability to restrict the scope of the project. You start with a limited feature set and produce the entire compiler/vm toolchain. Then you get greedy and fiddle with the type system, adding features that you have never used and probably never will. And now you have to change every single phase from start to end.
I mostly give up at this stage.
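For readers who haven't met it, the Pratt half of "recursive descent plus Pratt parsing" is just a small loop over operator binding powers. A minimal sketch (the token format and names here are illustrative, not from any implementation discussed in the thread):

```python
# Binding powers for a few left-associative binary operators.
BINDING_POWER = {"+": 10, "-": 10, "*": 20, "/": 20}

def parse_expr(tokens, pos=0, min_bp=0):
    """Parse tokens[pos:] into a nested-tuple AST, honoring precedence."""
    lhs = tokens[pos]               # assume a number literal here
    pos += 1
    while pos < len(tokens):
        op = tokens[pos]
        bp = BINDING_POWER.get(op)
        if bp is None or bp < min_bp:
            break
        # recurse with bp + 1 so equal-precedence ops group to the left
        rhs, pos = parse_expr(tokens, pos + 1, bp + 1)
        lhs = (op, lhs, rhs)
    return lhs, pos

ast, _ = parse_expr([1, "+", 2, "*", 3])
# → ("+", 1, ("*", 2, 3)): "*" binds tighter than "+"
```

Unary operators and parentheses slot into the "parse lhs" step; everything else is the same loop.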
Jonathan Blow wrote his own game engine, and for that he wrote his own programming language.
He went with a straightforward recursive descent parser and said the same thing.
I think compiler courses teach from yacc, bison, etc. (that's where this whole thing came from), but in practice people discovered that hand-written recursive descent parsers are all you need.
> I think compiler courses teach from yacc, bison etc that's where this whole thing came from
Very true. I have a shelf full of books on compiler development and optimization. I have read them selectively, a chapter here, a chapter there. But that shelf is useless for a vast majority of people.
You might find it useful if you are developing a production-level compiler/vm (I cannot make this statement with a straight face while Python rules the world). But a simple and sensible architecture that uses recursive-descent parsing takes you a long way.
Most hobbyist compilers (and even some production ones) are written as a heavy front-end compiling down to C or LLVM. Very few people actually write their own backend.
> You might find it useful if you are developing a production-level compiler/vm
Not any of the ones I have worked on, nor the ones I know about: they all use hand-written parsers. In practice, error reporting and recovery tends to be tedious and/or difficult with a generated parser, which is a serious issue for practical tools.
Parsing has turned out to be simpler, in practice, than the computing pioneers expected it to be, because simpler grammars are easier for both machines and humans to reason about. Instead of using sophisticated parser generators, we just design dumb grammars: that works out better all around.
Yeah. I added the caveat because I haven't looked at the source of the major production compilers and didn't want to overreach. The hobbyist ones mostly stick to hand-rolled recursive descent.
Re: bison and yacc. They came from the dragon book, which forever was the way to learn to write languages.
Yep. I started out using ANTLR for one project of mine. I ended up spending loads of time fighting its syntax to do really quite simple things, and it was slow! I probably wasn't holding it right. In the end, I wrote a simple lexer and recursive descent parser (with a small amount of lookahead) in a weekend. The code was easy to read, easy to extend, and fast.
Probably the most fun I’ve had with LLMs has been slowly making a programming language as a side project.
I used to give up somewhere around the type system, too, but this time I’m approaching something vaguely useful. It even has a basic LSP.
It’s been both enjoyable and enlightening, and LLMs turn out to be an excellent pair designer as (in addition to implementation) they’re really good at summarising the impact of various decisions.
> the reason you will fail for sure, is the inability to restrict the scope of the project
This will be the reason, for sure. But then the scope of every project like this tends towards building an OS with it then replacing every piece of software, including all embedded devices :)
> slowly
I cannot do slow. It is either burn the candle at both ends, or do nothing at all.
I am using LLMs this time as well, but I spent close to 400 hours over a period of 6-7 weeks on my project before I put it to the side temporarily (got bored once the thinking part was done). About 300 of those were spent on iterating over the language and VM specs and eliminating all ambiguities and needless features. The remaining 100 were used to produce the code --- the VM, the assembler and the compiler --- and to repeatedly rewrite it to conform to my way of doing things.
LLMs have let me become extremely choosy about which code I am willing to keep.
I've taken the approach of writing and even directly reviewing almost no code for this, otherwise I'd simply not have time for it as another side project. It's also interesting to see how far I can push this "vibe engineering" approach, and although it's not perfect, the answer is much further than I'd have expected going in.
I've managed to get OpenCode set up such that I can have a productive discussion about the design or an issue / change, then leave the LLM iterating for long periods while I do other work. It's instructed to maintain test coverage and treat quality very seriously - as a result there are over 5000 tests (some I suspect are useless...) and it's pretty rare to get a regression.
I'm pretty sure there are plenty of significant bugs and gaps, but also that once found it seems like all of them will be fixed pretty quickly by the LLM.
I just have to avoid looking at the code...
I learned to do this about 2 years ago (pre LLM). I have been developing software for ~30 years, and somehow doing something like this was a major mental obstacle, mostly created by the perception of "the dragon book", as in this topic being full of mystical unobtainable incantations, so I never even dared venture into this space. Silly, I know. However, after diving into this and learning to write a recursive descent parser for a DSL I wanted to write, it felt like I'd acquired a superpower. I totally understand that there are many more layers to all of this, layers that can get very complex, but just learning that first bit...
I wish people would start with Nystrom's https://www.craftinginterpreters.com/ and avoid the dragon etc unless they really, really need it. Almost everything I have learnt about compiler/vm development, I have done so by reading random blogs and articles on various aspects and small tutorials on writing parsers and vms.
Even stuff like Crenshaw's Let's Build a Compiler was more useful to me than all these books that do lexical analysis using regular expressions. I have written lexers and parsers hundreds of times for all kinds of DSLs and config languages and not once have I used regular expressions to scan the text.
Isn't using regex in this space kinda shunned, when you can easily write a grammar and parse things more reliably that way? Surprised to read that any books do that.
Every single book starts with regexes and DFA/NFA for lexical analysis. Too much ceremony for something you can write in 30 minutes and 300 lines.
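For scale, a no-regex lexer in that spirit really is just a cursor plus a few character-class checks. A sketch (the token names are made up for illustration):

```python
def lex(src):
    """Hand-rolled scanner: whitespace skipped, numbers and
    identifiers greedily consumed, everything else a single char."""
    tokens, i = [], 0
    while i < len(src):
        c = src[i]
        if c.isspace():
            i += 1
        elif c.isdigit():
            j = i
            while j < len(src) and src[j].isdigit():
                j += 1
            tokens.append(("NUM", src[i:j]))
            i = j
        elif c.isalpha() or c == "_":
            j = i
            while j < len(src) and (src[j].isalnum() or src[j] == "_"):
                j += 1
            tokens.append(("IDENT", src[i:j]))
            i = j
        else:
            tokens.append(("PUNCT", c))
            i += 1
    return tokens
```

A real lexer adds string literals, comments, and multi-char operators, but each is just another branch in the same loop.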
Many projects wish they had a proper grammar. When a project turns useful and people want to port it, or support it on other platforms, a grammar makes that job much easier.
I am not quite sure what you mean by having a recursive descent parser, because you can write one manually, or you can generate one from a grammar, which would have the additional benefits of having a grammar. I recommend having a grammar.
I like writing parsers, and nowadays just use handwritten recursive descent functions, using a couple of simple utility functions. It is easy to reason about and flexible. I do start each parsing function with a comment stating the informal grammar the function should parse (and LLM autocomplete usually types the rest of the function).
With regard to portability: I've found cross-language parser generators especially unpleasant to work with. Instead, I just implement the parser in a language that runs on all platforms I care about.
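The "informal grammar as a comment" habit described above might look like this; `parse_while` is a hypothetical example, with operands flattened to bare identifiers to keep the sketch short:

```python
def parse_while(tokens):
    # while_stmt ::= "while" IDENT "do" IDENT
    # (a real parser would call parse_expr / parse_stmt here)
    assert tokens[0] == "while"
    cond = tokens[1]
    assert tokens[2] == "do"
    body = tokens[3]
    # return the AST node plus the unconsumed remainder
    return ("while", cond, body), tokens[4:]
```

The comment doubles as lightweight documentation of the grammar, and keeps each function honest about exactly what it consumes.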
I agree. I have written the lexer/parser for my language twice (for compiler0 and for a self-hosted compiler). It's a very dumb task requiring almost no mental load.
Profiling results show that the amount of time spent lexing/parsing is negligible - less than 1% of the total compilation time.
I wrote a few of these due to an interest in compilers and hardware.
The easiest syntax to copy if you’re looking for a high level language is Smalltalk.
But most of the time, I wouldn’t even use that. Simple imperative languages that look like BASIC work pretty well in most domains. If you simplify the syntax a little, it’s very easy to understand the compiler and to use it when, say, you want users to input code into existing systems.
I have written compilers for two families over the years: C and ML. My current preference is Python. I am currently working on a statically typed language that is inspired by Python (minus objects and OOP) that runs on a register VM.
Syntax is a minor issue but something that people are very opinionated about. You could technically build multiple front ends that share the typechecking, CFG validation, optimization, register allocation and byte code emission phases. But it is too much work for what is presently a personal project.
Are they public? Can we study from them? Got later into compilers and I'm trying a little bit of everything
There are many open source compiler and interpreter projects on github.
also:
https://github.com/BaseMax/AwesomeInterpreter
and probably there is one for compilers too.
To me the most interesting part of a notation is the underlying thing that actually runs the code: the virtual machine, if you will. There are many ways to do that, but I don't know of a good systematic overview. E.g., what is Forth, if we ignore the notation? What is Lisp? What is Pascal, and how is it different from C?
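One hedged answer to the Forth question: ignoring the notation, Forth is roughly a dictionary of words mutating a shared data stack, executed one word at a time. A toy sketch (the word set here is my own minimal pick, not a faithful Forth):

```python
def run(program):
    """Execute a flat list of literals and word names."""
    stack = []
    words = {
        "+":   lambda: stack.append(stack.pop() + stack.pop()),
        "*":   lambda: stack.append(stack.pop() * stack.pop()),
        "dup": lambda: stack.append(stack[-1]),
    }
    for w in program:
        if isinstance(w, int):
            stack.append(w)     # literals push themselves
        else:
            words[w]()          # words pop/push the shared stack
    return stack

run([3, "dup", "*", 4, "+"])
# → [13]   (3 squared, plus 4)
```

Real Forth adds a return stack and the ability to define new words in the dictionary, but the execution model is essentially this loop.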
I've also made my own language for making games. It's a Scheme with some tricks to make some gamedev-specific aspects much nicer. Making it work was indeed not that hard, but making it good has taken its toll. Really happy with it currently!
What modifications did you make to help with gamedev?
Async stuff, vector math (vector value type) and special let-style forms for pushing/popping render states. For example, if I want a drop shadow, I can do (vfx/drop-shadow [settings] ...) and a drop shadow effect will be applied to what is rendered within that scope. Also GC that works well with game style allocations where most of the allocations are dropped every frame.
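Those let-style push/pop forms map naturally onto scoped state in other languages. A Python context-manager analog of the `(vfx/drop-shadow [settings] ...)` form (the `render_stack` and effect names are illustrative, not the actual implementation):

```python
from contextlib import contextmanager

# Global stack of active render states, innermost last.
render_stack = []

@contextmanager
def vfx(effect, **settings):
    render_stack.append((effect, settings))   # push on scope entry
    try:
        yield
    finally:
        render_stack.pop()                    # popped even if the body raises

with vfx("drop-shadow", blur=4):
    # everything rendered here would see the drop-shadow state
    assert render_stack == [("drop-shadow", {"blur": 4})]
assert render_stack == []                     # balanced on exit
```

Tying the pop to scope exit is what makes the form safe to nest: the stack can never be left unbalanced by an early return or an error.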
I think more people having a crack at a language is a good thing. It demystifies a lot. For a long while I wanted the install guide for EYG (my language) to be a tutorial to write an interpreter in the language of your choice. I thought following the guide should take about a weekend and cover every feature in the language. For production you might want someone else's implementation, but for getting started what a great intro.
Easier than you think to get started, but harder than you think to turn into something truly usable that isn’t a toy or an experiment.
I've been having a lot of fun building my own programming language [1]. Getting to the point where you can write programs in your own language was surprisingly easy.
The language, Sapphire, is Ruby inspired, so the most interesting part is digging into the internals of the latter when I'm trying to figure out how something should work.
[1] https://github.com/sapphire-project/sapphire
I had a similar surprise about how approachable PL is, but from going from 'the bottom up' instead from a normal language.
I wrote a compiler toolchain and debugger that takes a Turing machine description plus input string and emits an encoded tape runnable by a Universal Turing Machine [0]. I had some prior PL experience, but never did an end-to-end compiler pipeline, at least not this low level.
It started as a joke/experiment, but I couldn't believe how fast it pulled me into designing:
- a small low-level ASM for building the UTM
- an ABI for symbol widths and encoding grammar
- an interpreter used as the behavioral oracle
- raw TM transitions for each ASM instruction, generated by having an LLM iterate on candidate emissions and checked against the interpreter oracle
- a CFG-style IR to fix the LLM mess once direct ASM -> TM emission became too hard to keep sane (LLM did a decent job actually, I don't think I would have done a much better job without the IR either)
- a gdb-style debugger for raw transitions, ASM routines, and blocks
- a trace visualizer
- a bootstrapping experiment where an L1 UTM/input pair was itself run through an L2 UTM
- optimisation experiments
And every step came quite naturally and was easy to tie in with everything else. Each one was just the next local repair needed to make the previous layer tractable.
[0] Repo: https://github.com/ouatu-ro/mtm
I wrote my own interpreted language about 25+ years ago to write online surveys. It made it easy to create complex surveys with many branches. I think I wrote it in Objective-C.
The team implementing the survey system wound up using the same language to implement the runtime portion, something I never expected or designed in.
I don't recall anything about what it looked like now. I do remember it was a lot of fun to write.
Yes, it's true that someone can put together a simple language, as in a university course. The difficulties, as mentioned at the bottom of the post, are things like metaprogramming features or optimizing compilers.
The tail ends of a language implementation (parsing and code generation) are a fixed cost; the "middle end" can grow unbounded as more production-quality items are added.
My language: https://www.empirical-soft.com
This URL was posted two days ago: https://news.ycombinator.com/item?id=48040422
Making a programming language is easy if you just copy ideas already existing in other languages.
Coming up with new ideas is hard. Especially since you have to test them in the real world.
this project is pretty interesting, although i'm wondering how they're planning to address the "easy sandboxing" design goal in a compiled language with raw pointer arithmetic and clib interop... in that regard i think lua would have been a lot easier to sandbox, despite the author's concerns.
(also, they might want to look into lua userdata, since that would address their concern about the overhead of converting between native and lua data structures. the language is designed to be embedded in C programs after all)
Just making a better C with no real compiler (only JIT) is easy, I agree. It's much harder to make something innovative and mature. It requires years of development.
Strange to read that C++ can be someone's favorite programming language.
Only thing that goes for C++ is that it has acceptable (not straightforward) C interop.
I don't like C# and X++ because the language surface is huge, but if you use a limited subset then, needless to say, they are very useful and handy languages too.
> X++
I was very deep into .NET until recently but somehow I didn't know this existed. Looks like C# with extra Linq-to-SQL syntax; I guess it's a DSL made with Roslyn for ERP jobs? I wonder how they picked the name.
There are many like it, but this one is mine https://loonlang.com
Like most things in programming, handling the easy stuff is easy, but it’s all the edge cases that kill you. I’m writing an IDE in Flutter right now, and all of the defensive programming I have to do to handle the unhappy path is where 50% of my code goes.
So maybe we need programming languages that are really good/supportive at handling errors (while not introducing more of them)?
I watched a lot of YouTube videos explaining in detail how to do it, but I admit I never tried it myself.
I'm kind of curious and want to try it for fun whenever I get some free time ^^
Any reasons for not using Odin? It seems great for gamedev.
Odin has no destructors. That's a fatal flaw.
Well, neither does C or Zig, right? But Zig and Odin do offer defer, which should be good enough while maintaining simplicity, right?
defer is not a proper replacement for destructors. One needs to write it manually each time, in each function where some cleanup is needed. It's easy to forget to do so, or to do it in the wrong way. Destructors, on the other hand, are called automatically, and all the cleanup logic is written exactly once (within the destructor body).
Great write up!
For years I've been fantasizing about a language designed specifically for gameplay development that doesn't try to be like C.
Maybe AI is good enough now to help me with that...
The last time I tried, Claude couldn't even help me build a syntax highlighter for a hypothetical language.
At least in my opinion, it might depend on what kind of game. For example, there can be: a card game, certain kinds of puzzle games, a Pokemon game, etc. There might also be considerations such as what portability you want, what sandboxing you will want, etc.
(There are game engines that have their own programming language for those kinds of games (and some of them are ones I had made up too).)
If I were to make my own programming language, it would look an awful lot like Python.
Roughly 100%.
I agree, Python allows anyone to write bad code, but makes up for it by running the code slow enough that it can't do real damage.
> > If I were to make my own programming language, it would look an awful lot like Python.
> I agree, Python allows anyone to write bad code, but makes up for it by running the code slow enough that it can't do real damage.
In the same sentence you agree with the implied beauty of the syntax of Python and then go on sarcastically about the performance of CPython. Presumably you deliberately mixed language and implementation because you needed a soapbox, so hey, here's my comment to which you can reply and continue your rhetoric.
If someone smarter than me didn't think to invent a new language to solve what is likely a common problem, the solution already exists.
It is absolutely not the case that all problems worth solving are solved already. Programming language development isn't necessarily about being a genius but rather a willingness to put in a monumental amount of work. Writing a language that compiles is easy enough. Getting a language off the ground to an actually useful place is tedious, simply in terms of the sheer amount of work to be done. Specification, implementation, documentation, diagnostics, optimization, configuration, tooling support, and creating a standard library (especially a cross-platform one) are things that will mire you in many hundreds of hours of work.
Yeah except my version would only accept tabs instead of allowing (and even encouraging!!) spaces for indentation.
I see that and I raise Elastic Tabstops!
https://nick-gravgaard.com/elastic-tabstops/
Making your own language is easy. Creating the libraries that will actually solve problems, without forcing developers to reinvent the wheel, is the crux. There is a reason why C++ / Java / JavaScript etc. are established: it's the already proven libraries around those languages that allow them to be so successful.
I have only read the first part of the article, but I can't help but think that a project like libriscv[0] would've/could've worked for their game project too, because, fun fact, the creator of libriscv, the legendary fwsgonzo, is also making a game. I highly recommend that people check out their Discord server.
But my main point is that libriscv is one of the fastest RISC-V emulators, and then something like C/C++/Lua could've been used, with sandboxing, for the purposes of the game.
Am I missing something? Although making a programming language is its own kind of project, and that's really cool as well :-D
But I would also love to hear the author's opinion on libriscv, as it feels like it ticks all the boxes, from my understanding.
[0]: https://github.com/libriscv/libriscv