Designing a Programming Language: Lessons from Building Jelt

What I learned designing and implementing the Jelt programming language, from syntax decisions to compiler passes and C interop.

  • Compilers
  • C++
  • Language Design
  • Programming Languages

By Jenn Barosa

I wrote a compiler for a language called Jelt. It started as a way to learn how compilers work end-to-end, and turned into an exercise in language design that taught me more than I expected.

Syntax Choices

Jelt uses symbols for common declarations: @ for functions, & for structs, && for enums. Most languages spell these out as keywords, which reads well in prose but adds visual weight. I wanted to see what a language looked like when you stripped that away.

The result is denser code. A Jelt file looks different from C or Rust at a glance. Whether that's good or bad depends on who you ask. For me, once the symbols are in muscle memory, scanning code gets faster. For someone reading Jelt for the first time, there's a learning curve.

Compile-Time Evaluation

Jelt has a $$ prefix for compile-time constants. These get fully evaluated during compilation and inlined into the output. The idea is that if a value can be known at compile time, it should be resolved then.

The module system uses "sections," which are named scopes that can be imported into other files. Sections can nest, so you can organize libraries as hierarchies. The compiler carries full type and signature information through section imports, unlike C headers where you're basically copy-pasting text.

C Interop

Early on I realized that a language without access to existing libraries is basically useless for anything practical. Jelt can call C functions directly. The compiler generates compatible calling conventions, and you can import C header declarations. So you get the entire C ecosystem for free. Need to open a file? Call fopen. Need sockets? Include the header and go.

This was probably the single most important design decision. It meant I could write real programs in Jelt immediately, even while the standard library was still nonexistent.

Compiler Passes

The compiler is structured as four passes:

Lexer turns source text into tokens. The symbolic syntax actually makes this easier: @ is unambiguous on its own, and distinguishing & from && takes only one character of lookahead.

Parser builds an AST using recursive descent. I went with a hand-written parser instead of a generator because error messages are so much better when you have full context at every parse point. Parser generators give you "unexpected token at line 47" and that's about it.

Semantic analysis does type checking, scope resolution, and compile-time evaluation. This is where $$ expressions get resolved and their values get stored in the AST.

Code generation currently targets C. This is a common bootstrapping strategy and it means Jelt gets whatever optimizations the C compiler applies for free.

Things I'd Change

Error recovery in the parser is weak. One syntax error tends to cascade into a wall of secondary errors that are all noise. Rust's compiler invests a lot of effort into suppressing cascading errors and making the first message actually useful. I should have prioritized that earlier.

I also should have built a REPL sooner. When you're iterating on syntax design, the compile-run-check loop is slow. A REPL lets you try ideas in seconds. I added one late and immediately started making better design decisions because the feedback loop was so much tighter.

The Hard Part

Parsing and codegen are well-understood problems with known solutions. The hard part of building a language is that every feature interacts with every other feature. You add closures and now you have to think about how closures interact with your memory model. You add generics and now type inference gets more complicated. The whole thing is a web of interlocking constraints, and finding a set that feels coherent takes a lot of trial and error.