One of my goals for Objective-Smalltalk is to achieve instantaneous builds and seamless live programming.
I recently rediscovered the interactive programming experience of my old Apple ][+, which boots instantly and allows for immediate code execution and modification.

While this level of interactivity can be found in more complex systems like Smalltalk, it highlights that responsiveness is achievable without cutting-edge technology. It’s a matter of prioritization.
Over time, build times seem to be increasing:

“At my work, build time increased 2.5 times to almost an hour in 3 years (using a 2014 Mac mini). Even an iMac Pro now takes 8 minutes 🤦‍♂️”

— (@et_rc1) November 7, 2019
This observation reflects a concerning trend.
Some languages, like Swift, accept long compile times in exchange for optimizations and static type-checking, yet concrete evidence for the claimed benefits is often lacking.
Minimum Viable Program:

“A running program, even if not correct, feels closer to working than a program that doesn’t run at all”

(from a paper about Scratch) https://t.co/pNzDJYL9Gc

— Geoffrey Litt (@geoffreylitt) November 6, 2019
In contrast to this trend, Objective-Smalltalk prioritizes interactivity. With today’s powerful hardware, there’s no reason why development experiences can’t be more interactive, even in demanding environments like Unix, macOS, and iOS.
Objective-Smalltalk aims to be fast, near-live, and have instant builds using:
An interpreter: A simple AST-walking interpreter capitalizes on modern hardware speeds, proving sufficient for many tasks. (See The Death of Optimizing Compilers by Daniel J. Bernstein for more on this concept). This also allows for seamless iOS development.
Late binding: Objective-Smalltalk uses late binding for messages, identifiers, storage, and dataflow. This enables modularity and separate compilation, but requires careful design to avoid performance issues.
Separate compilation: Inspired by Smalltalk and traditional C programming, Objective-Smalltalk emphasizes separate compilation. This means changes in one file don’t require recompiling the entire codebase, a feature often neglected in modern macOS and iOS development.
A fast and simple native compiler: While an interpreter is suitable for interactive development, a fast compiler is crucial for full builds. Objective-Smalltalk leverages TinyCC’s backend to generate machine code directly, without intermediate representations.
```c
static void gcall_or_jmp(int is_jmp)
{
    int r;
    if ((vtop->r & (VT_VALMASK | VT_LVAL)) == VT_CONST &&
        ((vtop->r & VT_SYM) && (vtop->c.i-4) == (int)(vtop->c.i-4))) {
        /* constant symbolic case -> simple relocation */
        greloca(cur_text_section, vtop->sym, ind + 1,
                R_X86_64_PLT32, (int)(vtop->c.i-4));
        oad(0xe8 + is_jmp, 0); /* call/jmp im */
    } else {
        /* otherwise, indirect call */
        r = TREG_R11;
        load(r, vtop);
        o(0x41); /* REX */
        o(0xff); /* call/jmp *r */
        o(0xd0 + REG_VALUE(r) + (is_jmp << 4));
    }
}
```
Early tests demonstrate promising speed, generating binary code at roughly 600 KLOC/s. Even allowing for slowdowns as the compiler matures, this approach is significantly faster than traditional toolchains.
While tcc might not offer extensive optimizations, its simplicity and speed, combined with Objective-Smalltalk’s design, aim to provide a fast and efficient development experience.
Defense in Depth
By combining an interpreter, late binding, separate compilation, and a fast native compiler, Objective-Smalltalk aims to provide a consistently fast and enjoyable development environment that rivals existing toolchains.
Achieving this responsiveness doesn’t require groundbreaking techniques, just a commitment to prioritizing speed and efficiency.
