SB#>_
tangent
Back in the 50s, computers were room-sized machines — essentially massive calculators. The first I/O interfaces were control panels: operators entered programs manually via switches. Next came batch processing and punch cards. Everything changed with CTSS, the Compatible Time-Sharing System, developed at MIT in the early 60s. For the first time, multiple terminals could share a single computer, and users could have an interactive programming experience.
Louis Pouzin, a visiting French scientist, had been accumulating programs and routines for CTSS and realized he should be able to combine and reuse them — use them as building blocks for larger commands. He wrote an early prototype called RUNCOM, which supported argument substitution and basic scripting.
Around 1964, the Multics project got started — an ambitious collaboration between MIT, General Electric, and Bell Labs. Pouzin didn't stay on, but contributed a paper coining the term shell: 'The SHELL: A Global Tool for Calling and Chaining Procedures in the System.' Glenda Schroeder picked it up and implemented it as the Multics command language.
Multics envisioned computing as a utility - like water or electricity, a central resource users would subscribe to. It was a bold vision, but eventually imploded under its own weight: over-engineered, slow, arriving just as the industry was shifting to smaller minicomputers. Bell Labs pulled out first. Despite being a commercial failure, it spawned many features still around today.
While working on Multics, Bell Labs' Ken Thompson had written a video game called Space Travel. Off the project now, he ported it to a DEC PDP-7 — ignored the existing OS entirely, wrote his own routines from scratch, and eventually found himself building a minimal operating system from the best ideas Multics had generated: hierarchical filesystem, simple process model, command-line shell. Punning on the name, he called it Unix. Its DNA lives on today in Linux, Android, and macOS.
main thread
Starting out as a Linux sysadmin in the 90s, I've lived my work life in a terminal shell — it's how you manage servers, log into remote machines, troubleshoot the slow website. How you get shit done. The alternative is the point-and-click interfaces of Windows and Mac, which are convenient until you need to do something fifty times, or to reproduce it exactly. With a shell, you save your commands in a script, and it's consistent, repeatable, editable, shareable.
Sysadmins write a lot of scripts — automating password changes across machine and network fleets, archiving logs, monitoring bandwidth. But a script isn't a program. A web browser, a music player, a video editor — that's a different order of complexity entirely. After years working with operating systems, I wanted to learn the language Linux was written in: C.
C goes right back to Unix at Bell Labs. The first iterations of Unix were written in assembly — barely a step above machine language. Thompson needed something higher-level, and adapted Martin Richards' BCPL into his own stripped-down language: B. His colleague Dennis Ritchie improved on it, incrementing the name to C. The full manual — the classic K&R, The C Programming Language — is a modest book you can read in a weekend.
I always think of C as the original hacker language. A fairly recent O'Reilly book on C describes it as punk rock, comparing its brevity to the classic "Here's three chords… now form a band."
One of my very first C exercises was writing a simple command-line shell. The tutorial demonstrated how to read text input from the terminal, compare it against a list of commands, and execute the matching one. My simple shell had only one command to start with: list the files in the current directory.
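The shape of that kind of tutorial shell can be sketched in a few lines of C. This is a minimal illustration of the read/compare/execute loop described above, not the original tutorial's code; all names here are made up:

```c
#include <stdio.h>
#include <string.h>
#include <dirent.h>

/* The one built-in command: list files in the current directory. */
static int cmd_ls(void) {
    DIR *d = opendir(".");
    if (!d)
        return -1;
    struct dirent *e;
    while ((e = readdir(d)) != NULL)
        printf("%s\n", e->d_name);
    closedir(d);
    return 0;
}

/* Dispatch: compare the input line against the known commands. */
int handle_command(const char *line) {
    if (strcmp(line, "ls") == 0)
        return cmd_ls();
    return -1; /* unknown command */
}

/* The read-eval loop; call shell_loop(stdin) from main(). */
void shell_loop(FILE *in) {
    char line[256];
    fputs("sb> ", stdout);
    while (fgets(line, sizeof line, in)) {
        line[strcspn(line, "\n")] = '\0'; /* strip trailing newline */
        if (strcmp(line, "exit") == 0)
            break;
        if (*line && handle_command(line) != 0)
            fprintf(stderr, "unknown command: %s\n", line);
        fputs("sb> ", stdout);
    }
}
```

Everything that follows in this story grows out of that loop: more commands, and eventually a real language behind the prompt.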
My desire to learn C wasn't coming from sysadmin work. For a few years I'd been doing creative coding on the side — Node audio scripts over RabbitMQ, sensor-driven Arduino cars. I'd been making music in Ableton for years but was never satisfied with using samples and loops. Inspired by friends and housemates who built their own systems in Max/MSP and custom languages, I decided I wanted to learn how to program audio and create my own sounds.
A few weeks later I was learning to synthesize sine waves. Unlike GUI music programs, where you turn dials and move sliders, I was writing and launching command-line utilities. I could write code to generate a 440Hz tone, compile it, run it, and hear the audio. But I had no way to interact with the process once it had started.
That's when I realized I could combine my audio code with a shell. Rather than launching individual processes, I could add a command to my shell to create a sine wave, and other commands to view its status or change its frequency. Suddenly, rather than a collection of individual tools, I had a platform: an environment to grow and expand.
SBShell began to take shape. I had a metaphor to build on: rather than a shell around an operating system, SBShell was a shell around a sound engine, a way to perform live, creating and manipulating audio computations through a command-line interface.
I found a book perfect for my use case, which taught both C and audio programming: "The Audio Programming Book" by Richard Boulanger and Victor Lazzarini. It laid a great foundation of practical C knowledge. The next big influence was "BasicSynth: Creating a Music Synthesizer in Software" by Daniel Mitchell. This one taught me how to organize my instruments, arming me with the concept of a central Mixer which owns all the Sound Generators and is responsible for the audio callback. Soon I had a simple FM synth, a basic time counter, and a step sequencer. I added sample support for looping and one-shot playback.
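The Mixer-owns-generators pattern can be sketched roughly like this. This is an illustrative outline of the architecture, not soundb0ard's actual implementation; every name below is invented. The audio callback asks each generator for its next sample and sums the results:

```c
#define MAX_GENS 16

/* Each Sound Generator is reduced here to a function that returns
 * its next sample, given a pointer to its own state. */
typedef float (*GenFunc)(void *state);

/* The Mixer owns all generators and their state. */
typedef struct {
    GenFunc gens[MAX_GENS];
    void   *states[MAX_GENS];
    int     num_gens;
} Mixer;

/* A trivial generator for illustration: constant DC at 0.5. */
float dc_gen(void *state) { (void)state; return 0.5f; }

/* The audio callback: the driver calls this whenever it needs a
 * buffer of frames; the Mixer sums every generator per frame. */
void mixer_callback(Mixer *m, float *out, int nframes) {
    for (int i = 0; i < nframes; i++) {
        float s = 0.0f;
        for (int g = 0; g < m->num_gens; g++)
            s += m->gens[g](m->states[g]);
        out[i] = s;
    }
}
```

The payoff of this shape is that the shell only ever talks to the Mixer: adding a command that creates or tweaks a generator never touches the real-time callback path.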
I spent a year working my way through Will Pirkle's insanely great "Designing Software Synthesizer Plug-Ins in C++". It starts you off with simple synth components - oscillators, then envelope generators, a DCA, then filters - and gradually adds features throughout the book, until you've built a series of increasingly complex synths. Both my FM and subtractive synths come from there: the FM synth is a four-operator design based on the DX100 architecture, and the subtractive synth is modeled on the Minimoog.
Before creating my granular synth implementation, I worked my way through Curtis Roads's "Microsound", with a detour into the writings of Ross Bencina and Robert Henke. Granular synthesis was the perfect way to upgrade my sample looper, which had been rigidly incrementing through the sample array, strictly following sample time. But sample time is not the same as wall time, and the drift between them has to be accounted for. Around the same time I replaced my hand-rolled timekeeping with Ableton Link, using it as the source of timing truth for my mixer. That beautifully solves the wall-clock/sample-time issue and lets soundb0ard automatically sync with any other Link-enabled device, and, combined with the granular system, my loops flowed smoothly.
After a few years working in C I felt I had earned an upgrade, and moved the code to the C++20 standard. I then spent another year replacing my massive regex text-matching interpreter loop with a real programming language, based on Monkey from Thorsten Ball's amazing "Writing An Interpreter In Go". Although that book is in Go, I worked my way through it implementing everything in C++, then extended the language with objects representing the sound generators, allowing them to be manipulated and controlled in real time through code. Beyond the sound generators, the Soundb0ard language (Slang) also contains a novel object called a Computation. It was inspired by the shape of Arduino and Processing sketches: the object has two functions, a setup() function called once for initialization, and a run() function called repeatedly. In the Processing world, the run function is called every frame; in Soundb0ard, it's called once per bar.
Here’s a simple example:
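(Slang's own syntax isn't reproduced here; the following C sketch shows the same setup()/run() shape with invented names, to make the lifecycle concrete.)

```c
#include <stdio.h>

/* A Computation pairs one-time initialization with a per-bar body,
 * mirroring the Arduino/Processing sketch shape described above. */
typedef struct {
    void (*setup)(void *state);        /* called once, before the first bar */
    void (*run)(void *state, int bar); /* called once per bar */
    void *state;
} Computation;

/* Example body: count how many bars have fired. */
void counter_setup(void *state) { *(int *)state = 0; }

void counter_run(void *state, int bar) {
    int *fired = state;
    (*fired)++;
    printf("bar %d: fired %d times\n", bar, *fired);
}

/* The engine's clock would call this at every bar boundary. */
void computation_tick(Computation *c, int bar) {
    if (bar == 0)
        c->setup(c->state);
    c->run(c->state, bar);
}
```

In Slang the run() body would typically trigger notes or nudge generator parameters, so a Computation becomes a little per-bar sequencer you can edit while it plays.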
Entering long code sections at the command line is unwieldy. Instead, you can work in a code editor and import and monitor the file from the Soundb0ard Shell, with live reloading to enable live coding. More recently I added a Drum Synth with nine voices, finally getting a deep kick drum I was happy with.
So - all these features - several unique synths, a drum machine, granular sample loopers and one-shot playback, a number of FX (delay, reverb, smudge, scramble) - all controllable in real time via a fashionable shell and a fully extensible programming language supporting integers, booleans, strings, arrays, hash maps, first-class functions (with closures and recursion), sound generators, computations, and a live-coding interface. I feel like I have something good here that's worth sharing! I've been working on it for about ten years now, using it for live performances and recordings that whole time. I think it's pretty damn solid at this point, and to be honest, I think it's the best art I've made!
If this sounds like something you’d be interested in playing with, check out the full USER_GUIDE.md and more examples at: SBShell