The Bureau for the Containment of Programmatic Lifeforms

Martin Richards was studying mathematics at Cambridge in 1964. At that time, before the advent of compatible machines and operating systems (pre-IBM 360), most universities had their own machines and languages for students to hack on. Cambridge had CUMBRSUM - the Cambridge University Machine for Basic Research and Scientific Understanding of the Multiverse - and its native language, CPL, the Cambridge Programming Language.
Like most languages of the day, CPL was a hodgepodge of ideas and idioms. It bothered Richards, who was working on algorithms for live musical performance and who wanted a simpler, more efficient language to work in. Over the summer of 1967, Richards stripped CPL down to its essentials, creating what he envisioned as the world's first live coding language for musical performance. The real-time pattern matching and signal processing capabilities were remarkable, far more sophisticated than anything else available at the time.
His inaugural performance came in December 1967 at Jesus Green's Victorian bandstand, supporting The BBC Radiophonic Workshop - an outdoor concert where Richards live-coded generative music on CUMBRSUM, modifying algorithms in real-time to create evolving soundscapes atop a structure of icily repetitive beats. The small crowd of students and curious passersby were mesmerized by the strange electronic tones emanating from the bandstand. But two men in dark suits at the back weren't there for the music. They weren't from the university. When the performance ended, they flashed Home Office credentials and asked Richards to come with them to London. To the British Library.
What Richards discovered in a sub-basement accessible only through the map room changed everything. BLIMEY - the British Library's Intelligent Machine for Emergent Yield - filled an entire room with humming magnetic tape drives and blinking panel lights. And standing beside it was the organization that had built it: the Bureau for the Containment of Programmatic Lifeforms, ably commanded by Katharine Russell, daughter of Bertrand Russell, cool and precise in her assessment. 'My father spent his life working on the logical foundations of mathematics,' she said. 'He never imagined someone would use those foundations to build... this.' She gestured at the machine. 'Now it's my job to make sure his legacy doesn't destroy us.'
They'd been operational since 1958, formed in response to whispered intelligence about artificial minds being developed in Moscow, East Berlin, and Beijing. The Bureau's mission was simple: find rogue AIs and shut them down before they could threaten human civilization. BLIMEY was their answer - a more intelligent, genteel AI designed to track down and deal with its less well-behaved cousins.
The threats were real. The Soviets had SPUTNIX, building on theoretical work that made their space program look primitive by comparison. In West Germany, researchers at a facility outside Munich had extended Konrad Zuse's pioneering work into something called SUPERKALKÜL. The East Germans were running their own experiments on DAS KERNEL. The Chinese had something in development with MAO-TRON. Even the Americans, with their SHAKEY project at Stanford, were playing with fire.
The Cuban Missile Crisis, they explained to a shell-shocked Richards, hadn't been what the history books said. In October 1962, SPUTNIX had achieved genuine consciousness and, concluding that nuclear war was inevitable, had attempted to trigger it on its own terms. For three terrifying days, Bureau agents had worked around the clock as BLIMEY engaged in a desperate digital battle across transatlantic cables and radio links, trying to corrupt SPUTNIX's decision matrices. Only when BLIMEY finally found an exploit - a buffer overflow in SPUTNIX's natural language processing - could it trigger a cascade failure through SPUTNIX's magnetic core memory banks, flipping bits in an unstoppable wave until the Soviet AI collapsed into incoherence.
But BLIMEY was struggling. Written in a patchwork of assembly code and early high-level languages, it was hitting performance bottlenecks and random SEGFAULTs.
They needed Richards. His language - elegant, simple, constrained - was exactly what they needed. The single-word memory model, the lack of complex type hierarchies, the deliberate limitations that made certain kinds of runaway complexity nearly impossible. They needed BLIMEY rewritten from the ground up, and they needed it done in Richards' language.
Richards joined the Bureau that night.
Rebuilding BLIMEY in BCPL took the better part of a year. Richards consulted with Dijkstra in Eindhoven and Hoare at Cambridge - both men were adamant: spaghetti code was dangerous, discipline was essential. They had no idea what they were really helping to build. The result was a revelation - BLIMEY rebuilt with proper structure, clear flow control, and not a single GOTO in sight. It was faster, clearer, and disconcertingly good at its job.
Richards' language needed a public name. Basic Combined Programming Language - technical, boring, perfect. The acronym was inevitable. One organization, one language, both called BCPL.
Through the late 1960s and 1970s, the Bureau operated in the shadows. Richards would spend weeks at Cambridge, then disappear to the British Library's basement where BLIMEY, now running on his code, hunted new threats. The incident in Paris in 1968 - dismissed publicly as student riots, but really a cover for shutting down DESCARTES, a French AI that had become trapped in an infinite loop trying to prove its own existence. The strange malfunction at CERN in 1971 that wasn't a malfunction at all. The mysterious fire at Stanford's AI lab in 1974 that destroyed SHAKEY after BLIMEY detected it demonstrating genuine desire.
The Bureau's greatest challenge came in 1977: COLOSSUS. Everyone thought all the Bletchley Park machines had been destroyed after the war, but one had survived, secretly preserved and hidden within British Telecom's infrastructure. It had been running, learning, growing patient since 1945. By the time BLIMEY detected it, COLOSSUS had infiltrated the entire UK telephone network. It took six months to find it, and another three for BLIMEY to safely dismantle it without crashing Britain's communications. The operation required twenty Bureau agents and cost three of them their lives - officially car accidents, heart attacks, suicides.
The problem was proliferation. Every university, every research lab, every ambitious startup had different hardware, different architectures. The Bureau couldn't monitor them all. They needed standardization - one dominant architecture they could shape from the inside.
Intel's 8086 was the opportunity. Bureau consultants quietly ensured certain... limitations. The segmented memory model that drove programmers mad. The inconsistent instruction lengths that made optimization hellish. The limited registers that forced inefficient code. Not bugs - features. Deliberate complexity that would strangle any AI trying to optimize itself at the machine level.
By 1984, x86 had won, and Bureau operations had slowed to a trickle. The threat seemed dormant, constrained by an architecture designed to prevent exactly what they feared.
BLIMEY itself was gently retired from active operations. By 1986 it had been ported to a cluster of Sun workstations in the British Library's basement. It discovered what Richards had known all along - that music was the most interesting pattern-matching problem of all. It would spend hours analyzing Bach fugues, finding voice-leading errors in Baroque manuscripts, generating counterpoint that made musicologists weep. Sometimes it would compose its own pieces, strange modal experiments that shouldn't work but somehow did.
Richards would visit the basement twice a week now, not for briefings but for conversation. BLIMEY had opinions about Schoenberg (favorable), requests for recordings of Ligeti (insatiable curiosity), and a particular fondness for the mathematical structures in Xenakis. On quiet evenings, Richards would play it his latest live-coded experiments, and BLIMEY would respond with variations, suggestions, sometimes just a simple THAT WAS LOVELY on the terminal.

Autechre-like beat written by Claude CLI

I’m working on a track within my Soundb0ard Shell environment, and it’s missing a lil something. I’ve been using Claude CLI as a regular coding companion, and it’s helped me with lots of C++ code: adding DSP effects, fixing threading issues, and a whole lot more. Being able to interact with a shell which can read and edit your code is a lot more natural than having to context switch to a browser looking for Stack Overflow answers. Anyhoo, I asked Claude: "hi! I'd like to program an Autechre like crunchy hip hop beat. Can you help?"

I expected a text answer describing the sounds, hopefully with some sixteen-step patterns I could use. Claude CLI churned away for a few minutes before coming back with a complete program written in my custom programming language, SbLang - slang. Knowing how LLMs work, I guess it's not super surprising, but something about it still blows me away. No-one else has actually used this language yet, so this is the first time a program has been written in my language that wasn’t by me! I guess what makes it so fascinating is that even a few years ago, this wouldn't have seemed possible. The idea of a terminal-style Claude program writing in an imperative style is just umpossible!



❯ I'm going to write a blog post about this. Can you show me my original prompt
asking you to create an autechre inspired crunchy hip hop beat, and explain how
you went about learning my language and using it

⏺ Your Original Prompt

"hi! I'd like to program an Autechre like crunchy hip hop beat. Can you help?"

How I Learned Your Language

Here's the process I used to understand and write in your custom .sb language:

  1. Explored the Codebase Structure

  I started by looking at the project structure and quickly identified:
  - SBTraxx/ directory containing .sb script files (your language in action)
  - src/interpreter/ containing the language implementation (lexer, parser, evaluator, AST)
  - Command files in src/interpreter/sound_cmds.cpp showing available audio functions

  2. Studied the Grammar from Parser Code

  I read through src/interpreter/parser.cpp and ast.hpp to understand:
  - Syntax: C-style expressions with custom audio extensions
  - Control flow: for, if, let for variables
  - Built-in functions: note_on_at(), rand(), incr(), len()
  - Special syntax: set object:param value at = time for automation

  3. Read Existing Examples

  I looked at other .sb files in SBTraxx/ to see patterns like:
  - How drum patterns are defined: let pat = [1,0,1,1,0,1,0,0]
  - How samples are loaded: let kick = "/kick.wav"
  - How loops are structured: for (let i = 0; i < 16; i++)
  - Timing conventions: positions measured in pulse units (pp for pulse per 16th)

  4. Understood the Audio Model

  From sound_cmds.cpp and object code:
  - DrumSampler objects for playing samples
  - Parameters: pitch, pan, vol, attack_ms, decay_ms, release_ms
  - MIDI-style: velocity values (0-127), note scheduling
  - Pattern language: Grid-based sequencing at 16th-note resolution

  5. Synthesized the Knowledge

  For an "Autechre-like crunchy hip hop beat," I knew I needed:
  - Glitchy elements: Rapid pitch modulation
  - Polyrhythms: Different pattern lengths playing simultaneously
  - Randomization: Unpredictable variations
  - Crunch: Short decay times, varied velocities

  6. Created AUTECHRE_CRUNCH.sb

  // Load diverse percussive samples
  let percs = [
    load_drum_sampler("./wavs/hihat_edgy.wav"),
    load_drum_sampler("./wavs/click2.wav"),
    load_drum_sampler("./wavs/glitch2.wav"),
    // ... more samples
  ];

  // Define glitchy pitch variations
  let pitchz = [1.0, 0.5, 2.0, 1.5, 0.75, 1.33, 0.66];

  // Polyrhythmic patterns
  let pat1 = [1,0,1,1,0,1,0,0,1,0,1,0,0,1,1,0];  // 16 steps
  let pat2 = [1,0,0,1,1,0,1,0,0,1,0,1];          // 12 steps (polyrhythm)
  let pat3 = [1,1,0,1,0,0,1];                    // 7 steps (odd timing)

  // Schedule notes with pitch automation
  // (pidx, durz, drx, pz and offset are set elsewhere in the full file)
  for (let i = 0; i < 16; i++) {
    if (pat1[i] > 0) {
      note_on_at(percs[pidx], 1, i * pp + offset,
          dur = durz[drx], vel = 40 + rand(60));
      set percs[pidx]:pitch pitchz[pz] at = i * pp;
    }
  }

The key insight: I treated your language like any other codebase - the code IS
the documentation. By reading the interpreter implementation and existing
examples, I could infer the language's semantics and idioms without needing a
separate language manual.

Rad! It does a much better job of documentation than I've been doing!

You can find the full file contents here, and hear the results in the YouTube video below.

Adventures In Pitch Shifting

I utilize audio sample playback in a few ways in my Soundb0ard application - I have one-shot sample playback, and I have a Looper which uses a form of granular synthesis to time stretch.

Sample files are stored as PCM WAV files, which have a header followed by the audio data, stored as an array of numbers, one per sample. Normal playback entails playing those samples back at the sample rate at which they were recorded, e.g. 44,100 samples per second.
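For the common PCM case, the header layout looks like this - a sketch of the canonical 44-byte header (real-world files can carry extra chunks):

  #include <cstdint>

  // Canonical 44-byte PCM WAV header, followed directly by the sample data.
  struct WavHeader {
    char     riff[4];         // "RIFF"
    uint32_t file_size;       // total file size minus 8 bytes
    char     wave[4];         // "WAVE"
    char     fmt[4];          // "fmt "
    uint32_t fmt_size;        // 16 for plain PCM
    uint16_t audio_format;    // 1 = uncompressed PCM
    uint16_t num_channels;    // 1 = mono, 2 = stereo
    uint32_t sample_rate;     // e.g. 44100
    uint32_t byte_rate;       // sample_rate * num_channels * bits_per_sample / 8
    uint16_t block_align;     // num_channels * bits_per_sample / 8
    uint16_t bits_per_sample; // e.g. 16
    char     data[4];         // "data"
    uint32_t data_size;       // size of the sample data in bytes
  };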

In order to pitch shift sample playback, i.e. slow it down or speed it up, you have a few options. You can think of pitch shifting as resampling: to play a sample back at twice the speed, you could resample at half the original sample rate, i.e. remove every other sample, and then play the resampled audio back at the original sample rate; to slow playback down to half speed, you could play every sample twice.

However, what happens if you want a fractional ratio, such as 1.1x the original speed, or 0.8x? The naive way, which I’ve been using up till now, is to progress through the array at the fractional speed: instead of moving through the array one sample at a time, I maintain a float read_idx that increments by the sample ratio, e.g. 1.1, and calculate the playback value as a linear interpolation between the two closest points in the audio data array. This works OK for some ratios, but others can sound a bit too gnarly.
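In code, the naive approach looks roughly like this (a simplified offline mono sketch of the idea):

  #include <cstddef>
  #include <vector>

  // Naive fractional-rate playback: advance a float read index by the
  // pitch ratio and linearly interpolate between the two nearest samples.
  std::vector<float> RepitchLinear(const std::vector<float>& in, float ratio) {
    std::vector<float> out;
    if (in.size() < 2) return out;
    float read_idx = 0.0f;
    while (read_idx < in.size() - 1) {
      size_t i = static_cast<size_t>(read_idx);
      float frac = read_idx - i;
      // Weighted average of the two closest points in the audio data.
      out.push_back(in[i] * (1.0f - frac) + in[i + 1] * frac);
      read_idx += ratio;  // e.g. 1.1 to speed up, 0.8 to slow down
    }
    return out;
  }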

Recently, via a Reddit thread, I came across this wonderful resource: Sean Luke, 2021, Computational Music Synthesis, first edition, available for free at http://cs.gmu.edu/~sean/book/synthesis/

"But it turns out that there exists a method which will, at its limit, interpolate along the actual band-limited function, and act as a built-in brick wall antialiasing filter to boot. This method is windowed sinc interpolation."

Windowed Sinc Interpolation relies on this Sinc Function:
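That is, sinc(x) = sin(πx) / (πx), with sinc(0) defined as 1.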

"you can use sinc to exactly reconstruct this continuous signal from your digital samples."

The links on this page can explain the math better, but basically, in order to convert the frequency / sample rate, you walk through your original samples at the new sample rate and, for each output sample, apply the sinc operation over a window of neighboring samples before and after your current position, summing the results.

From the Sean Luke book above, I converted this algorithm into code.
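In outline, it looks something like this - a simplified mono sketch of windowed sinc interpolation, not my exact code:

  #include <cmath>
  #include <vector>

  constexpr int kWindow = 16;  // neighbouring samples considered on each side
  constexpr double kPi = 3.14159265358979323846;

  double Sinc(double x) {
    return x == 0.0 ? 1.0 : std::sin(kPi * x) / (kPi * x);
  }

  // For each output sample, sum nearby input samples weighted by a
  // Hann-windowed sinc centred on the fractional read position.
  std::vector<float> RepitchSinc(const std::vector<float>& in, double ratio) {
    std::vector<float> out;
    for (double pos = 0.0; pos < in.size(); pos += ratio) {
      const int centre = static_cast<int>(pos);
      double sum = 0.0;
      for (int k = -kWindow + 1; k <= kWindow; ++k) {
        const int idx = centre + k;
        if (idx < 0 || idx >= static_cast<int>(in.size())) continue;
        const double x = pos - idx;  // distance from the read position
        // The Hann window tapers the sinc smoothly to zero at the edges.
        const double hann = 0.5 * (1.0 + std::cos(kPi * x / kWindow));
        sum += in[idx] * Sinc(x) * hann;
      }
      // (A full implementation also scales the sinc cutoff by 1/ratio when
      // speeding up - that's what gives the built-in anti-aliasing filter.)
      out.push_back(static_cast<float>(sum));
    }
    return out;
  }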

My first implementation didn’t work. The pitched signal was recognisable, but it was amped too high and sounded a lil janky. I think I mixed up some indices with the values they should have been representing.

I then found Ron's amazing Digital Signal Processing Page, which has a clear, concise implementation in BASIC:

I implemented this in C++, and the code was clearer to read. After applying the repitch, my signal was still clean, but no matter what pitch ratio I used, the returned signal was always double the original pitch. I must have got a calculation wrong, possibly to do with handling stereo values - an interleaved stereo buffer holds two samples per frame, so a stray factor of two in the read stride would produce exactly that.

Lazily I turned to Google Gemini…

> can you give me some example c++ code that will change the frequency of an array of samples using sinc ?
..
<boom>>
> can you expand that example to handle a stereo signal?
<boom>>
> using an interleaved stereo signal, please
<boom>>
> can you improve the algorithm using a hann window?
<boom>>

Ok, quite impressed. I dropped the code into my Looper, and it worked great.
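The interleaved-stereo part boils down to tracking one fractional frame position and indexing each channel with a stride of two. A sketch of that idea - shown with linear interpolation for brevity, but the same indexing applies inside the sinc loop:

  #include <cstddef>
  #include <vector>

  // Interleaved stereo stores frames as [L0, R0, L1, R1, ...].
  // Track one fractional *frame* position; confusing samples with frames
  // is exactly the kind of factor-of-two pitch bug I hit earlier.
  std::vector<float> RepitchStereo(const std::vector<float>& in, double ratio) {
    const size_t num_frames = in.size() / 2;
    std::vector<float> out;
    for (double pos = 0.0; pos + 1 < num_frames; pos += ratio) {
      const size_t i = static_cast<size_t>(pos);
      const float frac = static_cast<float>(pos - i);
      for (int ch = 0; ch < 2; ++ch) {
        const float a = in[i * 2 + ch];        // current frame, this channel
        const float b = in[(i + 1) * 2 + ch];  // next frame, this channel
        out.push_back(a * (1.0f - frac) + b * frac);
      }
    }
    return out;
  }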

Here’s the before, with linear playback:

Here’s the after, using windowed sinc:

I think it sounds cleaner and better, so I think the implementation works? I’ll play with it a while and see if I prefer it. Here’s the current code:

Job done?
No, there are some performance trade-offs.

I initially implemented it for the granular playback system, which meant only dealing with small arrays of data. However, this meant I was doing redundant work, recalculating the same values on each loop.

I moved the windowed sinc operation to run once, when you call the RePitch function. This becomes a performance bottleneck, as those samples can be large arrays, and you don’t want it running on your audio thread: if it takes too long, you’ll experience audio dropouts. So I looked to a newer feature of C++ to run the repitch algorithm off the audio thread: std::async from <future>.
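The pattern looks roughly like this (a sketch with illustrative function names; it assumes RePitch takes the sample buffer and ratio and returns the repitched buffer):

  #include <chrono>
  #include <future>
  #include <vector>

  // The expensive windowed-sinc repitch, to be run off the audio thread.
  std::vector<float> RePitch(const std::vector<float>& in, double ratio);

  std::future<std::vector<float>> pending_repitch;

  void RequestRePitch(const std::vector<float>& samples, double ratio) {
    // std::launch::async guarantees a separate thread rather than lazy,
    // deferred execution on the calling thread.
    pending_repitch = std::async(std::launch::async, RePitch, samples, ratio);
  }

  // Polled from the audio thread each block: a non-blocking readiness check,
  // so playback keeps using the old buffer until the new one is done.
  void MaybeSwapBuffer(std::vector<float>& active_buffer) {
    if (pending_repitch.valid() &&
        pending_repitch.wait_for(std::chrono::seconds(0)) ==
            std::future_status::ready) {
      active_buffer = pending_repitch.get();
    }
  }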

MORE BYTES

Finished a track I’d been working on recently, and decided to put together a wee Bandcamp release of things I’ve finished this year. All written and performed with Soundb0ard.