Reading (and Understanding) Derek Parfit

Derek Parfit has been justly described as “the most famous philosopher most people have never heard of.” If you’re into moral philosophy, he’s a must-read. And despite his spending a decades-long academic career with no formal teaching commitments, Parfit’s publication record is relatively manageable: you can cover the essentials in two books – Reasons and Persons (1984) and On What Matters (2011).

Why is Parfit a must-read? Well, that’s probably worth an entire extra post – but he has come up with a lot of very, very interesting stuff. In Reasons and Persons he comes up with completely fresh arguments about personal identity, and links them to arguments about how to balance self-interest and more impartial moral theories. There is then a cracking section on how to balance the interests of future people, including the “Repugnant Conclusion”, which you really should know about. In On What Matters, Parfit sets out his “Triple Theory”, which combines Kantian deontology, consequentialism and contractualism – three separate moral traditions that are supposed to be incompatible. He then includes comments on this exposition from four other philosophers, and sets out his response to each of them. Whether you agree with him or not, this is truly mind-expanding material.

There’s a catch, right? Well, kinda. The books are accessible to the layperson, but they need to cover a lot of ideas. Reasons and Persons has 400 non-appendix pages, and On What Matters has getting on for 1,000. Even if there’s some filler in there, that’s a lot to get into your head.

The style, at least, is accessible. Parfit alternates between “thought experiments” – short descriptions of moral dilemmas – and dense argument and reasoning that sets out the implications. It might just be me, but I find this an effective way to guide people through the terrain that Parfit is attempting to map out. The thought experiments help to stop everything from getting too bogged down.

At this point I should note that not everyone is a fan of Parfit’s style! One of the philosophers responding to Parfit in On What Matters, Allen Wood, goes on an extended rant about how much he hates the thought-experiment approach (which he denigrates as “trolley problems”). I do have some sympathy – I’ve never been a fan of Searle’s “Chinese Room” for exactly the reasons that Wood sets out. Parfit deals with Wood’s rant in his response by ignoring it completely. Stephen Mulhall’s LRB review of a biography of Parfit (“Non-Identity Crisis”) goes even further – frankly, it verges on the bitchy, concluding that the biography “presents its subject as an epigram on our present philosophical age – a compact, compellingly lucid expression of its own confusions and derangements.” It was reading that review that made me decide I should make a determined effort to understand Parfit.

You have been warned: see how you get on with Reasons and Persons before buying On What Matters!

But even if you do get on with Parfit’s style, there’s still an awful lot to cover! I’m used to being able to speed-read through a lot of complex material and retain the gist as I power on through. But when I tried reading through Reasons and Persons, I hit the buffers a quarter of the way in; there was simply too much to hold in my head. More traction was needed. After some experimentation, I found an approach that has successfully carried me through.

Parfit splits his material out by chapters, which then tend to have multiple numbered sections. The recipe that worked for me is simple: cover material in the following three stages:

  • First, read through, making sure you understand what you’re reading. Feel free to mark up significant passages in pencil, but don’t take detailed notes at this stage.
  • Then, at least a day after reading, make detailed section-level written notes. Elide detail that was useful as scaffolding but isn’t necessary for expressing the overall structure of the reasoning once you’ve got it in your head. I ended up with something like 1 page of A4 per 10-15 pages of original text – the optimal compression ratio depends on what you’re trying to summarise.
  • Then, at least a day after making the written notes, make word-processed chapter-level summaries. You should be able to do this solely by referring to your written notes. Aim to sum up each chapter in a single side of A4, although some chapters will require more – you’ll know those when you see them.

This can all take place simultaneously – i.e. you’ll be covering the material in three waves, with reading furthest on, section-level written notes behind that, and word-processed chapter-level summaries further back again.

How long to leave between stages is up to you, but I would recommend a day at the absolute minimum – I would say a few days is probably optimal for reading-to-notes, and notes-to-summaries could be left for longer if convenient.

And how long will all this take? Well, I embarked on my Parfit binge when I was on gardening leave. So I had a lot of free time – but I did find there was a limit to how quickly I could absorb the material, even so. I had to take breaks. Reasons and Persons took me a month. Volume 1 of On What Matters took another month. The first 2 sections of Volume 2 (comments and responses) took another fortnight.

Another person’s experience: the “Only a Game” blog took 4 months to cover Volumes 1 and 2 of On What Matters, which feels in the right ballpark for someone who has the cognitive load of a job to handle at the same time – read their take here. They reckon the best bit of On What Matters is the second half of Volume 2, which is up next – hopefully, I have a treat in store!

Cognitive Empathy and Software Development

(TL;DR: software is a social endeavour, and if you can’t figure out some way to engage with that fact, then that fact will engage with you nonetheless – and things are unlikely to go well for you from that point)

The ideas in this post will be obvious when I set them out. But they are clearly not obvious up front. I know this because I see many very clever and able people – far more clever and able than I – who clearly don’t understand these ideas, and who suffer greatly as a consequence.

Am I writing this because I’m a nice chap, who wants to help these poor people out?

No: sorry.

I mean, I’m not a monster, but I’m certainly not any nicer than the average person. Probably less nice than average, if we’re all to be honest with each other. I’m just tired of seeing people who are nice screw themselves up. People who I admire and respect sabotage themselves in the same way, time after time, and it’s so ugly! And unpleasant. And sad. And, mostly, really fucking boring. I want to take them by the scruff of the neck and… gah. It’s like a fishhook in my head and I want it to stop! But it will never, ever stop. Ah, me. Nevertheless, here goes…

There is a very specific kind of empathy deficit that is rife in IT. If you manage to be better than average in this respect, your efforts will greatly benefit both you and those you interact with. The nature of this empathy deficit is not obvious to most people; this is because empathy is a more complex concept than most people realise. The distinction necessary to understand the deficit’s nature is set out on Wikipedia’s Empathy page: it is between Cognitive Empathy (“Empathy of the Head”) and Affective Empathy (“Empathy of the Heart”).

Affective Empathy is common-or-garden human sympathy, or as Wikipedia puts it “the ability to respond with an appropriate emotion to another’s mental states”. If you are incapable of exercising any of this variety of empathy, I’m afraid that we are unlikely to be friends – not that you would care, of course. You may as well stop reading and go back to tearing the wings off flies, or whatever else you were doing before you encountered this essay. Have fun! Please don’t kill me!

Cognitive Empathy is a very different beast: it’s all about how much you get inside other people’s heads (Wikipedia: “the ability to understand another’s perspective or mental state”). Unlike Affective Empathy, this doesn’t carry any intrinsic moral virtue: you can be a lovely, wonderful person, and absolutely suck at Cognitive Empathy. Note that if you lack innate ability in Cognitive Empathy, there are workarounds – ways to “fake it ’till you make it” – much more so than for a lack of innate Affective Empathy.

Life is unfair in many ways, and here is one of them: people who use a lot of Cognitive Empathy are liable to appear far nicer than they actually are. An example is worth a thousand words, so let’s look at Dr. Hannibal Lecter. Dr. Lecter is very good at Cognitive Empathy, and unfailingly polite. To someone who is meeting him for the first time, he generally comes across very positively. However I have been given to understand that he leaves something to be desired as a social companion in the longer term, and I would definitely advise against inviting him to your next dinner party.

In IT, Affective Empathy is present in normal amounts, but Cognitive Empathy is very thin on the ground. I’m not going to get into why this is – there are many reasons, and even an overview of the “why” would be a substantial essay all by itself. But we all know that IT is chock-full of people who would far rather focus on technical, analytical framings of issues than get all “touchy-feely” and worry overmuch about the thoughts and feelings of others.

What’s the problem with that? To start seeing why, let’s start with a classic example of how this attitude goes wrong that isn’t limited to IT: have you ever slogged your guts out and felt that you’ve not been given sufficient recognition for your efforts, while someone else has risen, seemingly without any justification? If you don’t understand why this happened, then you didn’t understand enough about how the work was being evaluated: who was pronouncing on its worth, and what they attached value to. This is where even a smattering of Cognitive Empathy can help tremendously.

I know that this line of argument will infuriate some readers. They will be thinking: Why should I pander, and be a people-pleaser? Surely, if I just carry on doing Good Things then I should be recognised? And, if I’m not recognised, surely that means it’s the fault of the company; that I should go and work for somewhere else with a better value system? Right?

Listen up: if you don’t engage with what the people around you are thinking, you are at a massive disadvantage compared to anyone else who is paying even the slightest attention to the mental states of others. There is a romantic myth in IT of the sole coder who has an amazing idea and weaves a magical blanket of abstractions by working 24/7 in their bedroom, unveils it to the world, and receives instant fame and recognition for their genius. Like most myths and archetypes, it’s fatal to take this as an unadulterated guide to living in the real world. Almost everywhere, and almost all the time, creating software is a social endeavour. Most great ideas come from cross-fertilisation with a wide range of other people – some of whom are likely to think very differently from you – and a lack of appropriate context makes even geniuses appear stupid.

Here’s a story that I’ve seen unfold many times over my career: someone spends a long time (10 years or more) working on the same part of the same product, for the same company, without interacting with other teams much, whether internally or externally. Other team members (whether more extraverted ICs or managers) interface between them and the world outside their team. While the person most certainly adds value, hardly anyone outside their team knows who they are. This is an insanely dangerous way to (not) manage your career. At some point your local technological landscape will get invalidated, there will be redundancies, and you will get the chop, because the decision of who to chop is made by multiple people (managers, client reps, etc) getting together and voting, and only one of these people will know who you are, and so they will be outvoted by all the others.

You’re still here? Amazing. Look, first of all, I owe you an apology. I’ve probably come across as a bit of an arsehole. This isn’t an accident; I’m attempting to shock you out of complacency (and, in my defence, I did say that I wasn’t a particularly nice person). I wouldn’t be trying to do this at all, but no-one else is going to bother: they’ve been spending their whole life cruising past your low-cognitive-empathy slum in their high-cognitive-empathy limousines. And most of the people in the limos don’t really get why you’re in the slum, and the minority who do have an inkling: well, I wouldn’t say that they’re laughing at you, not exactly, but they don’t care about you either; at least, not enough to help you out. So you’re stuck with me, doing my best. Sorry about that. But also: fuck those guys, right?

And there is a way out. If you work in the IT realm and think that you may be suffering from this kind of disadvantage, and would like to make the pain stop, I recommend studying the works of Gerald Weinberg. I didn’t get to meet him in person, which is one of my few major regrets in life: he died in 2018, leaving a legacy of marvellous writing behind. His original classic “The Psychology of Computer Programming” was written in 1971 but is still as relevant as ever. I have read every non-fiction book he has written and haven’t regretted a single one. He has the technical chops: he was a project manager on Project Mercury. But also he transcended personal tragedy in his family life, and as part of his journey he produced this amazing synthesis of Rogerian psychological thought with the practical process of writing systems. I cannot recommend his writing highly enough.

So there is lots of good news here. You can learn Cognitive Empathy. And this isn’t just about career benefits, about helping yourself out. While being able to better comprehend what everyone around you thinks, feels, and wants is very rewarding professionally, it’s also tremendously rewarding in itself: you will be able to help more people and generally do more good in the world.

And on top of all of this – circling back to career benefits – knowledge of Cognitive Empathy will be hugely more durable than all those technology hamster wheels we spend all our time running around. I have lost count of the number of networking technologies I have learnt over the last 30 years – but people? They are just the same now as they were 30 years ago, or the last few hundred years, come to that. It’s an incredibly useful bit of technology that is going to be just as relevant when you retire as it is today. You know, like key bindings for vi?

But maybe you’re not convinced. Well, fine, I’ve done my best and, like I said, I’m really not a particularly nice person so I don’t really care. Feel free to ignore me and focus on those sweet, sweet technical/analytical problems that we all love so much. But don’t come running to me when an idiot manager or daft fellow developer who happens to be exercising slightly more Cognitive Empathy than a broken toaster runs rings around you.

Incidentally, don’t despair if you feel that a self-perceived lack of neurotypicality on your part might make all this too difficult. I’m not neurotypical either, having a nasty dose of Emotional Contagion. Depending on the mix at your workplace, non-neurotypicality might actually make it all easier: check out The Double Empathy Problem to see what I’m talking about.

Excel Tetris

Because you’re worth it.

Written back when it was fashionable to make Excel do strange things.

(Disclaimer: Uses Windows Timers. Probably doesn’t work nowadays. Will spoil your milk, make strong men weep, and turn the weans against you)

Millennials

At some unspecified time in the past, when I was in a managerial/quasi-managerial role in an investment bank (keep it vague, Matt, keep it vague), we had An Occasion when anyone who was managerial/managerial-adjacent was brought into a half-day session organised by HR about how to deal with Millennials. The reason? We were having a lot of problems around retention for graduate Millennial joiners in our most recent year group, and we wanted to stop the rot.

The session covered the familiar talking points that the older generation typically raise about feckless youth. Millennials expect too much on coming into an organisation, when they aren’t ready for the responsibility. They don’t respect hierarchy. They require coddling. They’re snowflakes. And so on, yadda yadda yadda.

But I knew why the Millennials were really leaving. In the problematic year group, a few people had been placed, post-internship, with a part of the organisation that had been deemed non-profitable. They were laid off rather than places being found for them elsewhere. This was a violation of an implicit contract that this and similar organisations had at the time – while we might have a lot of churn and instability, we would not crap on people fresh out of an internship by making them redundant, no matter what. The norm would be that you would reassign them. They’ve been around for a few years? Fine, it’s open season. But less than 12 months after starting full-time work? Nope.

As a consequence of this norm violation, meerkat-like, everyone in that year’s cohort raised their heads, nodded at each other, and a decent fraction (disproportionately selected from those with most moxie and smarts) had changed their employer to be someone other than us before the year was out. They were absolutely right to have done so, and I would have done the same thing myself in their position.

I knew what was really going on because – unlike most of my colleagues in the managerial sphere at that organisation, at that time – I took pains to check in with everyone I felt responsible for on a fortnightly basis.

(Yeah, I know everyone does this weekly nowadays but this was many years ago, and we’re talking 14 people here. I was busy, OK?)

Much to my shame, I didn’t speak up during the session. I nodded along with everyone else, tutted about how terrible the Millennials were, and was working somewhere else myself before the year was out.

270 Playing Card Lampshade

In 2011, Nick Sayers made a roughly spherical design out of 270 playing cards. Well, 265 if you leave a hole to put a light-bulb in – 5 whole decks, plus one joker from each. It’s easy to find pictures of it on the internet, e.g. here and here. It has gaps to allow light through. It has a pleasing mixture of structure and irregularity. It looks fantastic. I wanted to figure out how to make it myself.

The design looks complicated, but it’s not so bad. It’s a Platonic solid in disguise – either a dodecahedron or an icosahedron, depending on what you take the centre of a face to be: judging from the angles of the playing-card cuts, it’s pretty much a wash, since everything bulges outwards a bit compared to a true Platonic solid.

You can figure out how to fit the cards together from the pictures (whether of Nick Sayers’ build or mine). But before getting started on this, I highly recommend you figure out how to make some Platonic solids first – it’ll give you an idea of how this kind of thing works. See my previous post for more on that.

Assuming you’ve gained some basic idea of how to build the simpler stuff, then here’s the key to building the 270-card behemoth: there is a 9-card motif that you need to repeat 30 times, which is shaped like a diamond. At each of the two sharp points of the diamond, 5 cards come together. At each of the other two points of the diamond, 3 cards come together. Everywhere else, 4 cards come together.
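
As a quick sanity check on those numbers (this is just my own back-of-the-envelope arithmetic, not part of Nick Sayers’ write-up, and it assumes each card at a meeting point comes from a different motif), the counts hang together nicely – and the 12 and 20 that drop out are exactly the icosahedron/dodecahedron mentioned earlier:

motifs = 30
cards_per_motif = 9
print("total cards:", motifs * cards_per_motif)       # 270

# each diamond has two sharp corners (5 cards meet) and two blunt corners (3 cards meet)
sharp_corners = motifs * 2
blunt_corners = motifs * 2
print("5-card meeting points:", sharp_corners // 5)   # 12
print("3-card meeting points:", blunt_corners // 3)   # 20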

The really interesting bit is exactly where to cut the cards so they slot together in the right way. In my previous post on building Platonic solids, you can see how this kind of thing gets worked out. For the 270-card construction, by all means work out a theoretical starting position (this helped me) but eventually you will need to be prepared to experiment with a pack or so of cards to see what alternatives pan out best given the way the physical material behaves. I recommend using packs of Waddingtons No. 1 cards when experimenting – they are pretty tough and nice and cheap.

(Note that each 9-card motif uses 5 cards with one pattern of cuts, and 4 cards with the mirrored version.)

In terms of actually putting it together, build one 9-card motif off to one side, and refer to it as you make the main build. Start with a 5-vertex and build outwards from there. Once you have all the slots cut, and any other backing applied (see below), it will still take you at least 4 hours to put everything together. There are some points where you can take a rest – basically you’ll need to complete something radially symmetric so it doesn’t start bending too much.

While a finished build looks great, it does have one drawback as a lampshade: playing cards are not designed to let light through – being opaque is kind of a major part of their job description, come to think of it. So, while the lampshade does look lovely with a light bulb in the middle, it’s somewhat underpowered as a light source: more of a glowing coal than a blazing fire, let alone a UFO that will eat your mind as you gaze dumbly into its alien projection of Nirvana.

Sorry, where was I?

Ah, yes: improving light output. I did wonder if it might be possible to get more light out by building it with cheaper playing cards, which lack the “core” that makes normal playing cards opaque. But such cards tend to be generally weak, whether from the lack of the “core” layer or just because the cardstock is thinner. This weakness rules them out: the twisting inherent in the design means that those cheaper cards rip and can’t be used successfully.

My best idea so far: put stick-on chrome mirror vinyl on the parts of the backs of the cards that lie entirely within the construction, the idea being that the light will bounce around until it eventually makes its way out. From the outside, it still looks like normal playing cards – but now, when you chuck light out from the inside, you get more than just a gentle glow – enough light escapes to get some general illumination of the surrounding space. An encouraging sign is that the colour of the light that escapes is a closer match to the original bulb colour – before, it took on the hue of the playing cards, since so much of it was being absorbed by their surface.

More external light would still be good. The photos below are all I currently get from a 26W corn LED light bulb, which puts out 3,000 lumens (around 200W equivalent for an incandescent) so is already quite punchy. I’ll try ramping it up and see how far I can go before setting everything on fire.

As to which playing cards to use? The main problem is potential tearing where the cards meet – once the whole construction is made you’re fine, but while putting things together quite a lot of stress can be present at some stages. I prototyped with Waddingtons No. 1 bridge cards, only £1.40 a pack, and they stood up to the twisting forces involved remarkably well. The final build uses Copag Jumbo Index Poker Cards, which are more like £10 a pack in the UK: about as resilient as you are going to get. But to be honest, they didn’t really deal with the forces much better than the Waddingtons did – although I did dial up the amount of card twisting to the maximum level I thought I could get away with.

You handsome devil
The interior, all lovely and shiny
If you want to do this yourself, I recommend you cut out the vinyl using a Silhouette/Cameo machine. One A4 sheet should give you 15 shapes – so 2 10-packs will suffice for one build.

Making platonic solids out of playing cards

Yet another very niche post. Would you like to do this?

This image is taken from https://mathcraft.wonderhowto.com/how-to/make-platonic-solids-out-playing-cards-0130512/ – however the templates document that this page refers to is no longer available, and there’s no information on the underlying maths either.

I’m trying to figure out how to build more complex stuff of this type, so I figured it was worth sitting down and working it out from scratch. See below for both the maths, and the cut angles for all 5 platonic solids.
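
While the full cut-angle working isn’t reproduced here, the starting point is the dihedral angle of each solid – the angle at which two faces (and hence two cards) meet along an edge. These have standard closed forms, and a few lines of Python will print them (this is my own summary, not the missing templates – the exact slot positions also depend on card size and how much overlap you want):

import math

# dihedral angles of the five Platonic solids (standard closed forms)
dihedrals = {
    "tetrahedron":  math.acos(1 / 3),
    "cube":         math.pi / 2,
    "octahedron":   math.acos(-1 / 3),
    "dodecahedron": math.acos(-1 / math.sqrt(5)),
    "icosahedron":  math.acos(-math.sqrt(5) / 3),
}

for name, angle in dihedrals.items():
    print(f"{name:12s} {math.degrees(angle):6.2f} degrees")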

Plotter Directions

There are a number of ways to go:

  1. Op-Art: Bridget Riley blocks, moire, etc
  2. “Natural” – textured trees, flower fields, etc
  3. “Glitch” – a more robotic/computerised feel with explicit acknowledgement of the underlying medium
  4. Organic growth abstracts
  5. Textual integration

I’d like to explore the textual side, but it’s hard to bridge the gap between text and drawings and integrate them effectively. I feel like it’s maybe the best direction to explore, though, as the other territory seems to be thoroughly colonised. The counter-argument is that I think that Cy Twombly sucks ass.

One possibility would be to extend the non-line nature of text into collage?

Vaccines – An Excerpt

This is the concluding part of a longer essay I wrote a while ago, when sorting out my thoughts on anti-vax. It’s based on “The First Rotavirus Vaccine and the Politics of Acceptable Risk”, Milbank Q. 2012 Jun; 90(2): 278–310.

(Context: there was a “RotaShield” vaccine withdrawn in 1999 following confirmation of a serious adverse event associated with its use with infants. The story around this is an important part of the mythology of anti-vax in respect to one of their prime targets, Paul Offit – one of the most public faces of the scientific consensus that vaccines have no association with autism, among many other things)

The history of the Wyeth/RotaShield vaccine’s approval while Paul Offit was on the ACIP panel is crucial. It is at the heart of disagreements around Offit himself, but it also operates as a key part of a feedback loop that poisons discussion across the pro-vaccine/vaccine-sceptic divide. I will explain why.

The pro-Offit position is that everyone made decisions around Wyeth/RotaShield in good faith, and that the decisions made remain perfectly understandable based on the evidence available at the time the decisions were made, and that there is no evidence to support the notion that there was wrongdoing involved.

I believe that this position is true.

I simultaneously believe that the ACIP process at the time needed improvements to its conflict-of-interest policy. The very fact that people can accuse Offit over RotaShield in the way they do, in a manner that carries a reasonable degree of credibility to a casual observer, surely proves that there is an issue here. I do not think there is any inconsistency in holding these two beliefs at the same time: you can believe that a COI policy needs improvement, without believing that people in any given situation in fact acted badly.

But Offit’s hard-line critics see a very different picture. They see someone who joined the ACIP panel primarily to enrich himself – using the influence of his position to create a vast and open market for his own (Merck/RotaTeq) vaccine by rushing through the approval of a competitor (Wyeth/RotaShield) vaccine with known safety issues. At its strongest, the narrative is that Offit was fully expecting RotaShield to be withdrawn, leaving the pre-established market wide open just as his own vaccine was available to fill the gap. The consequence of this was the suffering of around 100 children, of whom 1 actually died and 50 had to have surgery. In return for this, he has profited to the tune of tens of millions of dollars.

In other words, the essential narrative for many of Offit’s opponents is that he has killed a child for money – that Paul Offit is literally a baby-murderer.

Everything else in people’s views of Offit flows from their interpretation of the RotaShield episode. To call this an “interpretation gap” is a huge understatement: it is a gulf, a chasm. It is impossible to reconcile the views. Offit is a medical researcher who has saved hundreds of lives. He has killed a child for money. He campaigns energetically to save lives in the face of death threats. He schemes endlessly to further his own interests, while children suffer and die as a direct consequence. He is a good man who is doing his best to do good things. He is “the devil’s servant” (whale.com).

And standing on the other side of the divide is Offit’s dual, the closing half of the feedback loop – Andrew Wakefield. The anti-Offit characterisation is echoed in many of the accusations that are flung at Andrew Wakefield by the more intemperate pro-vaccine parties. Truly, many from each side honestly believe that the other side’s prophet is a baby-murderer. This is such a deeply unpleasant thing to consider that the more decent among those on each side rarely articulate it openly – they don’t even like to call the thought fully to mind – but the thought is there on both sides all the same.

Both Offit and Wakefield give a lot of speeches, but they don’t use this kind of rhetoric directly about their opposite number, and there is a good reason for this. It is a deeply primal, a tremendously powerful thing to accuse someone of – there is so much energy to be tapped from the sense of revulsion that results. But it never ends well, because the energy is diseased, tainted at its source. Once you believe someone is a baby-murderer it is hard to even think of them as fully human. Discussion turns ugly even if the underlying accusation is never fully brought to the surface – the unspoken thought poisons everything that it touches, killing respect and goodwill.

More generally, it is healthy for all of us to reject accusations like this wherever they crop up, whoever they are aimed at. No monsters here – only us.

CMY/CMYK pen plotting with Sakura Pigma Microns

This is possibly the most obscure post I will ever write.

Having recently built a “4xidraw” as per Misan’s instructable at https://www.instructables.com/4xiDraw/ I’ve been investigating various uses. I would totally recommend this as a project for anyone into electronics: the accuracy is pretty solid, the speed is good, and the cost is a fraction of the “Axidraw” from Evil Mad Scientists – although the “Axidraw” is obviously a far better buy for anyone who wants the machine to get out of the way and just get on with the art bit.

One thing that interested me was the prospect of doing full-colour images, and the “4xidraw” design seems more than capable of overlaying multiple plots in different colours. The theory here is that you combine Cyan, Magenta and Yellow in various combinations – the “CMY” part. If there’s a certain base level shared by all 3, you can draw that out into a “Key” level that is black, giving “CMYK” as the overall name. So by using 3 pens, you can potentially do full-colour designs.
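
In code, the textbook version of that conversion looks something like this (a minimal sketch of the standard formula – nothing pen-specific yet):

def rgb_to_cmyk(r, g, b):
    # standard RGB [0,255] -> CMYK [0,1]: the base level shared by all three
    # inks is drawn out into the K (black) channel
    if (r, g, b) == (0, 0, 0):
        return (0.0, 0.0, 0.0, 1.0)

    c = 1 - r / 255
    m = 1 - g / 255
    y = 1 - b / 255

    k = min(c, m, y)           # the shared base level
    c = (c - k) / (1 - k)
    m = (m - k) / (1 - k)
    y = (y - k) / (1 - k)

    return (c, m, y, k)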

However the pens that many recommend for archival-quality plot prints (Sakura Pigma Microns) don’t come in CMY colours. The best we can do is “Blue” (not “Royal Blue”) for “C”, while “M” is “Rose” and “Y” is “Yellow”. The blue, in particular, is way too dark. What to do?

It turns out that with a bit of adjustment, one can rebalance the colour to something pretty acceptable. I arrived at my mix by eye, doing test plots with a small image containing blocks of red, green, blue, cyan, magenta, yellow and 50% grey. Note that I only tuned colours in 10% increments. The first thing to do was to reduce the “C” (blue, really) in the mix. Setting this at 50% was best – but then “Y” was slightly overpresent. Reducing that to 80% produced the best mix that I was able to come up with using Sakura colours.

I ended up with the following Python code to map from RGB to CMYK. Note that I’m always setting K to zero because, for the kind of plotting I’m doing (“wiggle plots” – where you spiral outwards and wobble back and forth by an amount proportional to the amount of colour), it doesn’t make sense to draw out colour onto a K-layer.

def rgb_cmyk_convert_sakura(r, g, b):
    RGB_SCALE = 255

    # rgb [0,255] -> cmyk [0,1]
    # white = no wiggle (0), black = maximum wiggle (1)
    if (r, g, b) == (0, 0, 0):
        # pure black: maximum wiggle on all three colour pens
        return (1.0, 1.0, 1.0, 0.0)

    rc = r / RGB_SCALE
    gc = g / RGB_SCALE
    bc = b / RGB_SCALE

    # K stays at zero: we are calculating for "wiggle" plots, where drawing
    # the shared component out onto a separate black pen doesn't make sense
    k = 0.0

    # weightings arrived at by eye: Sakura "Blue" is much darker than a true
    # cyan (hence 0.5), and "Yellow" was slightly overpresent (hence 0.8)
    c = (1 - rc - k) / (1 - k) * 0.5
    m = (1 - gc - k) / (1 - k)
    y = (1 - bc - k) / (1 - k) * 0.8

    return (c, m, y, k)
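
For what it’s worth, here’s a quick look at what the mapping gives for a few colours – each returned value simply scales the wobble amplitude for the corresponding pen’s pass (the sample colours are arbitrary):

# a couple of example conversions through the Sakura-weighted mapping
for name, rgb in [("red", (255, 0, 0)), ("green", (0, 128, 0)), ("50% grey", (128, 128, 128))]:
    c, m, y, k = rgb_cmyk_convert_sakura(*rgb)
    print(f"{name:8s} -> C {c:.2f}  M {m:.2f}  Y {y:.2f}  K {k:.2f}")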

The results can be seen below. The green of the clown’s body at the bottom right is furthest off – Sakura’s blue doesn’t really have any green in it so we are missing some saturation there. But the match overall is not too shabby.

Original
“Wiggle Plot” rendition using CMY mapping above
Staedtler CMY Pigment Liner pens with a “true” (unweighted) mapping – admittedly, far better colour balance!

Green Screen Webcam For £5

Amaze your friends and irritate your work colleagues by setting up your own custom video background to webcam calls for a total outlay of £5!

Commercial software is out there which will attempt to extract your image and replace the background – but it’s (a) expensive to buy and (b) demanding on the processor. Far cheaper to go old-school and use chroma keying – this will enable you to use open source software to replace your background, and seems to run fine on my pre-i3 box which I got off eBay for £50 a few years ago.
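
For the curious, the chroma keying itself is conceptually simple: mask out anything close to the screen’s green and substitute the background pixel. Here’s a rough sketch of the principle in Python with OpenCV – purely illustrative, since OBS does all of this for you, and the HSV thresholds below are guesses that would need tuning:

import cv2
import numpy as np

def chroma_key(frame, background):
    # mask out anything close to green-screen green in HSV space
    # (hue runs 0-179 in OpenCV; roughly 35-85 covers the greens)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([35, 80, 80]), np.array([85, 255, 255]))

    # wherever the mask fires, take the background pixel; elsewhere keep the subject
    background = cv2.resize(background, (frame.shape[1], frame.shape[0]))
    out = frame.copy()
    out[mask > 0] = background[mask > 0]
    return out

# e.g. (hypothetical filenames): cv2.imwrite("keyed.png", chroma_key(cv2.imread("me.png"), cv2.imread("beach.png")))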

1. Get Green Screen Fabric

Search eBay for “green screen fabric”. You should be able to buy 1×1.6m / 3×5ft fabric for around £5 – larger pieces will cost correspondingly more, but I managed to get it working with this size.

You can jury-rig a way to hang it behind you (my own solution involves a stepladder, mop pole, g-clamp and 4 large paperclips) or buy a backdrop stand for around £20 more (search eBay for “photography backdrop stand”).

2. Install Open Broadcaster Software

This is free. Go here: Open Broadcaster Software project

3. Install the OBS-VirtualCam Plugin

This is also free. Go here: OBS VirtualCam Plugin

4. Set up ChromaKey output based on your camera feed, and make that available as a virtual camera

The basics:

  • Set up a profile with 3 sources – your webcam, your microphone (for audio) and a media source (which should be below your webcam in the layering order).
  • On the webcam source, right-click, pick “Filters” and add an Effect Filter of “Chroma Key”. You may need to tweak the settings slightly to get a clear image.
  • On the media source, set up whatever you want. Don’t worry if it’s wider than the camera.

Some more detailed instructions (the latter includes how to make the output available as a camera):

5. Pick your background

Both static backgrounds and videos are possible by setting them up in OBS.