making things that make things


The New Literacy.


That's how I felt this morning reading 'Coding is not the new literacy' by Chris Granger.

He weaves together an array of topics I've been dwelling on – namechecking Mindstorms, Bret Victor and Alan Kay along the way.

The power of code is in how it supports new kinds of thinking. It's in how it lets us represent, explore and communicate ideas. It's a means to such ends, not the end in itself.

The well-intentioned 'learn to code' movement frequently misses this point.

Reading and writing gave us external and distributable storage. Coding gives us external and distributable computation. It allows us to offload the thinking we have to do in order to execute some process. To achieve this, it seems like all we need is to show people how to give the computer instructions, but that's teaching people how to put words on the page. We need the equivalent of composition, the skill that allows us to think about how things are computed. This time, we're not recording our thoughts, but instead the models of the world that allow us to have thoughts in the first place.

The article pinpoints modeling as 'the new literacy'. Models let us play and explore. Such acts are an important form of creativity, and an important way of understanding complex things.

Interestingly, Granger points to Excel as the most ubiquitous tool for modeling:

Through Excel we can model any system that we can represent as numbers on a grid, which it turns out, is a lot of them. We have modeled everything from entire businesses to markets to family vacations. Millions of people are able to use spreadsheets to model aspects of their lives and it could be argued that, outside of the Internet, it's the single most important tool available to us on a computer. It gains this power by providing a simple and intuitive set of tools for shaping just one material: a grid of numbers. If we want to work with more than that, however, we have to code.
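The grid-of-numbers idea Granger describes can be sketched in a few lines. This is a toy illustration only, with an invented scenario and invented cell names: named cells hold values, and formula cells recompute from them, which is exactly the kind of computation millions of spreadsheet users already do.

```python
# A toy sketch of the spreadsheet idea: named cells hold values,
# and formula cells derive new values from them. The scenario and
# cell names are invented purely for illustration.

cells = {
    "units_sold": 120,
    "unit_price": 9.50,
    "fixed_costs": 400.00,
}

# Formulas are functions of other cells, like =units_sold*unit_price.
formulas = {
    "revenue": lambda c: c["units_sold"] * c["unit_price"],
    "profit":  lambda c: c["revenue"] - c["fixed_costs"],
}

def recalculate(cells, formulas):
    """Evaluate each formula in order, storing results back in the grid."""
    c = dict(cells)
    for name, formula in formulas.items():
        c[name] = formula(c)
    return c

model = recalculate(cells, formulas)
print(model["revenue"])  # 1140.0
print(model["profit"])   # 740.0
```

Change `units_sold` and the model recomputes, which is the whole appeal: the person shaping the grid is doing computation without ever calling it programming.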

The beautiful thing about this is that millions of people, who would never consider themselves programmers, do indeed practice certain forms of (advanced) computation on a regular basis. There are many parallels in other forms of modeling and problem-solving, and yet the peculiar nature of code with its alien syntax is deeply off-putting.

Which is, of course, a huge opportunity for intervention.

He breaks down the process of modeling into these stages, which I think could be a useful framework for qualifying some of my ideas:

  • Specification: How to break down parts until you get to ideas and actions you understand.
  • Validation: How to test the model against the real world or against the expectations inside our heads.
  • Debugging: How to break down bugs in a model. A very important lesson is that an invalid model is not failure, it just shows that some part of the system behaves differently than what we modeled.
  • Exploration: How to then play with the model to better understand possible outcomes and to see how it could be used to predict or automate some system.
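The four stages can be walked through on a deliberately tiny model. The model here (a linear estimate of commute time) and all its numbers are invented purely to make each stage concrete:

```python
# Specification: break the system into parts we understand.
def commute_minutes(distance_km, minutes_per_km=3.0, overhead=5.0):
    """Model: a fixed overhead plus a constant pace per kilometre."""
    return overhead + minutes_per_km * distance_km

# Validation: test the model against observed reality.
observed = [(2.0, 12), (5.0, 20), (10.0, 36)]  # (distance_km, actual minutes)
errors = [actual - commute_minutes(d) for d, actual in observed]

# Debugging: a poor fit isn't failure -- it tells us the real system
# behaves differently than we modeled (e.g. pace isn't really constant).
worst = max(abs(e) for e in errors)

# Exploration: play with the model to predict an unseen case.
prediction = commute_minutes(7.5)
print(worst)       # 1.0
print(prediction)  # 27.5
```

The point of the sketch is that none of the four stages requires alien syntax to understand; the syntax is just one possible recording of them.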

The context of this post is Granger's work on an unreleased product called Eve. A useful precursor is his post 'Towards a better programming', which explores some deeper issues with our archaic programming languages. Some clues to how Eve may work are given in this demo for a prototype product called Aurora:

It's hugely reassuring – and validating – to find other people asking many of the same questions as me.

I listened to a discussion of the subject in which an important idea – advocated by Piaget and Papert – was mentioned. To teach children (and I believe anyone) complex ideas you have to start with something familiar. We encounter this informally all the time: "It's like x but different because y."

It would seem that Granger wants to take the familiarity of Excel and let people move to something hugely more versatile and powerful. That seems like a strong approach.

I'm encouraged to use this notion as an anchor for my own ideas in building technical / procedural literacy. What are the familiar activities, with parallels to programmatic thinking, that might be a jumping off point?

The Narrows.

The Narrows is the thin stretch of water separating Staten Island and Brooklyn, forming a tight entrance to the Port of New York and New Jersey. Also a workable metaphor for where I'm at – I'm going to need to narrow in on an idea if I'm going to pass this threshold and make it up the river to the SVA Theater on time.

We shared super lo-fi prototypes in class by "bodystorming". I played around with how classmates perceived an opaque (actually random) selection and a selection reached through a winding and overly-complex 'algorithm'. We're faced with content and choices presided over by the wisdom of algorithms numerous times a day. One angle on improving technical literacy could be helping make the way such systems work tangible and explorable, aiding a greater critique and mastery over the systems and services around us.
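The contrast from that exercise can be sketched in code. This is a toy, with an invented scoring rule: one selection is a black box, the other explains its own working, which is the "tangible and explorable" quality described above.

```python
import random

# Two ways to pick an item, echoing the in-class exercise.
# The items and the length-based scoring rule are invented toys.

items = ["apple", "banana", "cherry"]

def opaque_pick(items):
    """A black box: the person sees only the result, never the reasoning."""
    return random.choice(items)

def explorable_pick(items):
    """A traceable rule: score each item by name length and show the working."""
    scores = {item: len(item) for item in items}
    winner = max(scores, key=scores.get)  # first maximal item wins ties
    explanation = f"chose {winner!r} because it scored {scores[winner]}"
    return winner, explanation

winner, why = explorable_pick(items)
print(why)  # chose 'banana' because it scored 6
```

The second function is barely longer than the first, but it surrenders its logic for inspection, and that difference is the whole intervention.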

This recent post was an inspiration: The Cathedral of Computation

Algorithms aren’t gods. We need not believe that they rule the world in order to admit that they influence it, sometimes profoundly. Let’s bring algorithms down to earth again. Let’s keep the computer around without fetishizing it, without bowing down to it or shrugging away its inevitable power over us, without melting everything down into it as a new name for fate.

One attraction I have to this space is that the notion of algorithms has pervaded culture without an accompanying understanding. People talk about them. They are mysterious and powerful. That feels like an opportunity for intervention.

The in-class prototype didn't really push things forward for me – I didn't construct it appropriately to get the most useful feedback – but there was useful follow-up discussion.

My recent quandary has centered mostly on whether to chase the 'how to nudge people into greater computational literacy?' angle or 'how to make the experience of programming more human-centered?'. These are pulling from both ends of the same problem, under the rubric of a more participatory technology culture. But being split between them is diluting my thinking and slowing me down.

I'm asking myself some tough questions:

  • What will I be most proud of?
  • What will have the greatest impact?
  • What am I best placed to tackle?
  • What will I learn?

I need to be honest about the fact that I'm not a 10,000+ hour programmer. I'm not even a 1,000+ hour programmer. What I bring is a new-found passion for the craft, design-thinking and a philosophically-inclined perspective. Are these the right components to tackle such a huge, decades-old problem?

I'm not sure.

Sometimes I think my perspective from the fringe of engineer-culture gives me freedom to question things. Sometimes I think I'm not immersed deeply enough to solve the problems.

For our next prototype we've been asked to consider...

  • Areas (attributes, features, part of the experience) that you have explored that need refinement.
  • Areas (attributes, features, part of the experience) that you have not yet explored.

...then to...

  • Prioritize these areas. Which are most important? Which parts can be left out?
  • What is the appropriate method for prioritized areas of focus – prototyping? research gathering? brainstorming?

Until I settle on my direction, these are hard questions to answer. In my next post I'll cover some material I came across today which is helping me home in.

Natural Language Understanding (NLU).

The most ambitious of my current avenues of exploration involves looking for ways to reduce the burden on people of having to ‘think like computers’ in order to program, instead pushing the programming environment incrementally towards more ambiguous, human, goal-oriented thinking.

In the context of this, I was pointed towards Wolfram|Alpha's work on a 'Natural Language Understanding System'.

The following is their description of the technology. They're deploying this as part of the Wolfram Language and specifically refer to programming as an intended function: 'Wolfram NLU lets you specify simple programs purely in natural language then translates them into precise Wolfram Language code.'

Complex linguistics, not statistics

Wolfram NLU is set up to handle complex lexical and grammatical structures, and translate them to precise symbolic forms, without resorting to imprecise meaning-independent statistical methods.

Learning from users

The high performance of today's Wolfram NLU has been achieved partly through analysis of billions of user queries in Wolfram|Alpha.

Knowledge-based disambiguation

Wolfram NLU routinely combines outside information, like a user's geolocation or conversational context, with its built-in Knowledgebase to achieve extremely high success rates in disambiguating queries.

Curating natural language

Wolfram NLU has a huge built-in lexical and grammatical Knowledgebase, derived from extensive human curation and corpus analysis, and sometimes informed by statistical studies of the content of the web.

Understanding raw human thoughts

Wolfram NLU is set up not only to take input from written and spoken sources, but also to handle the more "stream-of-consciousness" forms that people type into input fields.
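The core move described above, from natural phrasing in to precise symbolic form out, can be illustrated with a deliberately crude sketch. Real systems like Wolfram NLU are vastly more sophisticated than pattern matching; the rules and phrases here are invented toys, meant only to make the translation step tangible.

```python
import re

# Invented toy rules mapping English phrasings to symbolic expressions.
# Nothing here reflects how Wolfram NLU actually works internally.
RULES = [
    (re.compile(r"add (\d+) and (\d+)"),     lambda a, b: f"{a} + {b}"),
    (re.compile(r"multiply (\d+) by (\d+)"), lambda a, b: f"{a} * {b}"),
]

def to_symbolic(phrase):
    """Translate a simple English phrase into a precise symbolic form."""
    for pattern, build in RULES:
        match = pattern.search(phrase.lower())
        if match:
            return build(*match.groups())
    return None  # phrase not understood

expr = to_symbolic("Please add 3 and 4")
print(expr)        # 3 + 4
print(eval(expr))  # 7
```

Even this crude version shows the shape of the promise: the person stays in ambiguous, human phrasing while the system takes on the burden of precision.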

To be explored in more depth, but certainly a reassurance that my line of inquiry is valid.


On the wise advice of Paul Pangaro I started mapping and modeling my domain of interest to begin identifying targets for intervention.

These are first drafts examining the present and near future [I hope to build on them].

Firstly, types of interaction between people and software, with the Y-axis indicating control, empowerment, technical literacy and critical perspective – both as necessities for the person to move upwards but also as outcomes of such deeper interactions. It should work both ways.

Consider this model in relation to this snippet of conversation between Alan Kay and Doug Engelbart.

Software Interaction

Next I considered layers of abstraction when people interact with software. The Y-axis indicates a spectrum from object to representation and precision to ambiguity. That is, the physical manifestation of the software in binary and hardware, up to its representation as an experienced artifact, and the precision of this object up to its ambiguity as something a person interprets rather than a machine.

Within this model, we see the distinction between the abstractions dealt with by a user versus a programmer.

Digital Abstraction

Finally I mapped a potential shift in how we instruct computers to do novel things (i.e. program them). While oversimplified, the diagram is intended to show how a programmer must today take significant mental leaps from an idea to a series of steps or commands, and from there develop an explicit, machine-readable and unambiguous program.

Flattening the Development Process

Achieving the goal of enabling more people to participate in software rather than simply consume it (see first diagram) would mean closing the gaps between these mental leaps. Firstly, perhaps tools and literacy could enable more people to think in the logical and sequential steps that computers require. Much of software development is working out the logic before any code is written, and this step – crucial to any understanding and critique of software – remains alien to most.
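That logic-before-code step can be made visible in a small sketch. The scenario and the 10% discount rule are invented for illustration; the point is that the numbered steps are the real thinking, and the code below them is just a transcription:

```python
# The logic, worked out before any code -- the step most people never see:
#   1. start with an empty total
#   2. look at each purchase
#   3. if it's over the discount threshold, reduce it by 10%
#   4. add it to the total
# (The checkout scenario and the 10% rule are invented toys.)

def checkout_total(prices, threshold=50.0, discount=0.10):
    total = 0.0                          # step 1
    for price in prices:                 # step 2
        if price > threshold:            # step 3
            price = price * (1 - discount)
        total += price                   # step 4
    return total

print(checkout_total([20.0, 60.0]))  # 74.0
```

Someone who can articulate the four numbered steps already possesses most of what this function records; the gap to close is the transcription, not the thinking.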

The further step – very much speculative and perhaps far-fetched – is to harness current work on interpreting context and intent, and processing natural language. Consider the progress a service like Google has made in inferring context, intent and disambiguation in the realm of search. It's possible that developing these kinds of next-gen tools has never been prioritized because those with the necessary technical skills have (by definition) no shortage of ability in abstract thinking and explicit symbolic notation. In other words, they don't need them.

Plenty more to do, but this has been a useful start in representing recent thinking.

Audience + Lifecycle.

Audience [structure: for – who have – in an – that – unlike – the product]

For people who consider themselves "non-technical" or assume that learning to code is something they would never do, yet who have – at least abstractly – a preference for greater control over and knowledge of the digital products and services they interact with. Additionally, anyone whose anxiety about technology is likely rooted in their "limited" understanding.

My intervention will ideally live in an everyday or casual learning context that will generate understanding, insight and mental tools for thinking about technology and beyond. Some aspects may lean towards making leaps from creative concepts to execution; in this case the emphasis is on a creative development context.

Unlike simply using technology, or even customizing it, the intervention should empower people in both creative ability and critical thinking around technology. I imagine a different socio-cultural impact to the 'learn to code' movement (which, while constituting many fantastic initiatives, focuses acutely on the tools – perhaps like learning vocabulary and grammar rather than a higher-level consideration of literature, to draw a somewhat unfair analogy).

The product should support enhanced "procedural literacy" and insight into technology, with a higher order goal of closing the gaps and psychological leaps between the three component steps of software development:

concept < > logic < > symbolic notation


Discover / Explore / Learn / Reflect / Extend

For Exclusion

My ideas aren't quite narrow enough yet to clearly indicate this, but broadly: not developing a new programming language or any such radical solution, nor a fully-featured new development tool. I'm seeking a limited scope to encapsulate some of these much larger ideas in the hope that through explanation and demonstration the audience can extrapolate.