Models.

On the wise advice of Paul Pangaro, I started mapping and modeling my domain of interest to begin identifying targets for intervention.

These are first drafts examining the present and near future [I hope to build on them].

First, a model of the types of interaction between people and software, with the Y-axis indicating control, empowerment, technical literacy and critical perspective – both as prerequisites for a person to move upwards and as outcomes of such deeper interactions. It should work both ways.

Consider this model in relation to this snippet of conversation between Alan Kay and Doug Engelbart.

Software Interaction

Next I considered the layers of abstraction involved when people interact with software. The Y-axis indicates two spectra: from object to representation, and from precision to ambiguity. That is, from the physical manifestation of the software in binary and hardware up to its representation as an experienced artifact, and from the precision of that object up to its ambiguity as something interpreted by a person rather than a machine.

Within this model, we see the distinction between the abstractions a user deals with and those a programmer deals with.

Digital Abstraction

Finally I mapped a potential shift in how we instruct computers to do novel things (i.e. program them). While oversimplified, the model is intended to show how a programmer today must take significant mental leaps: from an idea to a series of steps or commands, and from there to an explicit, unambiguous, machine-readable program.

Flattening the Development Process

To reach the goal of enabling more people to participate in software rather than simply consume it (see the first diagram), we would need to close the gaps between these mental leaps. Firstly, perhaps tools and literacy could enable more people to think in the logical, sequential steps that computers require. Much of software development is working out this logic before any code is written, and that step – crucial to understanding and critiquing software – remains alien to most.
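
To make those leaps concrete, here is a minimal sketch in Python. The task and the function name (average_weekly_spend) are invented purely for illustration: the idea is stated in plain language, the logic is worked out as sequential steps, and only then comes the explicit program, which has to spell out details the idea never mentioned.

```python
# Idea (plain language): "Tell me roughly how much I spend per week."

# Logic, worked out before any code is written:
#   1. Collect the amount of each expense.
#   2. Add the amounts together.
#   3. Count how many weeks the expenses cover.
#   4. Divide the total by the number of weeks.

# Explicit, unambiguous program: every step must be spelled out,
# including details the idea never mentioned (types, bad input, rounding).
def average_weekly_spend(expenses: list[float], weeks: int) -> float:
    if weeks <= 0:
        raise ValueError("weeks must be a positive number")
    total = sum(expenses)
    return round(total / weeks, 2)

print(average_weekly_spend([12.50, 40.00, 7.25, 19.99], weeks=2))  # 39.87
```

The gap between the numbered comments and the function body is the gap the diagram is pointing at: the logic is accessible to almost anyone, the notation is not.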

The further step – very much speculative and perhaps far-fetched – is to harness current work on interpreting context and intent and on processing natural language. Consider the progress a service like Google has made in inferring context and intent, and in disambiguation, in the realm of search. It is possible that developing these kinds of next-generation tools has never been a priority because those with the necessary technical skills have (by definition) no shortage of ability in abstract thinking and explicit symbolic notation. In other words, they don't need such tools.
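
As a deliberately crude sketch of what that further step might look like, reusing the toy example above: real context and intent interpretation is far richer than the keyword matching below, which only gestures at the direction of travel.

```python
# Toy illustration: map a loose, ambiguous request onto an explicit command.
# The phrases and the fallback behaviour are invented for this sketch.
def interpret(request: str) -> str:
    request = request.lower()
    if "spend" in request or "expenses" in request:
        if "week" in request:
            return "average_weekly_spend(expenses, weeks)"
        return "sum(expenses)"
    return "unknown intent – ask the person to clarify"

print(interpret("How much do I spend in a week?"))
# -> average_weekly_spend(expenses, weeks)
```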

Plenty more to do, but this has been a useful start in representing recent thinking.