making things that make things

{ A THESIS JOURNAL BY SAM WANDER }

Evaluating.

We were asked to create two lists for evaluating our ideas.

  • The user needs and design criteria uncovered in the course of your research to date.
  • A list of your personal criteria - what are attributes that are important to you?

As I shift towards a very different line of enquiry, the first part is tough, but a first attempt follows.

User needs & design criteria

  • Empowers people to become more than just consumers of technology
  • Encourages active participation in technology culture
  • Simultaneously educational and creative
  • Progressively disclosing design (doesn't over-simplify or over-complicate)
  • Offers clear value, making it 'worth the effort' to use / engage
  • Creates a sense of opportunity and inspiration

Personal criteria

  • Aligns with and expresses my values on human-technology relationships
  • Should be a clear step towards an explicit larger vision
  • Important to have made something tangible rather than just a 'fiction'
  • Process should have deepened my understanding of the subject and sharpened my values and perspective
  • A meaningful contribution to the field that provokes further discussion

"A totally different approach."

The class was asked to explore "a totally different approach" to their problem space. As it happens, I missed the class. But brewing in my mind during the winter break was a different – but tangentially related – concern, and in the last few days that idea has started to bark for my attention.

The inception of my work was a response to a low level anxiety about the rapid shift to mobile and smart devices. As groundbreaking as many of these devices are, I became worried about their affordances for creativity. I grew up tinkering on a PC, I looked nervously behind the curtain at the system files, I taught myself rudimentary 3D design and animation. If your primary computer is a phone or tablet, how do you do these things?

My fundamental question is this: as the PC industry declines in favor of mobile, what does this mean for creativity? From child's play, via adult creative exploration, to professional production — are we making things that are more or less capable of making things?

This led me down the path of thinking about input devices, as input felt like one of the blocks to creativity on a tablet (namely screen occlusion, lack of delicate control, and the inability to develop muscle memory). I extended this to concerns with the desktop, and the wider inhumanity of optimizing for our visual sense alone.

But I do think there is a deeper problem.

In this post at the start of the year I drew together some quotes.

  • Licklider wanted "...to enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs"

  • Maeda warns that "The true skill of a digital designer is the practiced art of computer programming, or computation."

  • Haverbeke determines "For open-ended interfaces, such as instructing the computer to perform arbitrary tasks, we’ve had more luck with an approach that makes use of our talent for language: teaching the machine a language."

The trend towards slick but obstinately inflexible, user-friendly but simplified, closed or otherwise gated systems and services has helped millions more people become regular computer users. In many ways this was the vision of the PC pioneers. But this trend has abstracted most notions of 'computation' so fundamentally that computers – and particularly the newer classes of mobile or smart device – have truly become 'appliances'.

Consider this illuminating article preceding the iPad's release, which laid out pioneering Macintosh designer Jef Raskin's vision that 'an information appliance would be a computing device with one single purpose—like a toaster makes toast, and a microwave oven heats up food. This gadget would be so easy to use that anyone would be able to grab it, and start playing with it right away, without any training whatsoever. It would have the right number of buttons, in the right position, with the right software.' One single traditional computing device couldn't achieve this simplicity, but if the screen could morph to dynamically display different buttons it could...

600 million iPhones and 300 million iPads later it seems like that idea has legs.

One of the most inspiring books I've read in pursuit of my thesis thus far is Seymour Papert's Mindstorms.

In the book, from 1980, Papert outlines his philosophy of using computation to help children learn, with examples from his research. Instead of being told they were simply wrong, struggling young math students could explore problems through trial and error. This opens up new ways of critical thinking instead of closing them down. Any programmer knows the pain and joy of problem-solving. The traditional view that there is a right and a wrong answer in math (which in turn ensures that poor-performing children assume they just don't get it) is painfully narrow-minded – as it would be to call a programmer poor for not nailing a problem the first time.

"The child programs the computer. And in teaching the computer how to think, children embark on an exploration about how they themselves think. The experience can be heady: Thinking about thinking turns the child into an epistemologist, an experience not even shared by most adults."

It's a wonderful book.

As a new programmer, I wrote last year about the joys of the experience. It was a revelation that took me 29 years to discover.

At this point I anticipate the following: OK, but kids are being encouraged to learn code today like never before. There are code clubs, apps like Hopscotch and Scratch, online courses. And adults too – it's the new cool thing to learn to code so you can make an app. What's the problem?

It's subtler than this. But perhaps deeper. And it's going to take me a few more posts to explain. I'm trying to bring into focus a vision of procedural literacy and empowerment that makes the technological world more participatory and malleable to the "casual programmer". I came across the latter term in frog design's 2015 Tech Trends: "A shift is underway in software and service design where the command and control of this complex connected world around us will rely on “casual programming” experiences — giving every day, non-programming people the tools, services, and APIs usually reserved for the hackers and technology elite in friendly and accessible forms."

As I lean further and further into this new line of enquiry, I want to remind myself that the problem space hasn't really changed. I'm still concerned with empowering individuals to be creative in an increasingly closed, mobile-centric world. What's changed is the types of creativity and thinking I'm concerned with, and the audience – which I'd argue is many times larger (and hopefully even more worthy of my pursuit for that reason).

Tangible, Embedded and Embodied Interaction.

I just got back from TEI 15 at Stanford University.

The Association for Computing Machinery's conference on Tangible, Embedded and Embodied Interaction addresses issues of human-computer interaction, design, interactive art, user experience, tools and technologies. A strong focus of the conference is how computing can bridge atoms and bits into cohesive interactive systems.

It seemed like a perfect fit for my field of research, so – despite being quite possibly the only representative from a Design rather than Computer Science or Research program – I was determined to attend.

The majority of the conference consisted of paper presentations from Masters and PhD students from around the world.

Here are some of the projects that stood out, and felt most relevant to me:

MagnID: Tracking Multiple Magnetic Tokens

While many smartphones have magnetometers (see this other interesting demo I came across recently), they are only capable of tracking one nearby magnet. This project overcomes that limitation with a set of tokens, each containing a motor that rotates a magnet at a different speed. The rotation creates "sinusoidal magnetic fields" that can be isolated by frequency, which also gives the sensor access to the distance of each token.

It's an interesting concept, but the tokens each need battery power and a working motor, which makes them a little impractical, and the precision of the measurements leaves much to be desired. For the kinds of application they demonstrate, however, it's a novel – and sophisticated – approach to connecting the virtual to the physical.
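
To get a feel for how the frequency separation might work in principle, here's a minimal Python sketch. It simulates a single magnetometer axis picking up two tokens spinning at different speeds, then isolates each token's contribution from the spectrum. All the numbers (sample rate, rotation frequencies, amplitudes) are illustrative assumptions on my part, not values from the MagnID paper.

    import numpy as np

    # Illustrative values only – not from the MagnID paper.
    FS = 200.0                   # magnetometer sample rate (Hz)
    TOKEN_FREQS = [7.0, 13.0]    # each token's magnet spins at a distinct rate (Hz)
    DURATION = 2.0               # seconds of sensor data

    t = np.arange(0, DURATION, 1.0 / FS)

    # Simulate the combined field: each spinning magnet contributes a sinusoid
    # whose amplitude falls off with distance; add a little noise for realism.
    amplitudes = [1.0, 0.35]     # stand-ins for a "near" and a "far" token
    signal = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amplitudes, TOKEN_FREQS))
    signal += 0.05 * np.random.randn(len(t))

    # Isolate each token by reading the spectrum at its known rotation frequency.
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(t), d=1.0 / FS)

    for f in TOKEN_FREQS:
        k = np.argmin(np.abs(freqs - f))
        strength = 2 * np.abs(spectrum[k]) / len(t)   # roughly the sinusoid's amplitude
        print(f"Token at {f:.0f} Hz -> estimated field strength {strength:.2f}")

The strength of each isolated component is what, in the real system, gives "access to the distances of each token" – and also why sensor noise and wobbly rotation speeds would eat into the precision.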

SPATA: Spatio-Tangible Tools for Fabrication-Aware Design

Eerily similar to this concept from Unfold, though far more realized and technically developed, this project seeks to connect the virtual to the physical in the domain of 3D design and digital fabrication.

The physical tools used when designing new objects for digital fabrication are mature, yet disconnected from their virtual accompaniments. SPATA is the digital adaptation of two spatial measurement tools, that explores their closer integration into virtual design environments. We adapt two of the traditional measurement tools: calipers and protractors. Both tools can measure, transfer, and present size and angle. Their close integration into different design environments makes tasks more fluid and convenient.

It's a compelling and well-executed vision that seems to demonstrate real utility. I was intrigued that they used a motorized fader, given my exploration of the device as a means to manage both analogue input and output.

THAW: Hybrid Interactions with Phones on Computer Screens

An imaginative investigation of congruence between devices from MIT's Tangible Media Group, with some illuminating application ideas.

THAW is a novel interaction system that allows a collocated large display and small handheld devices to seamlessly work together. The smartphone acts both as a physical interface and as an additional graphics layer for near-surface interaction on a computer screen. Our system enables accurate position tracking of a smartphone placed on or over any screen by displaying a 2D color pattern that is captured using the smartphone’s back-facing camera. The proposed technique can be implemented on existing devices without the need for additional hardware.

The method for detecting the location of the phone on the screen is really smart. A color gradient is overlaid on the screen, and the phone's camera reads the precise colors beneath it to determine its position. The overlay shrinks to a small circle under the camera as the system gains confidence; for the most part the human eye will miss the split second when the gradient fills the entire screen for the first detection attempt.

Obviously this demonstration is using custom software on both the laptop and phone, but it's impressive that no additional hardware was required.
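
As a thought experiment, the core inversion step can be sketched in a few lines of Python. Suppose – purely as an assumption on my part, since THAW's actual pattern and tracking pipeline are more sophisticated – that the on-screen gradient encodes x in the red channel and y in the green channel; a single color sample from the phone's camera then maps straight back to a screen position.

    def position_from_color(r, g, screen_w=1920, screen_h=1080):
        """Invert a hypothetical gradient where red encodes x and green encodes y.

        Assumes 8-bit channel values (0-255) sampled by the phone's rear camera.
        """
        x = (r / 255.0) * screen_w
        y = (g / 255.0) * screen_h
        return x, y

    # A sample of (128, 64) would put the phone near the horizontal center,
    # about a quarter of the way down the screen.
    print(position_from_color(128, 64))

A single 8-bit sample only resolves 256 positions per axis, which is presumably part of why the full-screen gradient gets refined down to a small local pattern once tracking has converged.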

I also took part in an interesting workshop focused on abstracting expressive movements and re-imagining them as mechanics in low-fidelity foam core prototypes. We began by building a 'Master and Slave' device in which six servo motors followed the positions of six correspondingly placed potentiometers. Following that we built a drawer from foam core and attempted to make expressive movements with its (servo-controlled) opening and closing motion.
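
For reference, the mapping in that 'Master and Slave' rig is simple enough to sketch. Below is a rough Python/pyFirmata version under assumptions of my own (an Arduino on /dev/ttyACM0, pots on A0–A5, servos on the PWM pins) – the workshop hardware ran its own setup, so treat this purely as an illustration of the one-to-one pot-to-servo mapping.

    import time
    from pyfirmata import Arduino, util

    # Assumed wiring: pots on A0-A5, servos on PWM pins 3, 5, 6, 9, 10, 11.
    POT_PINS = [f"a:{i}:i" for i in range(6)]
    SERVO_PINS = [f"d:{p}:s" for p in (3, 5, 6, 9, 10, 11)]

    board = Arduino("/dev/ttyACM0")   # serial port is an assumption
    it = util.Iterator(board)         # background thread that delivers analog readings
    it.start()

    pots = [board.get_pin(p) for p in POT_PINS]
    servos = [board.get_pin(p) for p in SERVO_PINS]
    for pot in pots:
        pot.enable_reporting()

    while True:
        for pot, servo in zip(pots, servos):
            value = pot.read()        # 0.0-1.0, or None before the first report arrives
            if value is not None:
                servo.write(int(value * 180))   # mirror the pot position as a servo angle
        time.sleep(0.02)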

An enlightening few days. Oh, and I briefly got to meet the inimitable Hiroshi Ishii.

Man-Computer Symbiosis.

Three quotes, many years apart, that I want to draw together:

Man-computer symbiosis is an expected development in cooperative interaction between men and electronic computers. It will involve very close coupling between the human and the electronic members of the partnership. The main aims are 1) to let computers facilitate formulative thinking as they now facilitate the solution of formulated problems, and 2) to enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs. In the anticipated symbiotic partnership, men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking.

J. C. R. Licklider, Man-Computer Symbiosis [1960]

In place of motor skills, today's digital designer must develop an awareness of the many capabilities and sequences of interactions in the continuously growing set of pre-packaged digital tools. In other words, skill in the digital sense is nothing more than knowledge, and the reality is that we implicitly glorify rote memorization as the basis of skill for a digital designer.

The true skill of a digital designer is the practiced art of computer programming, or computation.

John Maeda, Design By Numbers [1999]

We’ve found two effective ways of bridging the communication gap between us, squishy biological organisms with a talent for social and spatial reasoning, and computers, unfeeling manipulators of meaningless data. The first is to appeal to our sense of the physical world and build interfaces that mimic that world and allow us to manipulate shapes on a screen with our fingers. This works very well for casual machine interaction.

But we have not yet found a good way to use the point-and-click approach to communicate things to the computer that the designer of the interface did not anticipate. For open-ended interfaces, such as instructing the computer to perform arbitrary tasks, we’ve had more luck with an approach that makes use of our talent for language: teaching the machine a language.

Marijn Haverbeke, Eloquent JavaScript [2014]

All of these thoughts recognize the power that comes with an intimate, empathetic relationship between a human and a computer. Licklider anticipated the value of augmenting intellect and capabilities; the others – coming much later – take views on existing creative computation. For Maeda, familiarity and skill as a user is not enough; to really create we must understand our materials. We would say this of a sculptor, so why not of someone whose material is pixels? For Haverbeke, a distinction is drawn between graphical user experience and the unlimited potential of written programming (a distinction that would make Bret Victor weep).

What fascinates me, and led me to draw these quotes together, is that they all insightfully conceive of computers as tools yet think only about what we see and read/write. When we think about tools, almost everything we think of is held in or manipulated by the hand, yet computing exists abstractly and almost post-physically or post-spatially.

Not to belittle this great thinking, of course – simply an observation about something the industry and culture have consistently tended towards, as true in 1960 as it is in 2015.

Survey.

I've been keeping my eye on a few interesting projects of late that relate to my investigation.

Palette

A successful Kickstarter project, Palette is described by its makers as 'a freeform hardware interface'. Over the summer I began thinking about building a set of dials and sliders to augment the keyboard and mouse, so finding this was a pleasant – validating – surprise. They understand (and want to optimize for) the way physical controls offer precision and utilize muscle memory. Their effort to make the inputs as versatile as possible – letting the user mix and match components and set their uses – seems smart, though I wonder if it leaves too much onus on the user to figure out how to optimize their workflow. Difficult to say without using the setup – I pre-ordered months ago, so I'm looking forward to trying it out.

Modulares Interface B.A.

An ambitious project from German student Florian Born that takes the limitations of touchscreens to an extreme, building a re-organizable series of dials, sliders and buttons that sit above an iPad and interact with the screen itself using capacitive touch. It's boldly engineered, if a little... cyberpunk.

The customization capabilities make good use of the iPad's dynamic display – control values are visible through little windows. The example demonstrates the device being used to control Ableton Live music software, but it's a little unclear how the controls are mapped from the setup to the laptop software. This mapping is important.

A really intriguing and timely inspiration.

Flow

Flow is an Indiegogo project from a small team in Germany. Here's how they describe it:

We work on graphic design, video editing or CAD on a daily basis. Keyboard and mouse are great but they are far from giving you the same sensitivity and abilities as your hand.

We need a tool that gives us flexible shortcuts and perfect control, a tool that makes the things we love fast, precise, intuitive and fun.

I could almost have written this myself. The team are definitely examining a very similar problem space, and the result is pretty fascinating.

Currently the device is intended to support the following interactions:

  • Ring (2): Left and Right. It can also detect how fast you turn.
  • Buttons (5): Left, Right, Middle, Up, Down
  • Capacitive Touch (4): Swipe Left, Right, Up, Down. You can also use certain areas of the touch surface as digital inputs
  • Gesture Recognition (6): Left, Right, Up, Down, High, Low. It can also detect how fast you wave your hand.

Not mentioned here is the precision of the ring. There are apparently 3600 values in a 360-degree rotation – a resolution of 0.1 degrees – which gets at the kind of nuanced control I've been advocating (and which I feel is distinctly lacking on touchscreens).

Some thoughts on why this should matter to creatives here, and a nice piece on how the team went about building it here. Their campaign is now superbly overfunded, so I look forward to seeing the product develop (and of course, that overfunding seems a good indication there is unmet demand in this space).