Lately I’ve been thinking hard about creatives’ role in a world of artificial intelligence, but what about the reverse: AI’s role in creative pursuits? Alexis Madrigal reports for The Atlantic on SketchRNN, one of several Google efforts to teach machines to make art:

The implicit argument is that when humans draw, they make abstractions of the world. They sketch the generalized concept of “pig,” not any particular animal. That is to say, there is a connection between how our brains store “pigness” and how we draw pigs. Learn how to draw pigs and maybe you learn something about the human ability to synthesize pigness. …

What can SketchRNN learn? Below is a network trained on fire trucks generating new fire trucks. Inside the model, there is a variable called “temperature,” which allows the researchers to crank the randomness of the output up or down. In the following images, bluer images have the temperature turned down, redder ones are “hotter.”

[Image: SketchRNN’s AI-generated sketches of fire trucks at varying temperatures]

[…]

What [project leader Doug] Eck finds fascinating about sketches is that they contain so much with so little information. “You draw a smiley face and it’s just a few strokes,” he said, strokes that look nothing like the pixel-by-pixel photographic representation of a face. And yet any 3-year-old could tell you a face was a face, and if it was happy or sad. Eck sees it as a kind of compression, an encoding that SketchRNN decodes and then can re-encode at will.

In other words, sketches might teach AI portable, human-understandable symbols for abstract concepts: a shorthand description of the world. It strikes me that all creative pursuits, including design and language, traffic in similar symbols and shorthands. I’m impatient to see how this branch of AI develops toward understanding (and creating) the interfaces and interactions that designers produce every day.
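Eck’s compression framing is close to literal: SketchRNN is an encoder-decoder model, and the “temperature” variable from the fire-truck grid controls how far the decoder strays from what was encoded. Here’s a deliberately toy sketch of that loop in Python; the function names and the crude summary statistics are my own illustration, not the actual Magenta code:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def encode(strokes):
    """Toy stand-in for the encoder: collapse a stroke sequence of
    (dx, dy) pen offsets into a small latent vector. SketchRNN learns
    this with an RNN; summary statistics suffice for the demo."""
    strokes = np.asarray(strokes, dtype=float)
    return np.concatenate([strokes.mean(axis=0), strokes.std(axis=0)])

def decode(latent, n_points=5, temperature=0.5):
    """Toy stand-in for the decoder: sample new (dx, dy) offsets
    around the latent. Temperature scales the sampling noise, so low
    values stay close to the encoded sketch and high values wander
    (the bluer/redder knob from the fire-truck grid above)."""
    mean, spread = latent[:2], latent[2:]
    noise = rng.standard_normal((n_points, 2)) * spread * temperature
    return mean + noise

mouth = [(1.0, 0.4), (1.0, 0.0), (1.0, -0.4)]  # a few smiley-face strokes
z = encode(mouth)                              # “pigness,” but for mouths

for t in (0.1, 1.0):
    print(f"temperature={t}:")
    print(decode(z, temperature=t).round(2))
```

Run it and the low-temperature samples hug the encoded stroke while the high-temperature ones scatter; the real model does the same dance over a learned mixture of Gaussians rather than crude summary statistics.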

At the moment, this is the stuff of the research lab. But other flavors are starting to emerge in consumer products, too. Apple has been training iOS to anticipate strokes in sketches and handwriting to make writing with Apple Pencil feel buttery smooth. In BuzzFeed’s overview of iPad updates, John Paczkowski reports:

Meanwhile, the Apple Pencil’s latency — that slight lag you get when drawing — has been reduced to the point where it’s virtually imperceptible; Apple says it’s just 20 milliseconds. And since Apple is so intensely focused on capturing the experience of putting pen to paper, it’s doing additional work in the background to remove the lag entirely with machine learning–based algorithms designed to predict where a Pencil is headed next.

“We actually schedule the next frame for where we think the Pencil’s going to be, so it draws it right when you get there, instead of right after you have been there,” Schiller says.
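The quote doesn’t spell out the model, but the underlying trick is easy to illustrate: keep a short history of pen samples, estimate where the stroke is headed, and draw that point speculatively, one frame early. Here’s a toy sketch, with plain linear extrapolation standing in for Apple’s machine-learned predictor; every name and number here is my own invention:

```python
from collections import deque

class StrokePredictor:
    """Toy frame-ahead stylus prediction: extrapolate recent pen
    samples forward by one display frame so the ink can be drawn
    where the pen is about to be, not where it just was."""

    def __init__(self, frame_interval=1 / 120):
        self.frame_interval = frame_interval  # assume a 120 Hz display
        self.samples = deque(maxlen=3)        # recent (t, x, y) samples

    def add_sample(self, t, x, y):
        self.samples.append((t, x, y))

    def predict_next(self):
        """Guess the pen position one frame after the latest sample."""
        if len(self.samples) < 2:
            return None  # not enough history to estimate velocity
        (t0, x0, y0), (t1, x1, y1) = self.samples[-2], self.samples[-1]
        dt = t1 - t0
        if dt <= 0:
            return (x1, y1)
        vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
        return (x1 + vx * self.frame_interval,
                y1 + vy * self.frame_interval)

# A few pen samples at roughly 240 Hz, moving right and drifting down.
predictor = StrokePredictor()
for i, (x, y) in enumerate([(0.0, 0.0), (1.0, -0.1), (2.0, -0.2)]):
    predictor.add_sample(i / 240, x, y)
print(predictor.predict_next())  # speculative point, drawn a frame early
```

Naive extrapolation overshoots on curves and reversals, which is presumably where a learned model earns its keep; a real implementation also has to reconcile the speculative point with the true one once the pen actually arrives.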

While Google’s SketchRNN team chases the lofty goal of understanding how humans communicate in symbols, Apple is mastering the commonplace but useful skill of learning how you write and draw. Machines may not yet be capable of creative works of their own, but they’re already learning to understand and anticipate ours.
