Ellen Huet, writing for Bloomberg, peeks in on the work lives of the people who backstop the bots by reviewing answers and frequently stepping in to provide their own. They are the “humans pretending to be robots pretending to be humans.”

Huet talked to people who filled this role at two services that automate calendar scheduling, X.ai and Clara, and it doesn’t sound like the world’s most fulfilling work:

Calvin said he sometimes sat in front of a computer for 12 hours a day, clicking and highlighting phrases. “It was either really boring or incredibly frustrating,” he said. “It was a weird combination of the exact same thing over and over again and really frustrating single cases of a person demanding something we couldn’t provide.”

[…]

As another former X.ai trainer put it, he wasn’t worried about his job being replaced by a bot. It was so boring he was actually looking forward to not having to do it anymore.

I’m confident that putting people in the bot role is the right way to prototype bot services with very small trial audiences. It lets you hone your understanding of what people actually want and build a good set of training data as well as the voice and tone of the service. But it’s also clear that this kind of work—focusing relentlessly and mind-numbingly on the same narrow micro-interaction—is not meant for long-term job roles.

This is why people are trying to automate this stuff in the first place. The risk is that, during the transition, the tedium of modeling this automation will fall heavily and narrowly on a small group who wind up working for the bots, rather than the reverse. How might we avoid making this the future of work?
