I missed this a few weeks back. At Search Engine Land, Danny Sullivan reported that Google is empowering its 10,000 human reviewers to start flagging offensive content, an effort to get a handle on hate speech in search results. The gambit: with a little human help from these “quality raters,” the algorithm can learn to identify what I call hostile information zones.

Sullivan writes:

The results that quality raters flag are used as “training data” for Google’s human coders who write search algorithms, as well as for its machine learning systems. Basically, content of this nature is used to help Google figure out how to automatically identify upsetting or offensive content in general. …

Google told Search Engine Land that it has already been testing these new guidelines with a subset of its quality raters and used that data as part of a ranking change back in December. That was aimed at reducing offensive content that was appearing for searches such as “did the Holocaust happen.”

The results for that particular search have certainly improved. In part, the ranking change helped. In part, all the new content that appeared in response to outrage over those search results had an impact.

“We will see how some of this works out. I’ll be honest. We’re learning as we go,” [Google engineer Paul Haahr] said.
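To make the “training data” idea a bit more concrete: the flags raters apply become labeled examples that a classifier can learn from, so that similar pages can be scored automatically later. Here’s a minimal sketch of that loop — the example pages, labels, and scikit-learn setup are my own assumptions for illustration, not anything Google has disclosed about its actual ranking systems.

```python
# Minimal sketch (not Google's pipeline): human rater flags become labels
# for a toy text classifier that scores new pages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical rater-labeled pages: 1 = flagged "upsetting-offensive", 0 = not flagged.
pages = [
    "historical archive with primary-source documents",
    "page denying a well-documented atrocity",
    "encyclopedia entry summarizing mainstream scholarship",
    "site promoting hateful conspiracy theories",
]
labels = [0, 1, 0, 1]

# TF-IDF features plus logistic regression: a stand-in for whatever signals
# a real ranking system would actually learn from rater data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(pages, labels)

# Score an unseen page. A real system would fold a score like this into
# ranking rather than hard-filtering results.
score = model.predict_proba(["revisionist page claiming the event never happened"])[0][1]
print(f"offensiveness score: {score:.2f}")
```

The point of the sketch is only the shape of the feedback loop: raters label, the model generalizes, and the ranking change Sullivan describes is one downstream use of those learned scores.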
