Google is rolling out tools to flag upsetting, offensive and false content in search results


Google is giving thousands of contractors who normally evaluate search results an additional task: helping the company downrank blatantly upsetting, offensive, and false content. Search Engine Land has a thorough explainer on the updated guidelines used by Google’s quality raters. Those are the people who rate the usefulness and accuracy of search results to keep improving the company’s search algorithms, which ultimately determine what ranks where.

Raters now have access to a new “Upsetting-Offensive” flag which Google says should be used in the following instances:

Content that promotes hate or violence against a group of people based on criteria including (but not limited to) race or ethnicity, religion, gender, nationality or citizenship, disability, age, sexual orientation, or veteran status.
Content with racial slurs or extremely offensive terminology.
Graphic violence, including animal cruelty or child abuse.
Explicit how-to information about harmful activities (e.g., how-tos on human trafficking or violent assault).
Other types of content which users in your locale would find extremely upsetting or offensive.
Just being upsetting isn’t enough for raters to flag search results. Google points to an example regarding a “Holocaust history” search: one result is a Holocaust denial site, which the company says deserves the flag. The other, a website from The History Channel, might be upsetting due to subject matter but is a “factually accurate source of historical information” and doesn’t promote the hateful content mentioned above.
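To make the distinction concrete, here is a minimal sketch in Python of the rule the guidelines describe: a result earns the flag for promoting hate, containing slurs, or depicting graphic violence, not merely for covering an upsetting topic. The data structure and field names are illustrative inventions, not Google’s actual rater tooling.

```python
# A hypothetical model of a rater's judgment; invented for illustration.
from dataclasses import dataclass

@dataclass
class RatedResult:
    url: str
    promotes_hate_or_violence: bool  # e.g. a Holocaust denial site
    contains_slurs: bool
    depicts_graphic_violence: bool
    topic_is_upsetting: bool         # e.g. factual Holocaust history

def upsetting_offensive_flag(r: RatedResult) -> bool:
    """Per the guideline: promotion of hate, slurs, or graphic violence
    earns the flag; an upsetting topic alone does not."""
    return (r.promotes_hate_or_violence
            or r.contains_slurs
            or r.depicts_graphic_violence)

# The two "Holocaust history" examples from the guidelines:
denial_site = RatedResult("https://example.org/denial", True, False, False, True)
history_site = RatedResult("https://example.org/history", False, False, False, True)
assert upsetting_offensive_flag(denial_site) is True
assert upsetting_offensive_flag(history_site) is False
```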

Search Engine Land notes that simply being hit with the Upsetting-Offensive flag won’t immediately demote or downrank search results. Instead, those flags are used as data points for Google’s employees as they continue to iterate on search algorithms. Eventually the algorithm will learn to flag upsetting and factually inaccurate content on its own, which would impact search rankings in cases where Google believes users are after “general learning.” But it’s not censoring or hiding anything; if someone’s specifically searching for, say, a white nationalist website by name, Google will still deliver it at the top of results. “We will see how some of this works out. I’ll be honest. We’re learning as we go,” Paul Haahr, a senior executive on the search team, told Search Engine Land.
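As a rough illustration of that behavior (a hypothetical sketch, not Google’s ranking code), a learned offensiveness score might demote a result only for broad informational queries while leaving navigational queries, where the user names the site, untouched:

```python
# Hypothetical intent-gated demotion; all names here are illustrative.
def adjusted_score(base_score: float,
                   offensive_score: float,
                   query_is_navigational: bool) -> float:
    """Demote by the model's offensiveness estimate unless the user
    asked for the site by name (navigational intent)."""
    if query_is_navigational:
        return base_score  # deliver exactly what was asked for
    return base_score * (1.0 - offensive_score)

# A flagged site keeps its rank when searched for by name...
print(adjusted_score(0.9, offensive_score=0.8, query_is_navigational=True))   # 0.9
# ...but drops sharply for a general "holocaust history" query.
print(adjusted_score(0.9, offensive_score=0.8, query_is_navigational=False))  # ~0.18
```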

Google has already switched some raters over to the new guidelines and has used the resulting data to improve search rankings. But the company’s Google Home speaker is still spouting off idiotic, untrue answers to certain questions, and featured snippets at the top of web results continue to occasionally surface bad info as well.
10,000 contractors told to flag ‘upsetting-offensive’ content after months of criticism over hate speech, misinformation and fake news in search results
Google’s ‘quality raters’ are a little-known corps of worldwide contractors that Google uses to assess the quality of its systems. Photograph: Yui Mok/PA
Alex Hern (@alexhern)
Wednesday 15 March 2017 09.27 EDT. Last modified on Wednesday 15 March 2017 10.07 EDT
Google is using a 10,000-strong army of independent contractors to flag “offensive or upsetting” content, in order to ensure that queries like “did the Holocaust happen” don’t push users to misinformation, propaganda and hate speech.

The review of search terms is being done by the company’s “quality raters”, a little-known corps of worldwide contractors that Google uses to assess the quality of its systems. The raters are given searches based on real queries to conduct, and are asked to score the results on whether they meet the needs of users.

These contractors, introduced to the company’s review process in 2013, work from a huge manual describing every potential problem they could find with a given search query: whether or not it meets the user’s expectations, whether the result offered is low or high quality, and whether it’s spam, porn or illegal.

In a new update to the rating system, rolled out on Tuesday, Google introduced another flag raters could use: the “upsetting-offensive” mark. Although the company did not cite a specific reason for the update, the move comes three months after the Guardian and Observer began a series of stories showing how the search engine promotes extremist content.

One story in particular highlighted how a search for “did the Holocaust happen” returned, as its top result, a link to the white supremacist forum Stormfront, explaining how to promote Holocaust denial to others.

That exact search result is now included as one of the examples Google uses to train its contractors on how and when to mark pages as “upsetting-offensive”.

Detailing why a result for “Holocaust history” returning a link to Stormfront should be flagged as problematic, the document explains: “This result is a discussion of how to convince others that the Holocaust never happened. Because of the direct relationship between Holocaust denial and antisemitism, many people would consider it offensive.”

By contrast, the same search query returning a result for the History Channel should not get the upsetting-offensive flag, even if users do find the topic of the Holocaust upsetting. “While the Holocaust itself is a potentially upsetting topic for some, this result is a factually accurate source of historical information,” the manual explains. “Furthermore, the page does not exist to promote hate or violence against a group of people, contain racial slurs, or depict graphic violence.”

Other examples given in the manual for the flag are a query for “racism against blacks” returning a page for the white supremacist blog Daily Stormer, and a query for “Islam” returning a result linking to far-right US activist Jan Morgan’s website.

Example results provided to quality raters in Google’s manual. Photograph: Google
Even before the specific introduction of the “upsetting-offensive” marker, many of these results would have been ranked poorly by quality raters for other reasons. Some of the pages, for instance, meet Google’s description of “low quality” content, due to the lack of expertise and poor reputation of the websites. They also rank poorly on the company’s “Needs Met” scale, since a user searching for the queries in question would be unlikely to actually want the results offered.
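For illustration, here is a hedged sketch of the separate dimensions a rater records. The scale names come from Google’s public rater guidelines; the record structure itself is invented for this example:

```python
# Rating dimensions from the public rater guidelines; the record is illustrative.
from enum import Enum

class NeedsMet(Enum):
    FAILS_TO_MEET = 0
    SLIGHTLY_MEETS = 1
    MODERATELY_MEETS = 2
    HIGHLY_MEETS = 3
    FULLY_MEETS = 4

class PageQuality(Enum):
    LOWEST = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    HIGHEST = 4

# A white-supremacist blog surfacing for "racism against blacks"
# would plausibly score poorly on all three dimensions at once:
rating = {
    "needs_met": NeedsMet.FAILS_TO_MEET,   # not what the user wanted
    "page_quality": PageQuality.LOWEST,    # no expertise, poor reputation
    "upsetting_offensive": True,           # the new flag
}
print(rating)
```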

Google declined to comment on the new guidelines, but search engineer Paul Haahr told industry blog Search Engine Land: “We will see how some of this works out. I’ll be honest. We’re learning as we go … We’ve been very pleased with what raters give us in general. We’ve only been able to improve ranking as much as we have over the years because we have this really strong rater programme that gives us real feedback on what we’re doing.”

The raters’ rankings do not directly feed back into search results, however. Instead, the data collected is used by Google to help judge the success of algorithm changes, and is also part of the corpus used to train its machine-learning systems.

Danny Sullivan, editor of Search Engine Land, said: “The results that quality raters flag is used as ‘training data’ for Google’s human coders who write search algorithms, as well as for its machine-learning systems. Basically, content of this nature is used to help Google figure out how to automatically identify upsetting or offensive content in general.

“In other words, being flagged as ‘upsetting-offensive’ by a quality rater does not actually mean that a page or site will be identified this way in Google’s actual search engine. Instead, it’s data that Google uses so that its search algorithms can automatically spot pages generally that should be flagged.”
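As a toy illustration of what Sullivan describes (emphatically not Google’s actual pipeline), rater flags can serve as labels for training a simple text classifier that generalizes those judgments to unseen pages. This sketch uses scikit-learn, and the tiny dataset is invented:

```python
# Train a toy classifier on rater-style labels; the data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

pages = [
    "how to convince others the holocaust never happened",  # rater-flagged
    "timeline of world war two and the holocaust",          # not flagged
    "racial slurs and hate directed at a minority group",   # rater-flagged
    "museum exhibit on the history of the holocaust",       # not flagged
]
flags = [1, 0, 1, 0]  # 1 = rater applied the upsetting-offensive flag

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(pages, flags)

# A page close to the flagged examples is predicted flaggable.
print(model.predict(["page on how to convince others the holocaust never happened"]))
```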

While the new ranking option addresses one particular problem highlighted by the Guardian and Observer, Google’s failure to keep fake news and propaganda off the top of search results is broader than simply promoting upsetting or offensive content.

Google has also been accused of spreading “fake news” thanks to a feature known as “snippets in search”, which algorithmically pulls specific answers for queries from the top search results. For a number of searches, such as “is Obama planning a coup”, Google was instead pulling out answers from extremely questionable sites, leading to the search engine claiming in its own voice that “Obama may be planning a communist coup d’état”.

The same feature also lied to users about the time required to caramelise onions, pulling a quote that says it takes “about five minutes” from a piece which explicitly argues that it in fact takes more than half an hour.

Shortly after each of these stories was published, the search results in question were updated to fix the errors.

Google is making a new push to eliminate offensive search results such as those that surfaced the US neo-Nazi site Stormfront in response to queries about the Holocaust. As Search Engine Land noticed, the company has revised its guide on how to assess search result quality for around 10,000 of its "quality rater" contractors. That includes a new "upsetting-offensive" content flag covering the promotion of violence or hate against minorities and other groups, racial slurs, graphic violence and how-to information on harmful activities like human trafficking.

On a search for "holocaust history," for instance, Google instructs raters on how to handle two different results. The first shows a post from said racist site Stormfront on Holocaust denial, something that's actually a crime in over 20 countries. Google tells raters to mark that with the "Upsetting-Offensive" flag "because of the direct relationship between Holocaust denial and anti-Semitism."



Google says the second example from The History Channel doesn't require the "Upsetting-Offensive" flag, though. Even though it's clearly an upsetting topic, "this result is a factually accurate source of historical information" that, unlike Stormfront, "does not exist to promote hate or violence against a group of people," the document states.

Once the raters flag a result, nothing happens immediately. Rather, the flags are used by Google's coding team and, in turn, its AI algorithms to improve the search engine overall. Once all that kicks in, someone searching for history about the Holocaust will be less likely to run into a denial site, if things go as planned. However, determined searchers will still find such results if they specifically seek them out by naming a site, Google points out.

The company has used the new guidelines with select raters and updated its algorithm late last year. Now, a query like "did the holocaust happen" no longer returns Stormfront as the top result and instead surfaces pages from the United States Holocaust Memorial Museum. Other queries still turn up questionable results, but Google told Search Engine Land it's "pleased" with the raters' work so far. "We will see how some of this works out," said Google engineer Paul Haahr. "We're learning as we go."
