## Why inappropriate predictions happen

We have systems in place designed to automatically catch inappropriate predictions and not show them. However, we process billions of searches per day, which in turn means we show many billions of predictions each day. Our systems aren't perfect, and inappropriate predictions can get through. When we're alerted to these, we strive to remove them quickly.

It's worth noting that while some predictions may seem odd or shocking, or cause a "Who would search for that!" reaction, looking at the actual search results they generate sometimes provides needed context. As we explained earlier this year, the search results themselves may make it clearer in some cases that predictions don't necessarily reflect awful opinions some people may hold, but instead may come from people seeking specific content that isn't problematic.

It's also important to note that predictions aren't search results and don't limit what you can search for. Regardless, even if the context behind a prediction is good, and even if a prediction is infrequent, it's still an issue if the prediction is inappropriate. It's our job to reduce these as much as possible.

## Our latest efforts against inappropriate predictions

To better deal with inappropriate predictions, we launched a feedback tool last year and have been using the data since then to improve our systems. In the coming weeks, expanded criteria covering hate and violence will come into force for policy removals.

Our existing policy protecting groups and individuals against hateful predictions only covers cases involving race, ethnic origin, religion, disability, gender, age, nationality, veteran status, sexual orientation or gender identity. Our expanded policy for Search will cover any case where predictions are reasonably perceived as hateful or prejudiced toward individuals or groups, without reference to particular demographics.

With the greater protections for individuals and groups, there may be exceptions where a compelling public interest allows a prediction to be retained. For groups, predictions might also be retained if there's a clear "attribution of source." For example, predictions for song lyrics or book titles that might be sensitive may appear, but only when combined with words like "lyrics" or "book" or other cues that indicate a specific work is being sought.

As for violence, our policy will expand to cover removal of predictions that seem to advocate, glorify or trivialize violence and atrocities, or that disparage victims.

## How to report inappropriate predictions

Our expanded policies will roll out in the coming weeks. We hope that the new policies, along with other improvements to our systems, will make autocomplete better overall. But with billions of predictions happening each day, we know we won't catch everything that's inappropriate.

Should you spot something, you can report it using the "Report inappropriate predictions" link we launched last year, which appears below the search box on desktop. For those on mobile or using the Google app for Android, long-press on a prediction to get a reporting option. Those using the Google app on iOS can swipe left on a prediction to get the same option.

By the way, if we take action on a reported prediction that violates our policies, we don't just remove that particular prediction; we also expand our action to cover closely related predictions. Doing this work means an inappropriate prediction sometimes won't disappear immediately, but spending a little extra time lets us provide a broader solution. A rough sketch of what such an expansion might look like follows below.
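To make the idea of acting on "closely related predictions" concrete, here is a minimal, hypothetical sketch in Python. Google has not published how this actually works; the normalization rules and the substring-based notion of relatedness below are illustrative assumptions, not the real system.

```python
import re

def normalize(prediction: str) -> str:
    """Lowercase, strip punctuation and collapse whitespace so that
    trivially different phrasings map to the same canonical form."""
    text = prediction.lower()
    text = re.sub(r"[^\w\s]", "", text)
    return re.sub(r"\s+", " ", text).strip()

def expand_block(reported: str, candidates: list[str]) -> set[str]:
    """Given one reported prediction, return it plus any candidate
    predictions that are closely related to it.

    'Closely related' is modeled here, purely as an illustration, as
    containing the reported prediction's normalized form as a substring
    (e.g. the same phrase with punctuation changed or words appended)."""
    core = normalize(reported)
    blocked = {reported}
    for candidate in candidates:
        if core in normalize(candidate):
            blocked.add(candidate)
    return blocked

# Example: blocking one reported phrase also catches near-variants.
predictions = [
    "some offensive phrase",
    "Some offensive phrase!",
    "some offensive phrase examples",
    "an unrelated prediction",
]
print(expand_block("some offensive phrase", predictions))
# Matches everything except "an unrelated prediction".
```

A real system would use a far richer notion of relatedness (spelling variants, paraphrases, translations), but the principle is the same: act on a pattern of predictions rather than a single string.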
## Making predictions richer and more useful

As noted above, our predictions appear in search boxes ranging from desktop to mobile to the Google app. When you're using Google on desktop, you'll typically see up to 10 predictions. The appearance, the order and some of the predictions themselves can vary from one of these surfaces to another.
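In other words, what you see is a display-time cap applied to an already-ranked list of predictions, and the cap depends on the surface. A toy sketch of that idea follows; note the post only states "up to 10" for desktop, so the mobile value here is an assumed placeholder.

```python
# Hypothetical per-surface caps. Only the desktop figure (10) comes from
# the post; the mobile value is an assumed placeholder for illustration.
MAX_PREDICTIONS = {"desktop": 10, "mobile": 5}

def predictions_to_show(ranked: list[str], surface: str) -> list[str]:
    """Trim an already-ranked prediction list to the surface's cap."""
    return ranked[:MAX_PREDICTIONS.get(surface, 10)]

# A ranked list of 12 candidates is cut to 10 on desktop.
print(predictions_to_show([f"query {i}" for i in range(12)], "desktop"))
```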