Facebook asks users to upload face photos to prove they're not bots

Soon, you may be asked to upload a clear photograph of your face to Facebook to prove you aren't a Russian bot. The tech giant is testing a new kind of captcha (a system designed to distinguish humans from machines) to verify that its users aren't bots. Facial recognition technology is being used more widely than ever, including to unlock Apple's new iPhone X, which uses a feature called Face ID. Facebook also recently asked users to upload nude photographs to Facebook Messenger as part of its effort to combat revenge porn, according to The Verge. Facebook told the publication that the images are hashed and then deleted from its servers.
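The hashing step described above can be illustrated with a minimal sketch: the service keeps only a fixed-size fingerprint of the image, which can later be compared against new uploads without retaining the photo itself. The example below is an assumption-laden illustration using a plain cryptographic digest; Facebook has not published the actual hashing scheme it uses.

```python
import hashlib

def hash_image(path: str) -> str:
    """Return a SHA-256 digest of an image file's raw bytes.

    Illustrative only: this shows the general idea of storing a
    fingerprint instead of the image. Facebook's real system is
    not public and likely differs from this sketch.
    """
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Example: keep only the hash, then discard the original file.
# fingerprint = hash_image("photo.jpg")
```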


Facebook will no longer let users exclude racial groups in ad targeting

Facebook is finally disabling the tools that allowed marketers to exclude minorities and other multicultural affinity groups from seeing their ads. The move is a response to growing criticism that its advertising system lets marketers discriminate against racial and ethnic groups. Groups that can no longer be excluded also include religious groups, segments related to the LGBT community, and others. A key part of the effort is stopping ads that discriminate against people. Facebook's advertising systems came under scrutiny after ProPublica found last year that landlords could illegally target their housing ads only to whites.


Facebook is using artificial intelligence to spot users with suicidal thoughts and send them help

Facebook is using AI to scan users' posts for signs they're having suicidal thoughts. The tool won't be active in any European Union nations, where data protection laws prevent companies from profiling users in this way. The AI looks for comments like "are you ok?" and "can I help?" Despite this emphasis on the power of AI, Facebook isn't providing many specifics on how the tool actually judges who is at risk. It's the human moderators who will do the crucial work of assessing each post the AI flags and responding. Although this human element shouldn't be overlooked, research suggests AI could be a useful tool in identifying mental health problems.
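The article describes a two-stage pipeline: an automated pass flags posts (for example, those attracting comments like "are you ok?"), and human moderators then assess each flagged post. The sketch below is a hypothetical, simplified illustration of that flag-then-review flow using plain keyword matching; Facebook has not disclosed how its actual classifier works, and the phrase list and function names here are made up for the example.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical concern phrases; the real system's signals are not public.
CONCERN_PHRASES = ["are you ok", "can i help"]

@dataclass
class Post:
    post_id: int
    text: str
    comments: List[str] = field(default_factory=list)

def flag_for_review(post: Post) -> bool:
    """Stage 1 (automated): flag a post if its comments contain concern phrases."""
    combined = " ".join(post.comments).lower()
    return any(phrase in combined for phrase in CONCERN_PHRASES)

def review_queue(posts: List[Post]) -> List[Post]:
    """Stage 2 hand-off: collect flagged posts for human moderators to assess."""
    return [p for p in posts if flag_for_review(p)]

# Example usage
posts = [Post(1, "feeling low lately", ["are you ok?", "thinking of you"])]
print([p.post_id for p in review_queue(posts)])  # -> [1]
```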




Collected by: Roy Mark
