Published By: The New York Times, 2/23/2017
Summary
Jigsaw, a sibling company of Google under parent Alphabet, has developed new software called Perspective that flags potentially problematic user comments, for example so that news sites can narrow down which posts require manual review. A machine-learning algorithm was trained on thousands of comments rated for toxicity in order to build a model that can identify the problematic ones. The developers hope that this system will enable more open sharing of thoughts and ideas.
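The workflow the article describes, where a model scores each comment and only the high-scoring ones are routed to human moderators, can be sketched as follows. This is an illustrative sketch only: the `flag_for_review` function, the scores, and the 0.8 cutoff are assumptions for the example, not Perspective's actual API or output.

```python
# Hypothetical sketch: route only high-toxicity comments to manual review.
# Each comment is paired with a model-assigned toxicity score in [0, 1];
# the scores and the 0.8 threshold below are illustrative assumptions.

def flag_for_review(scored_comments, threshold=0.8):
    """Return comments whose toxicity score meets or exceeds the threshold."""
    return [text for text, score in scored_comments if score >= threshold]

comments = [
    ("Thanks for the thoughtful article!", 0.02),
    ("You are an idiot and should be banned.", 0.95),
    ("I disagree with the premise here.", 0.10),
]

print(flag_for_review(comments))  # only the high-scoring comment is flagged
```

Lowering the threshold catches more genuinely toxic comments but produces more false positives, a trade-off several of the discussion questions below explore.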
Flesch-Kincaid Grade Level of Article: 13.4
Extended Discussion Questions
- If a forum site offered to let you filter out comments above a chosen level of toxicity, would you use that option? Why or why not?
- What if the comments were in response to something you yourself had posted?
- Besides promoting healthier discussion between users on websites, are there other benefits that could arise from using software that flags comments above a certain toxicity level?
- Are there any potential disadvantages that could arise from using such a system?
- For the site? For an individual user? For the user community as a whole?
- Dr. Cohen is quoted as stating that the software could give false positives because it is so new, but that it will likely improve with more data.
- Besides the size of the dataset, are there other factors that could cause the system to produce false positives?
- What could be the consequences of producing false positives?
- Given the possibility of false positives, do you think sites should ever use this type of system to auto-remove comments? Why or why not?
- Do you think it should be different for different types of sites? Different topics?
- What if it never produced false positives (at least according to moderators)?
- What types of websites do you expect Perspective will be used for? What types of websites should it be used for?
- For example, are there sites that might particularly want to prevent trolling? To prevent cyberbullying?
- How should different sites balance being fair and open to those who tend to make toxic comments against being fair and open to those who might feel shut out of a discussion because of toxic comments?
- Why are online news sites in particular concerned about getting everyone “in the conversation” in the first place?
- How is this different from how people consume TV news or newspapers?
Relating This Story to the CSP Curriculum Framework
Global Impact Learning Objectives:
- LO 7.1.1 Explain how computing innovations affect communication, interaction, and cognition.
- LO 7.3.1 Analyze the beneficial and harmful effects of computing.
- LO 7.4.1 Explain the connections between computing and real-world contexts, including economic, social, and cultural contexts.
Global Impact Essential Knowledge:
- EK 7.1.1C Social media continues to evolve and fosters new ways to communicate.
- EK 7.1.1N The Internet and the Web have changed many areas, including e-commerce, health care, access to information and entertainment, and online learning.
- EK 7.3.1E Commercial and governmental censorship of digital information raise legal and ethical concerns.
Other CSP Big Ideas:
- Idea 4 Algorithms
Banner Image: “Network Visualization – Violet – Offset Crop”, derivative work by ICSI. New license: CC BY-SA 4.0. Based on “Social Network Analysis Visualization” by Martin Grandjean. Original license: CC BY-SA 3.0.
Tagged: 4 Algorithms, 7.1.1 Interaction and cognition, 7.1.1C Social media, 7.1.1N Breadth of change, 7.3.1 Benefits and harm, 7.3.1E Censorship, 7.4.1 Real-world contexts