One thing that struck me about this research, at least as it was described in the article, is that the researchers were collapsing two distinct dimensions of credibility (the perceived accuracy of the message and the perceived trustworthiness of the source) in a way that may have affected their results, or at least their interpretation.
For example, they pointed out that lower perceived credibility (by their measure) correlates with more retweeting, and also that lower perceived credibility correlates with more hedges. Since the rating scale asked about accuracy, people may judge the information in a hedged tweet as less likely to be accurate, because the tweeter has already introduced uncertainty; yet they may be more likely to retweet it, because they perceive the source itself as more credible, precisely because that source doesn't seem to be overstating its claims.
This is only one possible interpretation, and a rather optimistic one at that, but it illustrates an important point: if you're going to run experiments on human subjects, it's important to be careful about exactly what question you ask, and to interpret your results in light of how that question was worded.
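To make the interpretation concrete, here is a toy simulation of it. This is not the study's data or design; every number below (rating means, retweet probabilities, the 1–5 scale) is an invented assumption, chosen only to show that a hedge can simultaneously lower the average accuracy rating and raise the retweet rate:

```python
import random

random.seed(0)

def simulate(hedged, n=10_000):
    """Simulate n raters under the two-dimension interpretation:
    a hedge lowers perceived accuracy of the *content* but raises
    trust in the *source*, which is what drives retweets."""
    accuracy_ratings = []
    retweets = 0
    for _ in range(n):
        # Perceived accuracy of the content on a 1-5 scale: a hedge
        # introduces explicit uncertainty, so ratings skew lower.
        # (Means 3.0 vs. 3.8 are made-up illustration values.)
        rating = random.gauss(3.0 if hedged else 3.8, 0.7)
        accuracy_ratings.append(max(1.0, min(5.0, rating)))
        # Trust in the source: a hedging source seems less prone to
        # overstating claims, so retweeting is more likely.
        # (Probabilities 0.25 vs. 0.15 are also made up.)
        if random.random() < (0.25 if hedged else 0.15):
            retweets += 1
    return sum(accuracy_ratings) / n, retweets / n

for hedged in (False, True):
    acc, rt = simulate(hedged)
    print(f"hedged={hedged}: mean accuracy rating {acc:.2f}, "
          f"retweet rate {rt:.2%}")
```

Under these made-up parameters, the hedged condition shows a lower mean accuracy rating alongside a higher retweet rate, which is exactly the pattern that looks paradoxical if "credibility" is treated as a single dimension.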
Rather a tangent for a CS class, but it’s a point that computer scientists often miss, especially if they aren’t co-authoring with non-computer scientists. So, this is a plug for interdisciplinary collaboration! 🙂