The Information Literacy of Fake News


Pre- and post-election last fall, there were many stories in all types of media about "fake news." An article in The Chronicle asks "How Can Students Be Taught to Detect Fake News and Dubious Claims?" but I would say that non-students need even more education in this area. Of course, the real question is whether or not this is a teachable skill.

If you had asked me last January to define "fake news," I would have said it was a kind of satire or parody of mainstream journalism. The Onion online or Saturday Night Live's news segment would fit that definition. Satire always has a bit of truth in it or it doesn't really work.

The Daily Show, Last Week Tonight with John Oliver, and other shows and sites have blurred the line. They use real news and sometimes parody it, but sometimes they are closer to investigative journalism. They can edit together clips of a person's inconsistencies in views over the years and create a montage that shows someone who either has a terrible memory or is a liar. It may frighten some to hear it, but many young people and adults list shows like these as their main source of news.

The fake news that is really the focus of attention now comes from sources (almost exclusively online) that produce wholly fictionalized news stories. Those non-journalistic entities have a very powerful delivery system via social media like Facebook and Twitter.

A Stanford University report published last year concluded that many students could not detect fake or misleading information online. The researchers gave students from middle school through college tasks to see how well they could tell a native advertisement from a news article, identify a partisan website as biased, or separate a verified social-media account from an unauthenticated one.

A larger conclusion I see here is that faculty often assume that young people are fluent in or savvy about social media, in the same way that it is assumed that digital natives know how to use smartphones, websites, photos, video and other digital technology. That is a bad assumption and expectation.

I remember teaching lessons on determining the veracity of research sources both before there was an Internet and after. That has been a part of literacy education since the time when books became more common. I'm sure it was a teachable moment pre-print when a parent told a child to ignore gossip and stories from certain people or sources.

The Stanford researchers said that we need to teach "civic online reasoning," something that is needed well beyond academic settings.

In whose purview is this teaching? English teachers? Librarians? I would say it would only be effective if, like writing in the disciplines, it is taught by all teachers with a concentration on how it occurs in their field.

The science instructor needs to teach how to determine when science is not science. An easy task? No. Look at teaching the truth of climate science or evolution. It is controversial even if the science seems clear.

Napoleon Bonaparte is credited with saying that "History is a set of lies agreed upon." If that is true, how do we teach the truth about history past and the history that is unfolding before our eyes?

But we can't just say it's impossible to teach or assume someone else will take care of it. Information literacy is still a critical, difficult and overlooked set of skills to teach.


200 Learning Tools

Jane Hart created the Centre for Learning and Performance Technologies (C4LPT) in 2000. In 2007, she compiled her first Top 100 Tools for Learning list. This year the list is at an exhaustive and exhausting 200 tools. She takes votes from learning professionals worldwide (Jane is based in the UK).

Jane was surprised that Twitter dropped from #1. As someone who bought Twitter stock at a low point in the hope of selling it at a higher price after the company was purchased, I am not surprised.

I like that Jane has also broken down the big list into subsets of tools for Personal and Professional Learning, Workplace Learning, and Education.

Even if you are a big user of online tools for learning, there are probably some new tools on the 2016 list or her "Movers and Shakers" list that you have never even heard mentioned.

The top vote-getters should be familiar to all educators, and I would expect that at least a few of these tools are in any teacher's toolbox by now. Jane has more information on each tool on her site.

Here are the Top 20:

1 - YouTube
2 - Google Search
3 - Twitter
4 - PowerPoint
5 - Google Docs/Drive
6 - Facebook
7 - Skype
8 - LinkedIn
9 - WordPress
10 - Dropbox
11 - Wikipedia
12 - Yammer
13 - WhatsApp
14 - Prezi
15 - Kahoot
16 - Word
17 - Evernote
18 - Slideshare
19 - OneNote
20 - Slack



Full list of 200 at http://c4lpt.co.uk/top100tools/


Social Media Ethics and Law

I'm working on a presentation titled "Social Media Ethics and Law" to be given at the NJEDge.Net Annual Conference (Princeton, NJ) later this month. That is also the title of a course that I have in development.

Social media is redefining the relationships between organizations and their audiences, and it introduces new ethical, privacy and legal issues. The audience for my presentation is schools, primarily higher education, but this topic is unfortunately not given a lot of attention in many organizations. Educating employees about responsible use within the organization, and as individual users, is necessary. We need a better understanding of the ethics, and also the law, as they apply in these new contexts.

To use a clichéd disclaimer, I am not a lawyer, and my focus will be more on ethics, but at some point ethics bumps up against law. Pre-existing media law about copyright and fair use was not written with social media in mind, so changes and interpretations are necessary.

Technological advances blur the lines of what is or is not allowed to be published and shared, and they raise issues of accuracy, privacy and trust. An obvious example is the reuse of images found online. Many people feel that Millennials and Generation Z in particular have grown up with a copy/paste, download-it-for-free ethos that can easily lead to legal violations online as students and later as employees.





conference.njedge.net/2016/




Deep Text

What is "deep text"? It may have other meanings in the future, but right now Deep Text is an AI engine that Facebook is building. The goal of Deep Text is big: to understand the meaning and sentiment behind all of the text posted by users to Facebook. The engine is also intended to help identify content that people may be interested in and to weed out spam.

The genesis of Deep Text goes back to an AI paper published last year, "Text Understanding from Scratch," by Xiang Zhang and Yann LeCun.

Facebook pays attention to anything you type in the network, not just "posts" but also comments on other people's posts. Facebook researchers say that 70% of us regularly type and then decide not to post. They are interested in this self-censorship, and men are more likely than women to self-censor their social network posts. Facebook tracks what you type, even if you never post it.

Why does Facebook care? If they know what you are typing about, they can show it to other people who care about that topic and, of course, better target ads to you and others.

This is not easy if you want to get deeper into the text. If you type the word "Uber" what does that mean? Do you need a ride? Are you complaining or complimenting the Uber service? What can Facebook know if you type "They are the Uber of food trucks"? 

This is a deep use of text analysis. With 400,000 new stories and 125,000 comments on public posts being shared every minute on Facebook, the company needs to analyze several thousand items per second across 20 languages. A human might be able to do a few per minute, but this volume is obviously far beyond human capability. The intelligence needs to be artificial.
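
As a quick back-of-the-envelope check (my arithmetic, not Facebook's), the quoted per-minute volume does work out to several thousand items per second:

# Rough throughput from the figures quoted above.
stories_per_minute = 400_000
comments_per_minute = 125_000

items_per_second = (stories_per_minute + comments_per_minute) / 60
print(f"{items_per_second:,.0f} items per second")  # ~8,750, i.e. "several thousand per second"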

A piece on slate.com talks about how Facebook Messenger might use Deep Text for chat bots that talk more like a human, interpreting text rather than giving programmed replies to anticipated queries. Saying "I need a new phone" is very different from "I just bought a new phone." The former opens the opportunity to suggest a purchase. The latter might mean you are open to writing a review.
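
To make that distinction concrete, here is a deliberately crude, hypothetical sketch in Python of the kind of intent guessing involved. The patterns, labels and function names are my own inventions for illustration; a real system like Deep Text learns these distinctions from data rather than from hand-written rules.

import re

# Hypothetical, rule-based intent guesser for the phone example above.
INTENT_RULES = [
    (re.compile(r"\b(need|want|looking for) a new phone\b", re.I), "purchase_intent"),
    (re.compile(r"\b(just )?(bought|got) a new phone\b", re.I), "post_purchase"),
]

def guess_intent(text: str) -> str:
    for pattern, intent in INTENT_RULES:
        if pattern.search(text):
            return intent
    return "unknown"

print(guess_intent("I need a new phone"))         # purchase_intent -> suggest a purchase
print(guess_intent("I just bought a new phone"))  # post_purchase  -> maybe ask for a review

The point of Deep Text is to make this kind of inference from messy, real language at enormous scale, which hand-written rules like these cannot do.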

Along with filtering spam, it could also filter abusive comments and generally improve search within Facebook.

Parsing text with software has been going on for decades. Machine scoring of writing samples has been an ongoing and controversial practice since it began. Ellis Batten Page discussed the possibility of scoring essays by computer back in 1966, and in 1968 he published his successful work with a program called Project Essay Grade™ (PEG™). But the technology of that time didn't allow computerized essay scoring to be cost-effective. Today, companies like Educational Testing Service offer these services, but what Facebook wants to do is quite different and in some ways more sophisticated.
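
Page's early approach relied on measurable surface features of the writing, such as essay length and average word length, that correlate with the scores human raters assign. Here is a minimal sketch of that general idea in Python, assuming scikit-learn is available; the features, the tiny made-up training set and the scores are all invented for illustration, and this is not PEG itself.

from sklearn.linear_model import LinearRegression

# Crude surface features standing in for writing quality:
# essay length, average word length, and sentence count.
def features(essay: str):
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return [
        len(words),                                       # essay length
        sum(len(w) for w in words) / max(len(words), 1),  # average word length
        len(sentences),                                   # sentence count
    ]

# Tiny made-up training set: (essay text, human-assigned score).
training = [
    ("Short and vague.", 2.0),
    ("A longer essay with more developed sentences and varied vocabulary throughout.", 4.0),
    ("An extensive, carefully argued essay demonstrating considerable elaboration and precision.", 5.0),
]

X = [features(text) for text, _ in training]
y = [score for _, score in training]

model = LinearRegression().fit(X, y)
print(model.predict([features("A new essay of moderate length and complexity to be scored.")]))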

Deep Text leverages several deep neural network architectures. These are things that are beyond my scope of knowledge - convolutional and recurrent neural nets, word-level and character-level based learning - but I will be using it, and so I want to understand what is going on behind the page.
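
Facebook has not published the implementation details behind Deep Text, so purely as an illustration of what "character-level" learning means, here is a toy character-level convolutional classifier in the general spirit of the Zhang and LeCun paper. It assumes PyTorch, and every name, size and parameter in it is my own choice, not Facebook's.

import torch
import torch.nn as nn

# Toy character-level convolutional text classifier -- not Facebook's Deep Text.
class CharCNN(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=16, num_classes=2, max_len=140):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)     # one vector per character
        self.conv = nn.Conv1d(embed_dim, 32, kernel_size=5)  # slide over character windows
        self.pool = nn.AdaptiveMaxPool1d(1)                  # keep the strongest feature per filter
        self.fc = nn.Linear(32, num_classes)                 # map to class scores

    def forward(self, char_ids):                   # char_ids: (batch, max_len) of character codes
        x = self.embed(char_ids).transpose(1, 2)   # (batch, embed_dim, max_len)
        x = torch.relu(self.conv(x))
        x = self.pool(x).squeeze(-1)               # (batch, 32)
        return self.fc(x)

def encode(text, max_len=140):
    ids = [min(ord(c), 127) for c in text[:max_len]]
    ids += [0] * (max_len - len(ids))              # pad with zeros
    return torch.tensor([ids])

model = CharCNN()
logits = model(encode("I need a new phone"))
print(logits.shape)  # torch.Size([1, 2]) -- untrained scores for two classes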

If you read about Microsoft's Tay chatbot, you know that it was artificial intelligence that was supposed to learn how to talk like a teenager. Users gamed the software and "taught" it to talk like a Nazi, which became big news on social media. The bot was created by Microsoft's Technology and Research and Bing divisions and named "Tay" as an acronym for "thinking about you."

Facebook is testing Deep Text in limited and specific scenarios.