In misinformation, however, the short-term and long-term people are perpetually at war. It's as if you went to the structural racism conference and presented on revised mortgage policy and someone asked you how that freed children from cages on the border. And when you said it didn't, they threw up their hands and said, "See?"
Here’s an example: control-f. In my classes, I teach our students to use control-f to find stuff on web pages. And I beg other teachers to teach control-f as well. Some folks look at that and say, that’s ridiculous. Mike, you’re not going to de-radicalize Nazis by teaching them control-f. It’s not going to address cognitive bias. It doesn’t give them deep critical thinking powers, or undo the resentment that fuels disinformation’s spread.
But consider the tactics used by propagandists, conspiracy theorists, bad actors, and the garden variety misinformed. Here’s a guy yesterday implying that the current coronavirus outbreak is potentially a bioweapon, developed with the help of Chinese spies (That’s how I read the implication at least).
Now is that true? It’s linked to the CBC, after all. That’s a reputable outlet.
The first thing you have to do to verify it is click the link. And right there, most students don't know they should do that. They really don't. That's actually where most students fail: they never click through. But the second thing you have to do is see whether the article actually supports that summary.
How do you do that? Well, you could advise people to fully read the article, in which case zero people are going to do that because it takes too long to do for every tweet or email or post. And if it takes too long, the most careless people in the network will tweet unverified claims (because they are comfortable with not verifying) and the most careful people will tweet nothing (because they don’t have time to verify to their level of certainty). And if you multiply that out over a few hundred million nodes you get the web as we have it today, victim of the Yeats Effect (“The best lack all conviction, while the worst / Are full of passionate intensity”). The reckless post left and right and the careful barely post at all.
One reason the best lack conviction, though, is time. They don’t have the time to get to the level of conviction they need, and it’s a knotty problem, because that level of care is precisely what makes their participation in the network beneficial. (In fact, when I ask people who have unintentionally spread misinformation why they did so, the most common answer I hear is that they were either pressed for time, or had a scarcity of attention to give to that moment).
But what if — and hear me out here — what if there was a way for people to quickly check whether linked articles actually supported the points they claimed to? Actually quoted things correctly? Actually provided the context of the original from which they quoted?
And what if, by some miracle, that function was shipped with every laptop and tablet, and available in different versions for mobile devices?
This super-feature actually exists already, and it’s called control-f. Roll the animated GIF!
In the GIF above we show a person checking whether key terms in the tweet about the virus researchers are found in the article. Here we check “spy”, but we can quickly follow up with other terms: coronavirus, threat, steal, send.
The idea here is not that the contextualization is wrong whenever those specific words are absent. Rather, instead of reading every cited article in full to determine whether it has been correctly contextualized, a person can quickly identify cases that have a high probability of being miscontextualized and are therefore worth the effort to check. And for every case like this one, where the summary is reckless, there are maybe ten other cases where the first search term helps the user verify the item is good to share. Again, in less than a few seconds.
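(If you want the logic of the check spelled out, here's a minimal sketch in Python. It's my own illustration, not a tool from this post: it fetches a linked article and reports which key terms actually appear in it. The URL and term list are placeholders, and it assumes the requests and beautifulsoup4 packages are installed.)

```python
# A scripted version of the control-f check: which key terms from a
# post's summary actually appear in the article it links to?
import requests
from bs4 import BeautifulSoup

def terms_in_article(url, terms):
    """Return {term: True/False} for each term's presence in the page text."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    text = BeautifulSoup(resp.text, "html.parser").get_text().lower()
    return {term: term.lower() in text for term in terms}

if __name__ == "__main__":
    # Placeholder URL and terms, echoing the example above.
    hits = terms_in_article(
        "https://example.com/linked-article",
        ["spy", "coronavirus", "threat", "steal", "send"],
    )
    for term, found in hits.items():
        print(f"{term}: {'found' if found else 'NOT found'}")
    # Missing terms don't prove the summary is wrong -- they flag it
    # as worth the effort of a closer read.
```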
Except, if you were going to make that argument, you’d have to show that everybody actually does know about control-f. It wouldn’t be the end of the argument — I could reply that knowing and having a habit are different — but that’s where we’d start.
So think for a minute. How many people know that you can use control-f and other functions to search a page? What percentage of internet users? How close to 100% is it? What do we have to work with —
Eh, I can’t drag out the suspense any longer. This is an older finding, internal to Google: only 10% of internet users know how to use control-f.
I have looked for more recent studies and I can’t find them. But I know in my classes many-to-most students have never heard of control-f, and another portion is aware it can be used in things like Microsoft Word, but unaware it’s a cross-application feature available on the web. When I look over student shoulders as they execute web search tasks, I repeatedly find them reading every word of a document to answer a specific question about the document. In a class of 25 or so there’s maybe one student that uses control-f naturally coming into the class.
What’s the cognitive bias that explains why someone would think having a list of 200 cognitive biases bookmarked would make them any better at thinking?
(It literally says it’s “to help you remember” 200+ biases. Two hundred! LOL, critical thinking boosters are hilarious)
I should be clear — biases are a great way to look at certain issues *after* the fact, and it’s good to know that you’re biased. Our own methods look pretty deeply at certain types of bias and try to design methods that route around them, or use them to advantage.
But if you want to change your own behavior, memorizing long lists of biases isn’t going to help you. If anything it’s likely to just become another weapon in your motivated reasoning arsenal. You can literally read the list of biases to see why reading the list won’t work.
The alternate approach, à la Simon/Gigerenzer, is to see “biases” not as failings but as useful rules of thumb that are inapplicable in certain circumstances, and to push people toward rules of thumb that better suit the environment.
As an example, salience bias — paying more attention to things that are prominent or emotionally striking — is a pretty useful behavior in most circumstances, particularly in personal life or local events.
It falls apart in larger domains – city, state, country – partly because there are more emotional and striking events than you can count, which means you can be easily manipulated through selection, and partly because larger problems often are not tied to the most emotional events.
Does that mean we should throw away our emotional reaction as a guide altogether? Ignore things that are more prominent? Not use emotion as any indication of what to pay attention to?
Not at all. Instead we need to think carefully about how to make sure the emotion and our methods/environment work *together*.
Reading that list of biases may start with “I will not be fooled,” but it probably ends with some dude telling you family separation at the border isn’t a problem because “it’s really the salience effect at work”.
TL;DR: biases aren’t wholly bad, and the flip side of a bias is a useful heuristic. Instead of thinking about biases and eliminating them, think about applying the right heuristics to the right sorts of problems, and organizing your environment in such a way that the heuristics don’t get hacked.
One of the founding myths of internet culture, and particularly web culture, is the principle of stigmergy.
This will sound weird, but stigmergy is about ant behavior. Basically, ants do various things to try to accomplish objectives (e.g. get food to the nest), but rather than a command and control structure to coordinate, they use pheromones, or something like pheromones. (My new goal is to write shorter, quicker blog posts this year, and that means not spiraling into my usual obsession with precision. So let’s just say something like pheromones. Maybe actually pheromones. You get the point.)
So, for example, ants wander all over, and they are leaving maybe one scent, but they go and find the Pringle crumbs and as they come back with the food they leave another scent. A little scent trail. And then other ants looking for Pringles stumble over that scent trail and they follow it to the Pringle crumbs. And then all those ants leave a scent as they come back with their Pringle crumbs, and what happens over time is the most productive paths have the best and strongest smell.
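(For the programmers in the audience, the dynamic is easy to simulate. Here's a toy sketch in Python, mine and not from any stigmergy literature: ants choose between two paths in proportion to pheromone strength, successful trips deposit more pheromone, and all trails slowly evaporate.)

```python
# Toy stigmergy: two paths to the Pringle crumbs, one better than the
# other. Reinforcement plus evaporation lets the colony "find" the
# better path with no central coordination.
import random

def simulate(trips=2000, evaporation=0.99):
    success_prob = [0.9, 0.5]   # path 0 pays off more often
    pheromone = [1.0, 1.0]      # start with no preference
    for _ in range(trips):
        # Ants follow stronger-smelling trails more often.
        path = 0 if random.random() < pheromone[0] / sum(pheromone) else 1
        if random.random() < success_prob[path]:
            pheromone[path] += 1.0          # returning ant marks the trail
        pheromone = [p * evaporation for p in pheromone]  # scent fades
    return pheromone

if __name__ == "__main__":
    print(simulate())  # path 0's trail ends up much stronger
```

The point of the myth is that the good path emerges from purely local signals, which is exactly the assumption that breaks down once someone starts counterfeiting the pheromone.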
And like a lot of mythologies, there’s a lot of truth to it. When I say myth, I don’t mean it’s wrong. It’s a good way to think about a lot of things. I have built (and will continue to build) a lot of projects around these principles.
But it’s also a real hindrance when we talk about disinfo and bad actors. Because the general idea in the Stigmergic Myth is that uncoordinated individual action is capable of expressing a representative collective consciousness. And in that case all we have to do is set up a system of signals that truly capture that collective or consensus intent.
But none of the founding myths — ants and Pringles, Swedish college desire paths, or even Galton’s ox weighing — deal with opposing, foundational interests. And opposing interests change everything. There isn’t a collective will or consciousness to express.
Faced with this issue, Web 2.0 doubled down: the real problem, the thinking went, was that the signals were getting hacked. And that’s absolutely true. There was a lot of counterfeit pheromone about, and getting rid of it was crucial. Don’t discount that.
But the underlying reality was never addressed. In areas where cooperation and equality prevail, the Stigmergic Myth is useful. But in areas of conflict and inequality, it can be a real hindrance to understanding what is going on. It can be far less an expression of collective will or desire than other, less distributed approaches, and while fixing the signals and the system is crucial, it’s worth asking if the underlying myth is holding our understanding back.
I made a short video showing a New Year’s Eve Activity around SIFT, and getting serious for a minute with a New Year’s Day wish.
I don’t know how many people know this about me, but I actually study misinfo/disinfo pretty deeply, outside of my short videos on how to do quick checks. If anything, I probably spend too much time keeping up with the latest social science, cognitive theory, network analysis, etc. on the issue.
But scholarship and civic action are different. Action to me is like Weber’s politics, the slow drilling of hard boards, taking passion and perspective. You figure out where you can make a meaningful difference. You find where the cold hard reality of where we are intersects with a public’s desire to make things better. And then you drill.
It’s been three long exhausting years since I put out Web Literacy for Student Fact-Checkers, and over a decade since I got into civic digital literacies. I’m still learning, still adapting. And still drilling.
Happy New Year, everyone. And thanks to everyone that has helped me on this weird, weird journey.
I’ve noted a new need in my open education work that isn’t supported by many tools and isn’t addressed in any license. I’m going to call it “Chatham House Sharing”.
For those that don’t know, the Chatham House Rules are a set of rules traditionally used in association with reporters covering an event, but more recently used to govern the tweetability of different gatherings. There are probably more rules than two, but the most notable are these:
You can report out anything said, but
You can’t identify who said it
The reason for the rules is that people need to speak freely as they hash things out at a conference, and to do that they sometimes have to speak loosely, in ways that don’t translate outside the conference. Politicians or practitioners may want to express concerns without triggering followup questions or teapot tempests over out-of-context utterances. Academics might like to share some preliminary data or explore nascent thoughts without confronting the level of precision a formal publication or public comment might require. And people who work for various companies may want to comment on various things without the inevitable tempest that accompanies “someone from Microsoft said X” or “someone from Harvard said Y”.
In open education there is a need for a form of sharing that works like this, especially in collaborative projects, though for slightly different reasons. If we imagine people working together on an evolving open resource on, say, the evolution of dark money in politics, it stands to reason that many authors might not want it shared under their name. Why?
Most of the time it’s a work in progress; it’s not ready yet.
It may have undergone revisions from others that they do not want their name attached to.
They may never want their name attached to it, because they cannot give it the level of precision their other work in the field demands.
They may be part of a group that is explicitly targeted for their gender, race, or sexual orientation online and fear they will become a lightning rod for bad actors.
In cases where there is a revision history, they might be OK with attaching a name to the final product, but not like the fact that the history logs their activity for public consumption. (One can imagine other people to whom they owe projects complaining about the amount of time spent on the resource. Even worse, as data gets combined and recombined with other tracking data, it’s impossible to predict the ways in which people will use anything time-stamped — but there are almost surely malicious uses to come.)
Chatham House Sharing would be sharing that follows these rules:
Within the smaller group of collaborators, contributions may or may not be tracked by name, and
Anyone may share any document publicly, or remix/revise it for their own use, but
Publicly, the work is not attributed to any individual contributor.
If they want, of course, sharers can use their own authority to say, hey, this document I found is pretty good. If they want to make some edits and slap their name on it, noting that portions of the document were developed collaboratively by unnamed folks, they could do that as well.
Back in January I started working on a web-based application to help teachers and others make fact-checking infographics, as part of a Misinformation Solutions Forum prize from RTI International and the Rita Allen Foundation. I got it to work, but as we tried to scale it out we found it had:
Security concerns (too much potential for hacking it)
Scalability concerns (too resource intensive on the server)
Flexibility concerns (too rigid to accommodate a range of tasks, and not enough flexibility on tone for different audiences)
What I’ve ended up with, however, does more than simply build a set of fact-checking GIFs. It’s a flexible tool for presenting any web process, or even non-web issues. It’s going to make it easy for people to educate others on how to check things, but potentially it’s a way to make our private work and processes visible in many other ways as well.
Here’s an example of output, which also shows the implementation of blockquotes and linking.
In any case, if you have access to a Windows laptop or desktop, download, unzip wherever you want, read the license (it’s free software with the usual caveats), and fire it up. If you make something cool let me know.
Oh, and Mac users — I’m not able to build a version for Mac (I’m surprised I was able to build this one, tbh) but given someone with my hacky abilities can make this for Windows, I’m sure if there is demand for this someone of talent can make this for Mac in less than a week.
Also, I’m thinking through the legal implications of hosting the produced walkthroughs on a central site — or whether it’s better to keep them distributed (everyone hosts their own, but shares links). More on that later.
One of the problems with microtargeted ads, and a way I’ve been thinking about them recently, is they resemble the tranched subprime mortgages that brought about the financial crash.
Others have talked about this in the context of the digital ad market as a whole. The allure of digital ads was that you would finally be able to assess impact. The reality is complexity, fraud, and snake oil hand-waving have made the impact of advertising more opaque than ever.
In the political realm, it’s even worse. We talk about whether the ads in there are on the whole beneficial or not beneficial, lies or truth, but the debate itself presumes that even an entity like Facebook has any real idea what’s in there. And they don’t. They can’t. And so as microtargeting proliferates we’re left with the pre-2008 cognitive dissonance we had around subprime: surely someone must know what’s going on under the hood! We wouldn’t really entrust vital social functions to something this opaque, this prone to fraud, this reliant on faith in untested equations, right?
There’s the question of what public policy should be for Facebook, and there can be disagreement on that. But table stakes for that discussion is that public policy be possible, and it’s just not clear to me that it can be the way the system is currently designed.
Putting a couple notes from Twitter here. One of the ideas of SIFT as a methodology (and of SHEG’s “lateral reading” as well) is that before reading, a person must construct a context for what they read. On the web that’s particularly important, because the rumor dynamics of the web tend to level and sharpen material as it travels from point A to point Q, and because bad actors actively engage in false framing of claims, quotes, and media.
But it’s also a broader issue when considering source-checking. I’ve had people share RT articles with me that are more or less “true”, for example. When I push back on people that they shouldn’t be sharing RT articles, since RT is widely considered to be a propaganda arm of the Kremlin, the response is often “Well, do you see anything false in the article? What’s the lie?”
This isn’t a good approach to your information diet, for a couple of reasons. The first is that a news-reading strategy where one has to check every fact of a source because the source itself cannot be trusted is neither efficient nor effective. Disinformation is not usually distributed as an entire page of lies. Seth Rich, for example, did exist, was killed, and did work at the DNC. His murder does remain unsolved. Even when people fabricate issues, they usually place the lies in a bed of truth.
But the other reason to not share articles from shady sources is the frame can be false, even if the facts are correct. Take this coverage on the Seth Rich murder from RT for example, in a story about Assange offering a reward for his killers. The implication of the story is it is possible that Seth Rich was killed for leaking the DNC emails.
Rich worked as voter expansion data director at the DNC before he was shot twice on his way home on July 10. He died later in hospital.
“If it was a robbery — it failed because he still has his watch, he still has his money — he still has his credit cards, still had his phone so it was a wasted effort except we lost a life,” his father Joel Rich told local TV station KMTV.
See the frame? Responsible reporting would add context:
The “data director” position sounds email-ish, but had no access to email systems.
The Washington, D.C. police said, regarding the robbery theory, that in robberies where someone is killed it’s extremely common to find that the credit cards and phone are not taken: people generally get shot in robberies when something goes wrong, and the suspects are anxious to flee the scene before the police come investigating the gunshot.
There’s not a lie in the article (that I can see) but the way the article is framed is deceptive. And there’s no way to know that as an average reader, because you don’t know what you don’t know. Without expertise you can’t see what is missing or deceptively added. So zoom out, and if the source is dodgy, skip it. Find something else. Share something else. You’re not as smart as you think you are, and reading stories designed to warp your worldview will, over time, warp your worldview.
I signed up for the CBC Chatbot that teaches you about misinformation. The interface was surprisingly nice — it felt less overwhelming than the typical course stuff I work with. So kudos on that.
On the down side it’s likely to make people worse, not better, at spotting dodgy Facebook pages.
Why? Because — like a lot of reporters, frankly — they’ve taken “fake news” [sigh] to be this narrow 2016 frame of “Pretending to be a known media company when you’re not”. And that results in this advice:
What does “legitimate” mean here? I assume to the people that wrote the course it means that the account is not being spoofed, that it really is the organization that it purports to be. This is, in turn, based on the 2016 disinformation pattern where there were some very popular sites and pages pretending to be organizations that they were not (e.g. the famous fake local newspapers).
There are two problems with this. First — this method of disinformation is relatively minor nowadays. I still do a prompt or two on it in my classes, but find that there are almost no current examples of it reaching viral status. I was talking to Kristy Roschke at News Co/Lab last week, and she was saying the same thing. As a teacher and curriculum designer, at first you’re like, “Wow, it’s getting hard to find new examples of this to put in the course!” And then, at a certain point, you ask — if it’s so hard to find viral examples, should it still be in the course at all?
The second problem is more serious. Because in solving a problem that increasingly does not exist, the mini-course creators birth a new problem — the belief that the checkmark is a sign of trustworthiness. If there is a checkmark, they say, you know the page is “legitimate”. I don’t know if the people who wrote this were educators or not, but a foundational principle of educational theory is that it doesn’t matter what you say, it’s what the student hears. And what a student hears here, almost certainly, is that blue checkmarks are trustworthy.
That’s a problem, because the real vectors of disinformation at this point are often blue-checkmarked pages. Here, for example, is the list of central hubs and central sources of conspiracy theorizing about the White Helmets in Syria, from Starbird et al.’s research on the subject (I’ve edited the list to only include central sources and hubs for orgs where there is a Facebook page).
It’s a little difficult to explain this clearly and precisely, but let’s just say the above domains are part of a network of sites through which certain types of disinformation are propagated. The people running these sites have various levels of intent around that: obviously, RT is considered by most experts in the area to be a propaganda arm of the Russian government (one that in particular supports Putin’s agenda and interests). The same goes for other state-backed outlets in the network. The rest may be involved for more idealistic reasons, but what the work of Starbird et al. shows is that in practice they uncritically reprint the stories introduced by the Russian entities with only minor alterations, and as a result become major vectors of disinformation.
In 2020, these “echo-systems” are far more likely to be the source of disinformation than a spoofed CBC page (this was likely the case in 2016 too, but at least spoofing was in the running). But what do we find when we apply the blue checkmark test to them? Half of them are blue-checkmarked:
From the online media literacy standpoint, however, a media-literate person would not read these sites without understanding the ways in which they are very, very different from the CBC, Reuters, or The Wall Street Journal. Focusing on the blue checkmark first has the potential to mislead a new generation of people about that, the way that focusing on dot-orgs misled the last.
It’s not just state actors that win in a “trust the blue checkmark” world, incidentally. Look through the medical misinformation space and you’ll find plenty of blue checkmarks. And we haven’t even gotten into the other side of the problem — the number of pages that are from trustworthy and important sources but don’t have a checkmark, and hence will be discarded out of hand, not just as “unverified organizations” but as illegitimate.
Education is Hard
It’s really hard to get this stuff right. To do it in education we run and re-run lessons with students, then assess in ways that allow us to see if students are misconstruing lessons in unanticipated ways.
I learned in an early iteration of our materials, for example, that a way I was talking about organizations caused a very small percentage of students (less than 2 percent) to walk away with the idea that organizations with bigger budgets were better than those with smaller budgets. That’s not what we said, of course, but it’s what a few students heard. (We were trying to point out that something claiming to be a large professional organization — for example, the American Psychological Association — should normally have a large budget, whereas a professional organization that claimed to speak for an industry but had a budget of $70,000 a year probably wasn’t what it claimed to be.) So we modified how we presented that, and are hammering out a concept to address it (an idea we borrowed from the Calling Bullshit course’s discussion of how to think about expansive academic claims and the reputation of various publication venues).
Early on, we also realized that when we asked if something was a trustworthy source, the way we phrased the question didn’t account for the news-genre specificity of trust. (E.g., you might trust your local TV station to report on a shooting, but you probably should not trust it to give you diet advice, as most local stations have no real expertise in that.) We changed the way we phrased certain questions to “Is this a trustworthy source for this sort of story or claim?” Then we meshed some discussions with the ACRL framework for information literacy, particularly frame number one: authority is constructed and contextual.
And we did this sort of work repeatedly, both with the students and faculty in the 50+ courses involved in our project and in talking to the people outside our project using the materials. We get there because we are constantly doing formal and informal assessment against authentic prompts, and looking for points of student confusion. We get there because we assess, and can say at the end that we improved student performance on the sorts of tasks they are actually confronted with in the real world.
It’s hard, and it’s a never ending process. But as app after app and mini-course after mini-course rolls out on this stuff, it’s worth asking if the people producing them are approaching them with the same eye towards the true problems we face and the true sources of student confusion. If they aren’t, it is quite possible they are doing more harm than good.
9 Comments on “Doorbell Video” and Traditional News
Anyway, 9 comments for the week on Ring video and news coverage, in no particular order.
One: News Is Not Prepared for Doorbell Video
A family’s doorbell camera caught the moment their house was destroyed by a tornado. There’s a way in which this is educational, and could save lives — showing people how quickly a weather event like this can sneak up on you. But it’s also a reminder that news is not prepared for a doorbell-video world. I know that there are codes in place for the use of citizen video and surveillance cam video. But scale makes a difference. And the scale of this is going to be huge.
Two: Push vs. Pull Video
In network design we often talk about push and pull architectures. In a pull architecture, you go out and request something. A push architecture finds things relevant to you and pushes them to you without a request.
Some Ring content is pull: something happens and we review the tape, or send it to reporters. Some is push: the event itself is important because it was captured on camera. The doorbell video creates the event.
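(A quick, purely illustrative sketch of the distinction in Python, with made-up names: the hub pushes new clips to subscribers the moment they are published, while pull requires the viewer to come asking.)

```python
# Pull vs. push, in miniature. Pull: the consumer requests matching
# items. Push: the producer delivers items to subscribers as they occur.

class ClipHub:
    def __init__(self):
        self.clips = []
        self.subscribers = []   # callables notified on each publish

    def publish(self, clip):
        self.clips.append(clip)
        for notify in self.subscribers:
            notify(clip)        # push: the event finds the viewer

    def search(self, query):
        return [c for c in self.clips if query in c]  # pull: viewer asks

hub = ClipHub()
hub.subscribers.append(lambda clip: print("pushed:", clip))
hub.publish("doorbell clip: tornado hits house")  # delivered unasked
print(hub.search("tornado"))                      # retrieved on request
```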
Four: It’s the Push Video that concerns me
This particular news event would be noteworthy no matter what. It’s a tornado hitting a house, and that’s news. My worry though is the number of things that become news stories simply because video captures them.
Five: Easy availability of content shapes coverage, coverage warps reality
Yeah, yeah, yeah, McLuhan etc. But this basic principle isn’t really in doubt. The cost of acquiring Ring video is going to be trivial compared to putting people on the ground. But what is this video best set up to capture? What does engaging content look like on this platform? Because that’s where news might be going.
Six: The genres of Ring video are being constructed as we speak and we have little idea of what they will be
Weather videos. Hassle daughter’s date videos. Hassle garbage picker videos. Package thief videos. Cryptic event videos. Suspicious person videos. And news organizations are looking for new angles on SHAREABLE RING CONTENT, different places they can slot it into their existing coverage. Lifestyle, Crime, Weather.
Seven: Again, it’s push that’s the shift
It’s not even just that it’s push in terms of spread, but even in terms of filming. No one is seeing something and deciding to pull out a camera. The decision comes after it’s captured.
It’s worth thinking about how the easy availability of Ring videos to newsrooms is going to shape coverage (especially local coverage, but also national). Is this where we want to go?
Eight: You don’t need an Amazon editor for a Ring News Dystopia
There is rightly a lot of focus on the sort of “communities” Amazon is looking to build around doorbell video. They could do a lot of harm. But news can be shaped dramatically by Ring video availability without the platforms becoming involved. Ring video is showing up on social media and news sites already. Some of that video is probably useful, but much of it creates a world that is even more paranoid and divorced from reality than your average local broadcast, and that’s saying something.
The vaguest thought, but I’m struck by the viscerality of these Ring crime videos. How they feel when you are watching vs. even traditional sensationalist coverage. I don’t think we’re psychologically prepared for this, individually or as a country.
Among other things, I run the Digital Polarization Initiative, a cross-institutional initiative to improve civic discourse by developing web literacy skills in college undergraduates. Have a class that wants to join? Contact me at michael.caulfield at wsu.edu.