US PRESIDENT TRUMP & ADOLF HITLER’s Alternative Information Universes; Can Media Literacy Defeat Disinformation And Misdirection—Lessons From 1939’s Germany

Fighting disinformation with media literacy—in 1939. Below, a 1936 Nazi propaganda poster: the stylized head of an eagle, beak open, emitting circles like broadcast radio waves.

ANYA SCHIFFRIN, CJR|AIWA!NO!|“THERE ARE THREE WAYS to deal with propaganda—first, to suppress it; second, to try to answer it by counterpropaganda; third, to analyze it,” the journalist turned educator Clyde R. Miller said in a public lecture at Town Hall in New York in 1939. At that time, faced with the global rise of fascist regimes that were beaming propaganda across the world, as well as US demagogues spouting rhetoric against the government and world Jewry, the rise of Stalinism, and the beginning of the Red-baiting that foreshadowed McCarthyism, scholars and journalists were struggling to understand how people could fall for lies and overblown rhetoric.

In response to this growing problem, Miller, who had been a reporter for the Cleveland Plain Dealer, founded the Institute for Propaganda Analysis in 1937. To get the institute up and running, Miller got a $10,000 grant from the department store magnate Edward A. Filene, who had by then begun making a name for himself as a liberal philanthropist. Based at Columbia University’s Teachers College, with a staff of seven people, the IPA devoted its efforts to analyzing propaganda and misinformation in the news, publishing newsletters, and educating schoolchildren to be more tolerant of racial, religious, and ethnic differences.

In order to understand what kinds of people, under what circumstances, would be susceptible to fascism, some sociologists studied personality traits. While it was clear that Germany’s defeat in World War I and subsequent economic conditions there, including widespread unemployment, had paved the way for the rise of Adolf Hitler, academics and journalists tried to parse just what made Nazi propaganda so effective at galvanizing public support for the regime. Theodor Adorno produced his famous “F-scale” (the “F” stands for fascist), which aimed to identify individuals more susceptible to the persuasions of authoritarianism. In recent years, the research of behavioral economist Karen Stenner has similarly examined the ways that innate personality traits, coupled with changing social forces, can push some segments of society toward intolerance.

For its part, the IPA, under Miller’s leadership, maintained that education was the American way of dealing with disinformation. “Suppression of propaganda is contrary to democratic principles, specifically contrary to the provisions of the United States Constitution,” Miller said in his 1939 speech. “Counterpropaganda is legitimate but often intensifies cleavages. Analysis of propaganda, on the other hand, cannot hurt propaganda for a cause that we consider ‘good.’” In other words, analyzing propaganda for a good cause would not undermine the cause itself—but analysis of “bad” propaganda would allow audiences to dismantle its effects.

IN THE 80 YEARS since Clyde Miller first set out to tackle this problem, the dissemination of propaganda in our society has become only more sophisticated and perhaps more ubiquitous. The recent rise of Facebook and Twitter, along with the capabilities they offer to micro-target particular audience demographics, and the ongoing controversies of the 2016 election—among them the prospect that ideologically motivated foreign actors used social media to disseminate false information—have brought a renewed flurry of interest in the kind of propaganda, misinformation, and disinformation that pervaded the country nearly a century ago. So it’s not surprising that we again see growing interest in developing techniques for identifying and unraveling them.

Foundations including the Hewlett Foundation, the Ford Foundation, the Open Society Foundations, and the John S. and James L. Knight Foundation have begun slow and expensive efforts to educate people to think critically, build trust in media outlets, analyze disinformation, and fight propaganda. Governments around the world, including in Germany, Malaysia, and the European Union, are starting to regulate social-media platforms, as evidenced by recent efforts by European governments to require Facebook and Twitter to crack down on illegal hate speech. But social media platforms have largely taken the stance that the onus is on the audience to figure out what is fake and what is not. Meanwhile, they tweak their algorithms, mount an array of technical fixes, and employ human moderators to block inflammatory content.

In light of all this, it’s worth looking back to one of the earliest attempts to tackle this age-old problem. What, if anything, can we learn from the efforts of the IPA in the 1930s? And why are we again falling prey to the kinds of disinformation campaigns that it aimed to inoculate society against?

IN HER GROUP LEADER’S GUIDE TO PROPAGANDA ANALYSIS, the IPA’s educational director, Violet Edwards, argued that industrialization and urbanization had made society bigger and more complicated, and so the “common man” had become “tragically confused” by an overload of secondhand information and the need to make decisions about subjects without having firsthand knowledge of them.

Instead of the town hall or the cracker barrel of yore, where citizens could meet to discuss the topics that affected them personally, Edwards wrote, they now had to rely on information from others about how society should be organized and which policies should be pursued far from home. Meanwhile, many others, including the American writer Walter Lippmann and the French philosopher Jacques Ellul, had begun arguing that journalism could become a means to sift through and distill the excessive information now available to the masses in part because of increasing newspaper circulation and radio broadcasts.

To properly understand the secondhand information on which citizens depend, Edwards wrote, readers should adopt a scientific mindset of fact-finding and logical reasoning, and think critically about what they encounter. The IPA developed techniques for analyzing information that would help audiences think rationally. It brought media literacy training into US schools in an attempt to inoculate young people against the contagion of propaganda by teaching them how to thoughtfully analyze what they read and heard. (Again, something that is being tried today.)

Miller categorized propaganda into seven types. These included “glittering generalities,” “name calling,” “testimonials,” and “transfer,” a means by which “the propagandist carries over the authority, sanction, and prestige of something we respect and revere to something he would have us accept.” Using another tactic, “Plain Folks,” Miller argued, propagandists “win our confidence by appearing to be people like ourselves,” while “bandwagon” was a “device to make us follow the crowd to accept the propagandists’ program en masse.”

Soon after the founding of the IPA, Miller was praised for bringing “the newspaper man’s passion for simplifying complicated subjects.” Among the IPA’s regular output were analyses of political speeches, with little icons—the emoji of the day—printed next to each phrase to explain which of these techniques the speaker was using. One such book analyzed the anti-Semitic radio broadcasts of the infamous Father Coughlin, a Catholic priest in Detroit who was estimated to draw 30 million listeners for his attacks on the international Jewish population and President Roosevelt, among other topics.

Miller also published something he called the “ABCs of Propaganda Analysis,” which exhorted readers to first concern themselves with propaganda, then figure out the agenda of the propagandist, view the propaganda with doubt and skepticism, evaluate one’s reactions to it, and finally seek out the facts. He hoped audiences would use the ABCs to become active readers who could carefully analyze their reactions to propaganda.

Additionally, Miller published a weekly Bulletin that described an important topic in the news, analyzed the propaganda techniques used by all sides, and included a detailed list of sources, recommendations for further reading, and discussion questions. This struck a nerve: some 10,000 people subscribed to the Bulletin, which cost $2.00 a year (about $32.00 in today’s terms), and 18,000 people bought the bound volume of back issues that was published at the end of each year.

Here’s a sample analysis of the fake news of the day, taken from the May 26, 1941, issue of the Bulletin:

Persistently since the influx of refugees from the war areas began, a story has bobbed up in numerous American cities about the alleged heartless—and actually unreal—discharging of regular employes [sic] by stores to make places for ‘foreigners’. The story usually is anti-Semitic; the store with which it is connected has Jewish owners, and Jews are said to get the jobs.

One large store in New York City which has been a victim of the story has spent considerable sums trying to trace the source and find some way of stopping it. The efforts have been fruitless. The story keeps reappearing, and mimeographed leaflets have even been circulated picturing the Jewish manager welcoming a long line of Jewish refugees while turning away another line of fine Nordic types.

As part of the IPA’s attempts to spread its message and techniques, the organization sought to put young students on guard against propaganda and to form them into sophisticated news consumers. The institute formed a relationship with Scholastic magazine, and in 1939 and 1940 produced a series that was distributed in schools, called “What Makes You Think So?: Expert Guidance to Help You Think Clearly and Detect Propaganda in Any Form.” By the late 1930s, 1 million schoolchildren were using the IPA’s methods to analyze propaganda, and the IPA corresponded with some 2,500 teachers. Anticipating contemporary critiques, such as the argument by danah boyd, founder of the technology-analysis organization Data & Society, that media-literacy programs can cause audiences to become dangerously mistrustful, the IPA maintained, in its teaching guides, that students needed to think critically as part of being engaged citizens:

The teacher who acts as a guide to maturity helps her pupils to think critically and to act intelligently on the everyday problems they are meeting…. [B]y its very nature [the] process will not build attitudes of cynicism and defeatism.

The IPA also helped design a curriculum aimed at promoting civic engagement and racial and religious tolerance, which was piloted in the Springfield, Massachusetts, school district, whose superintendent was sympathetic to the IPA’s mission. The “Springfield Plan” was influential and replicated in other districts, but it petered out in Springfield itself after a few years, partly due to criticism from the Catholic Church and a lack of local support as religious tensions rose after World War II. By the early ’50s, as McCarthyism was taking hold, there were murmurings that the plan contained “subversive” elements.

MILLER SPENT 10 YEARS at Columbia Teachers College as communications director and as an associate professor. In that time, the IPA used up $1 million of Filene’s money. It was World War II that caused the end of the IPA, in part because the US began producing its own propaganda to galvanize support for the fight against Hitler. Publication of the weekly Bulletin ceased in 1942, as the US entered the war. In its farewell issue of January 9, 1942, headlined “We Say Au Revoir,” the IPA explained that its board of directors had voted to suspend operations:

The publication of its dispassionate analysis of all kinds of propaganda, ‘good’ and ‘bad,’ is easily misunderstood during a war emergency, and more important, the analyses could be misused for undesirable purposes by persons opposing the government’s efforts.

This final Bulletin expressed satisfaction with the work achieved by the IPA, warned that wartime is usually accompanied by a rise in intolerance, and expressed the hope that IPA techniques for analyzing propaganda would be used in the future, which indeed they were.

Miller’s time at Teachers College came to a sad end, as he apparently fell victim to the very intolerance he had warned against. Along with some other faculty members, he was put on leave from the college in 1944, amid a financial crisis at the institution, and he never resumed work there. In 1948, Miller was officially let go. He was told his dismissal was the result of departmental restructuring—but William Randolph Hearst’s animosity toward Miller may have contributed. Hearst was known for attacking “Reds” in the universities and schools, and his paper, the World-Telegram, had criticized some of the educational activities Miller was involved with. Hearst had complained to Teachers College about Miller, saying he should “lay off.” In 1947, the House Un-American Activities Committee also attacked the IPA, calling it a “Communist front organization.” The late 1940s were a prelude to the McCarthy years of the 1950s, and HUAC had begun going after members of the American left. As far as we know, Miller was not a Communist or a “fellow traveler,” but his involvement with the IPA, the Methodist Church, and the Springfield Plan was enough for Hearst’s papers to smear him. There were many gray areas during the McCarthy-era blacklists: some professors were fired outright by their universities, while others were let go quietly. Miller may have been one of the latter.

Miller lost his Columbia housing and salary, and wrote repeatedly to Columbia’s president decrying the “violation of tenure and academic freedom.” He also told the New York Tribune, “I can understand that during the depression and now in this period of post-war hysteria, academic freedom is a pretty hard thing to preserve.” For a while, Miller worked at the League for Fair Play, a New York-based organization that helped publicize the Springfield Plan. But then the trail goes cold. Miller died in 1999 during a trip to Australia; he is buried there.

YET MILLER’S LEGACY LIVES ON. Although it was phased out in Springfield, the ideas of his education plan continued. According to Boston College education professor Lauri Johnson, “the Springfield Plan became the most well-publicized intercultural educational curriculum in the 1940s, talked about and emulated by school districts across the country and into Canada.”

And after the dust from WWII had settled, researchers, led by Yale’s Carl Hovland, once again took up the study of media effects and propaganda. Rather than focusing on specific propaganda techniques, Hovland took a broader view, attempting to understand how the media garnered credibility. Among other topics, Hovland and his group of scholars tried to understand whether the source of a message affects whether people trust it, whether the content of the message matters, or (as with Adorno’s F-scale) whether audience characteristics are the most important. Hovland believed that highly intelligent people may be more able to absorb new information but are also more skeptical. People with low self-esteem who “manifested social inadequacy…showed the greatest opinion change.” However, despite extensive studies of what causes media persuasion, Hovland’s findings were inconclusive. Scholars still grapple with the questions he tried to answer.

Additionally, many of the techniques the IPA pioneered are still used today in media-literacy training classes in US schools. Many take as their foundation the IPA’s pioneering ideas about how best to understand and combat propaganda, including a belief in the need for personal reflection and for understanding how personal experience shapes one’s ideas.

In fact, it’s striking how closely the IPA’s discussions about disinformation and possible remedies to it resemble the conversation we’re having on this topic today. For instance, researchers such as Claire Wardle, the executive director of First Draft, a think tank at Harvard’s Kennedy School that aims to fight disinformation, along with various others, have called out the techniques used by people spreading propaganda, and delineated taxonomies of the different kinds in use.

It would be nice to think that the IPA’s efforts worked and that a generation of children became inured to propaganda and disinformation. In fact, the rise of Nazism in Germany happened in part because of the effectiveness of German propaganda, and the US went down the road of McCarthyism and anti-Communist propaganda anyway. Moreover, it turns out that it’s devilishly hard to prove that media literacy is effective. New research by University of Pennsylvania Annenberg professor Kathleen Hall Jamieson suggests that the Russian disinformation campaign on social media may have worked because it reinforced the points made by Trump in his campaign. And people who believe in fake news keep believing it even when confronted with contradictory information.

The lesson of the IPA is not just that media literacy education is hard to do well, but that when societies become truly polarized, even teaching tolerance and critical thinking can be controversial. In the 1940s, Clyde Miller was attacked for his efforts. In today’s polarized world, it’s not hard to imagine a similar backlash.

Author’s Note: Thanks to Chloe Oldham for her research, Andrea Gurwitt for her editing, and Professors Andie Tucher, Richard John, and Michael Schudson for their comments. Thanks to Thai Jones and the librarians working with the archives at the New York Public Library, Nicholas M. Butler Papers and the Columbia University Archives Central Files.

Trump Information-Sphere: Debunking With Data; Insights From Fact-Checkers Around the World

CRIMSON TAZVINZWA, AIWA!NO!|EJC|Ever wondered if a politician’s claims really add up? Or perhaps you read a news story that seemed a little fishy? Armed with data, fact-checking organisations across the globe work tirelessly to help separate fact from fiction, and everything in between.

To find out more about debunking with data, the European Journalism Centre (EJC) gave subscribers to its data newsletter access to a global group of fact-checkers for an exclusive “Ask Me Anything.”

How about starting with the most recent example: US President Trump’s claim at the UN on September 26, 2018, that “In less than two years, my administration has accomplished more than almost any administration in the history of our country,” a line that drove listeners into murmurs and mocking laughter.

The world just laughed out loud at Donald Trump. That day, during the president’s address to the United Nations General Assembly, the audience laughed when Trump boasted that “my administration has accomplished more than almost any administration in the history of our country.”

As soon as the words left Trump’s mouth, a ripple of laughter traveled through the crowd and grew as Trump reacted to the guffaws.

An unnecessary and embarrassing spectacle at that, if you ask me. Of course, the humongous claim was debunked as quickly as it was uttered: first by the laughter of the audience and the world, and then, hours later, by Donald Trump himself.

On the one hand, that is a pretty even-keeled response from someone as tantrum-prone as Trump.

Reader question: Can you share some good examples or best cases where data has been successfully used for fact-checking?

Anim van Wyk, Chief Editor, Africa Check: Good data aids good fact-checking, which needs to point out exactly what the data can and can’t tell you. The more limitations, the less certain the answer becomes.

For example, it’s easy to use data from the World Health Organization’s Global Ambient Air Quality database to rank cities according to their pollution levels. But the fine print shows that these entries aren’t comparable. This is due to differences in the methods and quality of measurements – and the fact that some cities suspected to be the most polluted don’t report data to the WHO.

Samar Halarnkar, Editor: Data are [we never use the singular!] the foundation of fact-checking.

One example: The Indian telecommunications minister announced that within a year of taking charge, his administration had ensured that the government-run telecoms behemoth, BSNL, turned an operating profit, after seven years of losses, and added subscribers. After a meticulous examination of data–including right-to-information requests–we found that operating profits did not mean the company had turned profitable; indeed, net losses had increased, and the minister had, conveniently, not mentioned that more subscribers left than were added.

After a new right-wing government took over in 2014, there were many reports of lynchings, especially of minorities, based on violence related to cows, considered holy by many Hindus. The ruling party and its adherents insisted these were isolated incidents that had never been reported before and were not related to the extreme version of Hinduism that they promoted. A debate raged nationwide, poisoning politics and society, made worse by the absence of data–national crime records did not register crimes related to bovines. We created a database of each such crime from 2010 onwards, so that crime patterns before 2014, when the new government took office, could be compared with those after. Our database–now widely quoted in India and abroad–clearly shows that the overwhelming majority of the victims of such lynchings are minorities, in particular Muslims, and that most of the violence has occurred in states run by India’s ruling party.

Image: An interactive database of cow-related violence in India.

Matt Martino, Online Editor, RMIT ABC Fact Check: Politicians in Australia often like to speak about records, both when attacking opponents and spruiking their achievements. A famous example in our unit was when the ruling Coalition Foreign Minister said that when the Opposition Labor Party were last in government, they bequeathed the “worst set of financial accounts” in Australia’s history to their incoming government. This particular fact-check took several months of work sourcing data from the history books on debt and deficit. We were able to find data on federal government surpluses and deficits, plus gross debt, stretching back to 1901, and on net debt handed over to incoming governments back to the 1970s. It’s a great example of where a claimant has used a raw number in place of a percentage that would put the figure in historical context. In this case, experts told us that these figures must be expressed as a percentage of GDP to enable historical comparisons. Ultimately, we found that the Foreign Minister’s claim was wrong, as there were far larger (as a percentage of GDP) inherited deficits recorded during WWII, far larger gross debt inherited in the same period, and far larger net debt bequeathed to a government during the 1990s.
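The normalization the experts recommended can be sketched in a few lines of Python. The figures below are invented purely for illustration (they are not the actual Australian numbers): the point is that a deficit that looks small in raw terms can be a far heavier burden relative to the size of the economy at the time.

```python
# Invented figures for illustration only (not the actual Australian data).
# The same raw deficit can be trivial or enormous depending on the size of
# the economy, so comparisons across decades use deficit as a share of GDP.
deficits = {
    # year: (deficit, GDP), both in billions of the same currency
    1944: (2.0, 14.0),      # wartime: small raw number, small economy
    2013: (48.5, 1560.0),   # modern: large raw number, much larger economy
}

for year, (deficit, gdp) in deficits.items():
    share = deficit / gdp * 100
    print(f"{year}: {deficit}bn raw = {share:.1f}% of GDP")
```

With these made-up inputs, the smaller raw deficit (1944) turns out to be the far larger burden as a share of GDP, which is exactly the distinction the fact-check turned on.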

Dinda Purnamasari, Senior Researcher: Data is the soul of fact-checking. But it’s not just the data; more importantly, the context of the data is what makes our fact-checks more reliable.

First, on 2 May 2017, Jake Van Der Kamp, an economist, published an opinion piece entitled “Sorry President Widodo, GDP rankings are economists’ equivalent of fake news”. In it, Van Der Kamp responded to a statement from President Joko Widodo (Jokowi) that Indonesia’s economic growth was third in the world, after India and China.

“Indonesia’s economic growth is the third in the world, after India and China,” said Indonesian president Joko Widodo.

Van Der Kamp wrote: “GDP is an attempt to emulate the corporate world by putting money numbers on performance but… with GDP you get no equivalents of the corporate balance sheet or profit and loss account and no notes to the accounts.”

Third in the world, is it? What world is that? Within Asia alone I count 13 countries with higher reported economic growth rates than Indonesia’s latest 5.02 per cent.

They are India (7.5), Laos (7.4), Myanmar (7.3), Cambodia (7.2), Bangladesh (7.1), Philippines (6.9), China (6.7) Vietnam (6.2), Pakistan (5.7), Mongolia (5.5), Palau (5.5), Timor-Leste (5.5) and Papua New Guinea (5.4).

But of course President Widodo’s Indonesia is a very populous country with 261 million people. We cannot really compare it with pipsqueak places like Timor or Palau. Thus let’s draw the line at the 200 million people or more.

This gives us six countries across the world and, in terms of economic growth, Indonesia is in the bottom half of these six behind India, China and Pakistan. Try it at a cut-off of 100 million people or more and you still get no luck. Bottom half again.

Way to go, Joko. Don’t let the facts get in the way of a good story. We’ll make a journalist of you yet.
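Mechanically, Van Der Kamp’s population-cutoff argument is just a filter-and-sort. Here is a minimal sketch: the growth rates are the ones quoted in the column above, while the populations and the US and Brazil growth figures are approximate 2016–17 values added here only for illustration.

```python
# Growth rates as quoted in the column; populations (millions) and the
# US/Brazil growth figures are approximate 2016-17 values, for illustration.
data = {
    # country: (GDP growth %, population in millions)
    "India":         (7.5, 1339),
    "China":         (6.7, 1386),
    "Pakistan":      (5.7, 207),
    "Indonesia":     (5.02, 261),
    "United States": (1.6, 325),
    "Brazil":        (-3.5, 208),
    "Vietnam":       (6.2, 95),   # faster growth, but excluded by the cutoff
    "Timor-Leste":   (5.5, 1.3),  # ditto
}

cutoff = 200  # keep only countries with 200 million+ people
ranking = sorted(
    (c for c, (growth, pop) in data.items() if pop >= cutoff),
    key=lambda c: -data[c][0],
)
print(ranking)
print("Indonesia's rank:", ranking.index("Indonesia") + 1)
```

With these inputs, Indonesia lands fourth of six large countries, the “bottom half” the column describes; lowering the cutoff changes the field, but, as the column notes, not Indonesia’s bottom-half position.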


After this opinion became an issue in Indonesia, we decided to verify the data that Jokowi had used. We looked at data from the International Monetary Fund (IMF) and, based on that, concluded that Indonesia was not in third position by general criteria, but instead ranked third among the BRICS and other highly populated countries.

Image: A graph from the fact-check, showing that Indonesia is ranked third out of the BRICS countries.

Second, in early August 2018, the Vice Governor claimed that his policy of odd-even traffic limitation had reduced air pollution in Jakarta. His statement became an issue, and some media even quoted his data. We verified the data using measurements from the Indonesian Agency for Meteorology, Climatology and Geophysics (Badan Meteorologi, Klimatologi, dan Geofisika – BMKG) and the US Embassy. Based on those, his statement was incorrect: average air pollution in Jakarta was still high and did not appear to be decreasing.

Tania Roettger, Head of Fact-Checking Team, Correctiv/EchtJetzt: Fact-checking only works for statements of fact, not opinions. So ideally there is data available to verify claims. We regularly use statistics about topics like crime, HIV rates, or jobs. If there are statistics on a topic, we will consult them. Of course, statistics differ in quality depending on the topic and who gathers the data.

Earlier this year, we debunked the claim that refugees sent 4.2 billion euros to their home countries in 2016. Data from the German federal bank showed that the 4.2 billion euros in remittances actually came from all migrants working in Germany for more than a year, not specifically from refugees. Most of the money, 3.4 billion euros, went to European countries, followed by Asia (491 million) and Africa (177 million).

Image: Correctiv/EchtJetzt rated the statement as four on their seven-point rating scale.

Reader question: Have you seen examples where the same data has been manipulated to support both sides of an argument? If so, how do you ensure that your way of looking at the data isn’t biased?

Anim van Wyk: At Africa Check, we’re fond of the quip that some people use statistics “as a drunken man uses lamp posts – for support rather than illumination”. Depending on what you want to prove, you can cherry-pick data which supports your argument.

An example is the different stances on racial transformation in South Africa, or the lack thereof. A member of a leftist political party said in 2015 that “whites are only 10% of the economically active population but occupy more than 60% of the top management positions.” The head of the Free Market Foundation, a liberal think-tank, then wrote: “Blacks in top management… doubled.”

Both were right – but each presented only a specific slice of the same data source to support their argument.

Again, you need to find out what the data cannot tell you and try to triangulate by using different data sources.

Image: Africa Check’s ‘mostly correct’ verdict means that a claim contains elements of truth but is either not entirely accurate, according to the best evidence publicly available at the time, or needs clarification.

Matt Martino: A great example of this was the debate over “cuts” and “savings” to health and education during the early days of the Abbott Coalition government in Australia. The government argued that they were making a “saving” on health and education by reducing the amount that the previous Labor government had budgeted to spend. Labor, now in opposition, argued that this was in fact a cut. We investigated the figures and found that the Coalition was still spending above inflation, so it couldn’t be called a cut, but the projections the Coalition had made about savings were over such a long period of time that it was difficult to say whether they would come to pass. In the end we called the debate “hot air”.

How do we make sure we’re looking at the data the right way? We always rely on several experts in the field to guide our analysis and tell us the right way to interpret the data. We’re not experts in any of the topics we explore, whilst academics can spend their entire careers researching a single subject, so their advice is invaluable.

Dinda Purnamasari: In our experience, many use the right data, but in the wrong context. The data then loses credibility.

For example, it was reported that PT Telkom (a state-owned telecommunications company in Indonesia) had provided corporate social responsibility funds of around IDR 100 million to a mosque and, in comparison, IDR 3.5 billion to a church.

We found that the numbers (IDR 100 million and IDR 3.5 billion) were right, but the purpose of the funding was misrepresented. The IDR 100 million was granted by PT Telkom in 2016 to pay off the debt from a mosque renovation. The IDR 3.5 billion, on the other hand, was granted in 2017 to renovate an old church that is also a cultural heritage site in Nusa Tenggara Barat.

In this case, again, the context of data becomes an important thing in fact-checking. We must understand the methodology and how the data was gathered or estimated, even by double-checking on the ground, if needed.

Tania Roettger: Crime data is a good example. In 2017, crime rates in Germany went down. But the statistics only show the crimes that have been reported to the police. This has led some politicians to claim that crime has not actually gone down and that the statistics are “fake news“.

When the meaning of data is debated, we consult independent experts to collect arguments about how the data can or should be interpreted. Or we look at alternative sources, for example the surveys some German states conduct with people about the crimes they experienced but did not report. (However, the validity of these surveys is disputed.)

Samar Halarnkar: In this era of fake news, data are often used to reinforce biases.

For instance, there was much self-congratulation when the government claimed that India’s forests grew by 6,779 sq km over the two years to 2017. We found that this was not, strictly speaking, wrong, because that is what the satellite imagery revealed. But what it did not reveal was that these new “forests” included forests converted to commercial plantations, as well as degraded and fragmented forests, and that the health of these forests was being gauged by satellite imagery with inadequate resolution. Indeed, numerous studies had recorded a steady degradation of forests over nearly a century.

Image: This map of forest coverage was not what it seemed. Credit: India’s State of Forest Report (ISFR) 2017.

Indian remote-sensing satellites produce images with a resolution of 23.5 metres per pixel, which is too coarse to unequivocally identify small-scale deforestation and cannot distinguish between old-growth forests and plantations. To make that distinction, India needs imagery with resolution of 5.8 m per pixel.

So, data are not always what they appear to be. They need to be verified and cross-checked, whether with studies, other databases, or ground reporting.

Reader question: How do you fact-check stories or statements when data on an issue isn’t available?

Anim van Wyk: It’s really unsatisfactory to use our “unproven” verdict, but sometimes the evidence publicly available at the time “neither proves nor disproves a statement”, as we define this rating. Still, the absence of data doesn’t mean anything goes in making statements of fact about a topic. We then point out what is known and what isn’t.

Samar Halarnkar: If data are not available–or independently verified data are not available–there is only one substitute: Verification through old-fashioned, shoe-leather reporting.

For instance, India’s Prime Minister once claimed that his government had built 425,000 toilets in schools within a year. With no independent verification, this claim was hard to dispute; obviously, it was impossible to check that 425,000 new toilets had indeed been built in all of India’s schools. But after sending reporters to conduct random verifications in eight Indian states, it quickly became apparent that the Prime Minister’s claim was, to put it plainly, a lie.
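The logic of such random spot checks can be sketched in a few lines. All figures below are invented for illustration; they are not the actual verification data:

```python
# Hypothetical counts for illustration only -- not real verification data.
claimed = {"State A": 50_000, "State B": 40_000, "State C": 30_000}
found = {"State A": 48_500, "State B": 22_000, "State C": 29_000}

def shortfall_pct(claimed_n: int, found_n: int) -> float:
    """Percentage by which the verified count falls short of the claim."""
    return (1 - found_n / claimed_n) * 100

# Flag any state where spot checks find more than 10% fewer toilets than claimed.
flagged = [s for s in claimed if shortfall_pct(claimed[s], found[s]) > 10]
print(flagged)  # ['State B']
```

Even a sample of a few states can falsify an aggregate claim: if the shortfall in randomly chosen locations is large and consistent, the national total cannot be right.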

Matt Martino: RMIT ABC Fact Check tests the veracity of claims made by politicians and public figures in Australia. If someone is making a claim to influence policy, our position is that they should have good evidence to back it up. Lack of evidence is no excuse, so we persevere regardless.

Sure, this often leads to less exciting verdicts, such as “unverifiable” or “too soon to know”, but the verdict is not the be-all and end-all of a fact-check. In these situations, we explore what data is out there, we consult experts in the field for their opinion, and we present it to the audience as best we can so they can see how we’ve come to our decision.

Video: More detail on how RMIT ABC Fact Check finds and checks claims.

Dinda Purnamasari: If the data isn’t available, we rate the claim as unproven, though this verdict is unsatisfying. Before concluding that an issue is unproven, we still explain the verification steps that we undertook. This is because we want citizens to understand that, when we rate a claim as unproven, it means we could not find a credible source for the information.

As an example, one of our politicians stated that the LRT development cost was USD 8 billion per kilometre. After checking reliable and credible sources and finding no such information, we concluded that the issue was unproven.

Tania Roettger: “Knife crime on the rise” is a recent story, but the federal crime statistics do not list crimes committed with knives as a separate category. Some German states do, but they differ in what they count as knife crime. That definitely does not make our work easier.

In cases like this, we source as much information for a claim as is available. If it turns out the material is not sufficient to verify or debunk the claim, we list what is known and clearly state what is missing. If the evidence points in no convincing direction, we give the rating “unproven”. But it is important to keep in mind that those making a claim also carry a burden of proof: if one makes a statement of fact, it needs to be based on evidence. This is one of the things we’re trying to show with our work.

Reader question: Are there any established guidelines for determining the reliability of a data source? How does your organisation determine which data is appropriate to use?

Samar Halarnkar: We do not have established guidelines. In general, we consider whether the data source is reliable. Sometimes it might not be entirely reliable (for example, a government source), in which case we use the data but cross-check with experts, independent studies and/or our own checks. Some public databases are largely reliable: for instance, government-run databases on health, farming and education. We do not use data that have previously proven to be compromised or doubtful.

Matt Martino: We don’t have any hard rules around it, but generally the source should be a non-partisan organisation. In Australia, we rely heavily on data from the Australian Bureau of Statistics, a government organisation with a reputation for providing objective data on a range of issues. This is an example of a good source.

When considering a source, it’s always pertinent to ask: “what is their agenda?” If their motivations for providing data might influence the data in a partisan way, it’s best to leave it alone. As always, it’s a good idea to consult experts in the field on what is the best source to use in verifying a claim.

Dinda Purnamasari: Since we already know that every dataset has its own nature (context, methodology, and so on), we have established a standard for the secondary data we use. Our first tier of sources comprises the Government Statistics Bureau, ministries and local governments, company financial reports, and the stock exchange. As a second tier, we use international organisations, verified and credible journals, consultants and research companies, as well as national or highly reputable news agencies. Although we have this standard, we also cross-check information by consulting experts in the field, so that we use the best sources.

Tania Roettger: When we’re investigating a claim, one task is to understand what exactly a given piece of data can tell us. We establish how and why it was collected, what it contains and what it excludes. Usually we note the shortcomings of a statistic in the article. Whenever we are uncertain about the evidence we have gathered, we discuss the issue among our team.

Anim van Wyk: There’s no way round studying the methodology by which the data was collected. This must then be discussed with experts to get their input. And all data sources, even those considered reliable, have limitations, which have to be highlighted.

Reader question: What do you think about the potential of automated fact-checking?

Samar Halarnkar: I am sure it has immense potential, but this requires coding expertise that we do not currently have.

Tania Roettger: There are several ways in which automation could help the fact-checking process: extracting fact-checkable claims from speeches or sourcing relevant statistics and documents from a data-pool, for example. But so far we have not experienced or heard of a tool that would do our work for us.

Image: An overview of how automation could aid fact-checking, from Understanding the Promise and Limits of Automated Fact-checking, by Lucas Graves.

Matt Martino: It’s an interesting area, but one which is currently undercooked. Parsing language is a big part of what we do at Fact Check, and machines are not yet capable of interpreting a great deal of the nuance in language. That being said, anything that allows greater access to the facts in a debate for audiences would be a good thing.

One area where there is already enormous potential is in searching government sources such as Hansard and budget papers to identify potential claims to check and key data.

I think that, like a lot of AI, there’s a long way to go, and we’ll be watching this space intently.

Anim van Wyk: The tools I’ve seen are helpful in monitoring important sources for claims to fact-check, such as transcripts from parliament. But I’m quite hesitant about fact-checks without any human intervention as nuance plays such a big role. The potential of getting it completely wrong when you are the one claiming to be correcting claims is not worth the potential credibility loss, in my opinion.

Dinda Purnamasari: It is very interesting, and could make the fact-checker’s work easier. But for us, there is still a long way to go. More importantly, providing context for data is something I am sure is still hard for a machine to do.

Reader Question: What are some of your go-to data tools?

Anim van Wyk: You can’t beat a good old spreadsheet. For illustration purposes, we keep it simple by using Datawrapper.

Samar Halarnkar: We use Tabula for extracting tables from PDFs. For analysis, we depend on Excel/Google Sheets and Tableau depending on the size and type of the dataset. For visualisation, we work primarily with Google Sheets, Datawrapper, Infogram and Tableau. We also use Google My Maps and CartoDB for some maps.

Matt Martino: We use Excel or Google Sheets for simple analyses; for more complex ones I use RStudio, which is more powerful and can handle much larger datasets. It requires coding knowledge, but the training is well worth it.

In terms of visualisation, we’ve tried many different platforms throughout the years, but Tableau Public has emerged as our go-to. Its abilities in formatting, design, calculation and visualisation are pretty much unrivalled in my opinion, and we’ve been able to create really interesting and rich visualisations using the platform, like those seen here and here.

Dinda Purnamasari: For analysis, we use Excel, SPSS, and other statistical tools. It really depends on the purpose, size and type of our data and analysis. For visualisation, we use Adobe Illustrator, Datawrapper, etc.

Want to participate in future Ask Me Anythings? Sign up to the European Journalism Centre’s data newsletter here.