Columbia Journalism Review’s Galley – A New App Providing a Space for Thoughtful Conversations


The central area of the newsroom at USA Today in McLean, Virginia, on March 8, 2017. (Photo by Greg Kahn)

|JON ALLSOP, CJR|AIWA! NO!|Yesterday, CJR opened Galley, an app created last year by Josh Young and Tom McGeveran. In bringing Galley under the CJR mantle, we’re creating a new space for thoughtful conversations about journalism and opening up the debates we have every day in the newsroom. We’re inviting you all to join in.


“It is one of the biggest frustrations of the media moment in which we live: precisely when there is so much in journalism to discuss, the places we can have those conversations seem inadequate,” Kyle Pope, CJR’s editor and publisher, writes. “Reader comments sections grew toxic; many outlets did away with them. Email to a generic address seems too impersonal. Facebook is too generic and politically fraught. And Twitter, where most of the journalistic conversation still happens, is a useful but chaotic place, mined with booby traps, jabbing, and outrage—not a forum for nuanced, thoughtful exchange. And yet that is what we all so desperately need.”
 
Galley is a platform based on trust: you choose the people you want to engage with in any given conversation. That doesn’t mean you can’t get on your soapbox—if you want to, you can make any post open to the public. The idea is to vary the kinds of engagement you have. “Different conversations and topics can be open to different groups of people, depending on your mood, on the subject matter, on who else is involved,” Pope writes. “It’s all entirely up to you.”
 
We’ll be using Galley in a variety of ways at CJR, posting exclusive interviews, AMAs, and conversation threads on media stories—from CJR.org and further afield. Yesterday, Pope chatted with Kevin Delaney, editor in chief of Quartz (which just put up a partial paywall), about membership models in journalism. CJR’s Mathew Ingram and Nausicaa Renner sparked a discussion about crowdsourced news in light of WikiTribune’s decision to lay off all of its professional journalists. And Christine Rushton, a digital editor for 17 Northern California newspapers, posted updates on newsroom morale since wildfires have devastated the state.
 
To join in, or just to watch, go to this link or download Galley from your app store. Check out our users’ guide here. And bring your friends. Galley can only thrive if you help us grow it. See you online.

 
Other notable stories:

  • Mic, the millennial-focused digital media company, laid off all but one of its editorial staffers yesterday, then was sold to Bustle for a paltry $5 million. When CJR’s Ingram reported in September that Mic was in big financial trouble, Chris Altchek, its cofounder, angrily slapped him down. Yesterday, Ingram reflected on why he was ultimately proven right. “Did Facebook pull a bait-and-switch on video to some extent? No doubt. But no one forced Mic, or any other company suffering as a result of the same strategy, to shift so much of their spending to Facebook video, or to get their hopes up about a huge payoff,” he writes. “You could argue that they were driven to do so by desperation, fueled by declining ad revenues that are primarily being hoovered up by Google and Facebook. But ultimately a company is responsible for its own decisions.”
     
  • Staffers at two other digital outlets, Mashable and PCMag, asked Ziff Davis, their owner, to recognize their efforts to unionize, HuffPost’s Dave Jamieson reports.
     
  • If it wasn’t already, it’s now official: the Robert Mueller news cycle is back in top gear. Yesterday, out of nowhere, ABC’s George Stephanopoulos broke the news that Michael Cohen, Donald Trump’s former lawyer, was pleading guilty to lying to Congress about the extent of Trump’s business interests in Russia, which continued well into the 2016 presidential campaign. On Twitter, many hats were tipped to BuzzFeed’s Anthony Cormier and Jason Leopold, who nailed key details back in May. Cormier and Leopold were not done there: last night, they revealed that the Trump Organization planned to give Vladimir Putin a $50 million penthouse in its abortive Trump Tower Moscow project.
     
  • The Wall Street Journal had this late contender for correction-of-the-year yesterday: “Vladimir Putin is president of Russia. An editing mistake erroneously identified him as Vladimir Trump.”

Google and the Great Firewall of China: Xi Jinping’s Internet Shutdown

Flowers left outside Google China’s headquarters after its 2010 announcement that it might leave the country. Photo: Wikimedia Commons.

  • Drop the Dragonfly programme and publicly commit not to re-launch a search engine in China at the expense of human rights.
  • Guarantee protections for whistle-blowers and other employees speaking out.

|CRIMSON TAZVINZWA, AIWA! NO!|Internet censorship in China is among the most extensive in the world, resting on a wide variety of laws and administrative regulations. The government of China has created more than sixty online restrictions, which are enforced by provincial branches of state-owned ISPs, companies, and organizations.

Back in 2010, Google made a promise: the largest search engine in the world vowed that it would never support China’s internet censorship. Skip forward to August 2018, however, and it’s a different story. It has been revealed that Google is preparing to go back on its word.

Before President Xi Jinping, the internet was becoming a more vibrant political space for Chinese citizens. But today the country has the largest and most sophisticated online censorship operation in the world.

Guardian

Under the code-name ‘Project Dragonfly’, Google has been working on a secretive programme to re-launch its search engine in China – even if it means cooperating with the Chinese government’s repressive online censorship and surveillance rules.

People using Google in China would be blocked from accessing banned websites like Wikipedia and Facebook. Content from search terms like ‘human rights’ would be banned. 

The Chinese government would even be able to spy on Google’s users – this is a government that routinely sends people to prison for merely sharing their views online.

If Google is willing to trade human rights for profit in China, could it do the same in other countries?

Stand in solidarity with the staff members at Google who have protested the project and tell Google CEO Sundar Pichai to #DropDragonfly – before it can be launched.  


Global collaboration is a success story for journalism, whatever its impact

International Media Collaboration

|JON ALLSOP, CJR|AIWA! NO!|The International Consortium of Investigative Journalists broke its latest big story over the weekend. Bringing together more than 250 reporters from 36 countries and 59 news organizations—including the BBC, NBC News, the AP, Le Monde, and Süddeutsche Zeitung—the group has started to unveil massive problems plaguing the global medical devices industry. ICIJ and its partners flagged more than 1.7 million injuries and 83,000 deaths linked to implants such as pacemakers, breast implants, and spinal cord stimulators, which manufacturers move around the world as regulators flounder and patients and doctors are left in the dark.
 
The devices investigation was born out of the work of Jet Schouten, a Dutch reporter who, in 2014, asked European regulators to approve what she claimed was a vaginal mesh, but was actually the netting used to hold mandarin oranges at the grocery store. (None of the three bodies Schouten approached took serious issue with her fake product.) Late last year, based on this reporting and years of arduous follow-up work, ICIJ approved a global look at the devices industry. For the past five months, I sat inside its investigation for CJR, hanging out on conference calls, interviewing partner journalists in 11 countries, and spending time with Schouten in the Netherlands.
 
The operation I observed was flush with confidence and camaraderie, and deeply impressive. That should not be surprising: ICIJ and the collaborative model it pioneered are having a moment. Two years ago, the group dropped the Panama Papers, a massive leak of offshore tax documents that exposed the accounting tricks of the rich and powerful and landed with a big global splash, implicating a succession of world leaders. The effort sparked the resignation of Iceland’s prime minister, then won a Pulitzer, then inspired a nascent Netflix movie that is set to star Meryl Streep. Last year, ICIJ followed up with the Paradise Papers, a second leaks story drawing on 13 million more offshore records.
 
ICIJ has been around for 21 years, during which it has worked on many different types of story. The medical devices project was nonetheless a departure from its acclaimed recent work, and thus a fresh test of its model. Could a collaboration based on painstaking (and often frustrating) shoe-leather reporting and public-records analysis work at the same grand scale as an investigation rooted in a single, centralized leak? And could a consumer-affairs story have the same impact as the salacious secrets of the world’s super-rich?
 
While every indication suggests the project navigated its technical complexities smoothly, the impact question remains open. Industry and regulators are already paying attention to ICIJ’s findings: yesterday, the US Food and Drug Administration promised to overhaul its device approval rules. Change, however, comes more easily in some countries than in others. As Lebanese journalist Alia Ibrahim told me, while many ICIJ partners were “waiting for the earthquakes that are going to happen once they publish,” in Lebanon, “I could give you 100 examples of how investigations proving corruption, proving malpractice, didn’t lead to anybody being held accountable.” And, globally speaking, ICIJ only deals with regulatory failures that are both widespread and entrenched—and, therefore, likely to be persistent.
 
As splashy as the Panama Papers were, efforts to overhaul global tax architecture in the time since have largely failed. ICIJ can’t force change, no matter how many journalists it might corral behind its work. But that isn’t the point of the organization. ICIJ is like any top-class individual newsroom, only much bigger. Its model empowers news organizations the world over to shine a spotlight into deep darkness, then joins those spotlights together to make a powerful single beam.
 
Below, more on ICIJ’s latest investigation: 

  • The Implant Files: You can find all ICIJ’s stories here, its overview of the global medical-devices industry here, and a full list of partners here.
     
  • Under the skin: In my piece for CJR, I go into much more detail about how ICIJ followed through on the project, and what it means for the organization going forward.
     
  • In the US: For the AP, Meghan Hoyer looks at problems with breast implants, and Mitch Weiss and Holbrook Mohr lay out how spinal cord stimulators have left some patients with agonizing injuries. For NBC News, meanwhile, Andrew W. Lehren, Emily R. Siegel, and Sarah Fitzpatrick track how US-made devices export pain overseas, and, conversely, how devices withdrawn from the market overseas can remain on sale in the US.  
     
  • “All Meshed Up”: Watch Schouten and her colleagues at Dutch public broadcaster AVROTROS pass off mandarin orange netting as a vaginal mesh here.

Facebook In Turmoil: Denial, Tension And Finger-Pointing As Crisis Builds


“It’s total arrogance,” one Facebook employee said of company leadership’s willingness to blame its communications team for recent crises.


Facebook founder Mark Zuckerberg appears before the House Energy and Commerce Committee about privacy and election meddling on Capitol Hill on April 11, 2018. David Butow / for NBC News

By Dylan Byers, NBC News

|AIWA! NO!|As challenges to Facebook mount from consumer organizations, politicians and journalists, the company’s leadership remains convinced that its recent crises are primarily public relations problems, according to people at the company.

Mark Zuckerberg, Facebook’s chief executive officer, and Sheryl Sandberg, the company’s chief operating officer, believe Facebook’s negative image is a public relations problem that stems from a bungled press strategy and sensational media coverage, not a structural or philosophical shortcoming that requires a wholesale course correction, six Facebook sources familiar with their thinking told NBC News. The sources asked not to be identified because they were not authorized to speak publicly.

As a result, some inside Facebook believe the company’s leaders are likely to respond to the current controversy in the near-term by revamping their communications strategy, not by making drastic changes to personnel or the platform.

To critics from Silicon Valley to Capitol Hill, that is likely to be seen as a continuation of the “delay, deny and deflect” strategy detailed by The New York Times that got the company into hot water in the first place.

Facebook COO Sheryl Sandberg testifies before the Senate Intelligence Committee on Capitol Hill on Sept. 5, 2018. Jim Watson / AFP – Getty Images file

In recent days, Zuckerberg and Sandberg have both publicly blamed the company’s communications team for the decision to hire a conservative public relations firm that included what one former employee called an “in-house fake news shop.” Both leaders also publicly claimed ignorance about the decision, even as Sandberg privately told staff that she “fully accepted responsibility.”

In a company-wide meeting on Friday, Zuckerberg blamed the media for fueling “bad morale” and called “bulls—” on The New York Times report, which insinuated that the company had tried to cover up its problems with Russia-based disinformation efforts. He also said he would not hesitate to fire employees who leaked information to the media.

Internally, the leadership’s decision to blame the media and the press shop has driven a wedge between them and members of the communications team who feel as if they’ve been thrown under the bus, the sources said.


“It’s total arrogance,” one Facebook employee said. “Everyone is pissed.”

On Sunday night, a Facebook spokesperson told NBC News that the leadership “takes full responsibility for the issues we’re facing. They’ve been vocal about that internally and externally. No matter where people sit at Facebook, everyone wants to move forward — and that’s our plan.”

In recent months, Zuckerberg has taken a war-like attitude toward dealing with Facebook’s problems and with its PR strategy, according to a Wall Street Journal report. Zuckerberg, 34, believes his company didn’t move quickly enough to handle its problems in the past and has “expressed frustration at how the company managed the waves of criticism it faced this year.”


But Facebook’s critics worry that the leadership has yet to internalize the full scale of the problem.

“It’s important for Facebook to recognize that this isn’t a public relations problem,” Sen. Mark Warner told The New York Times on Sunday. “It’s a fundamental challenge for the platform and their business model.”

U.K. Parliament Seizes Facebook Documents As Part Of Ongoing Inquiry

Facebook CEO Mark Zuckerberg appears before the House Energy and Commerce Committee in Washington on April 11, 2018. David Butow / Redux for NBC News file


The seizure of the documents comes after Mark Zuckerberg declined to appear in London on Tuesday before officials investigating disinformation and election interference.

|Saphora Smith and Olivia Solon, NBC NEWS|AIWA! NO!|LONDON — British lawmakers have obtained documents that could be “highly relevant” to an inquiry that has been looking into Facebook’s response to disinformation, a spokesperson told NBC News on Sunday.


Cambridge Analytica, the Trump campaign-linked data firm under fire for sweeping collection of Facebook data, issued an expanded statement Thursday about its practices in the 2016 US presidential election.
In the statement, the firm reemphasized its claim that it “did not use Facebook data from research company GSR on the 2016 presidential election,” a reference to Global Science Research, which gathered up data en masse on behalf of Strategic Communication Laboratories, the parent company of Cambridge Analytica, according to a report in The Intercept last March.
Cambridge Analytica’s statement said the Trump campaign hired it in June 2016, and from “August onwards,” its data team used Republican National Committee voter files, polling, the Trump campaign itself, voting returns released by states and “consumer data available from commercial brokers.”

CNN Politics

The documents reportedly contain revelations that Facebook has been fighting to keep out of the public domain relating to the company’s data and privacy policies that led to the Cambridge Analytica scandal, The Observer newspaper in London reported Saturday.


Alexander Nix, former Cambridge Analytica CEO

The Observer reported that the files, which it said include correspondence from Facebook CEO Mark Zuckerberg, were seized from the founder of a U.S. software company, Six4Three, which is engaged in legal action against the tech giant.

NBC News could not confirm the details of the report, but a spokesperson for the parliamentary committee conducting the investigation confirmed that it had obtained potentially useful documents for its inquiry.


“The committee used a parliamentary order to obtain documents that could be highly relevant to the inquiry,” a spokesperson for the Department for Digital, Culture, Media and Sport (DCMS) select committee told NBC News.

A spokesperson for Facebook told NBC News late Saturday that “the materials obtained by the DCMS committee are subject to a protective order of the San Mateo Superior Court restricting their disclosure.”

Six4Three filed a complaint against Facebook at the Superior Court of California, County of San Mateo, in 2015. According to court documents, the company accuses Zuckerberg of attempting to “deliberately” mislead tens of thousands of software companies into “developing applications that generated substantial user growth and revenues for Facebook.”

NBC News has reached out to representatives for Six4Three for comment.

For two years Facebook has been rocked by crises involving covert Russian propaganda, the mishandling of millions of users’ personal information and the hiring of a public relations firm that had what one former employee called an “in-house fake news shop.”

Sources have told NBC News that Zuckerberg and Sandberg believe Facebook’s negative image is a public relations problem that stems from a bungled press strategy and sensational media coverage, not a structural or philosophical shortcoming that requires a wholesale course correction.


The British parliament’s seizure of the documents comes after Zuckerberg declined to appear before an international coalition of elected officials investigating disinformation and election interference that is scheduled to meet in London on Tuesday.

Representatives from the U.K., Canada, Australia, Ireland, Argentina, Brazil, Singapore and Latvia invited Zuckerberg to give evidence at a meeting at the Houses of Parliament, but Zuckerberg declined.

British lawmaker Damian Collins — who is the chairman of the DCMS committee tasked with investigating disinformation and assembled the international coalition — told NBC News Zuckerberg was “frightened of being exposed.”

Collins said “the really big question” he wanted to ask Zuckerberg was, “what did he know about the concerns about data privacy?”


Collins assembled the “International Grand Committee” partly as a response to Zuckerberg’s insistence that he was too busy to visit individual national parliaments to answer further questions about Facebook’s efforts to crack down on the misuse of its platform.

In the wake of the Cambridge Analytica scandal, which triggered global scrutiny of Facebook’s data collection practices, Zuckerberg has answered to lawmakers in public only twice: before Congress in April and the European Parliament in May.

Facebook has offered to send Richard Allan, its vice president of policy solutions, to next week’s hearing in Zuckerberg’s place.

“It makes it look like he’s got something to hide and he’s worried that we may have information and questions we could put to him that would put him in a difficult position,” Collins said.

“He’s deliberately avoiding that sort of scrutiny.”

YouTube, Facebook, Instagram, Soundcloud And More Face Article 13, New Copyright Legislation Drafted By The European Parliament; And There Are Huge Consequences For Everyone: Content Creators And Consumers Alike


Article 13 is part of European copyright legislation created with the intent to better protect creativity and find effective ways for copyright holders to protect their content online.

We support the goals of Article 13, but the version written by the European Parliament could have large unintended consequences that would change the web as we know it.

Will this spell the end of YouTube as we have known it? That can’t be right; there must be a better way.

There’s a better way. Learn more and make your voice heard.

  1. What is Article 13?
    • Article 13 is one part of proposed European Union (EU) copyright legislation created with the intent to better protect creativity and find effective ways for copyright holders to protect their content online. (Official text here).
    • To be clear, we support the goals of Article 13 and its push to help creators and artists succeed; we want more effective ways for copyright holders to protect their content. But Article 13, as written by the European Parliament, will create large unintended consequences for everyone, so we’re asking to find a better way forward.
  2. What’s the status of Article 13?
    • On September 12th the European Parliament voted to move forward with Article 13.
    • However, Article 13 is not yet a law. The language is being drafted and revised in the EU’s trilogue negotiations between representatives from the European Commission, Parliament and Council.
    • This language could be finalized by the end of the year, and EU member states may have up to two years to make the directive into national law.
  3. What changes with Article 13?
    • The proposed version of Article 13 would eliminate the existing notice-and-takedown system currently in place to protect rightsholders and platforms. This would make platforms such as YouTube, Facebook, Instagram, Soundcloud, Dailymotion, Reddit and Snapchat liable – at the moment of upload – for any copyright infringement in uploads from users, creators and artists.
    • This in turn would mean that platforms including YouTube would be forced to block the vast majority of uploads from Europe and views in Europe for content uploaded elsewhere given the uncertainty and complexity of copyright ownership (more on this below).
  4. What would be the impact if the European Parliament version of Article 13 passes?
    • The risks associated with accepting content uploads with partial or disputed copyright information would be far too large for platforms such as YouTube.
    • As a result, YouTube would be forced to block millions of videos (existing and new ones) in the European Union. It could drastically limit the content that one can upload to the platform in Europe.
    • Creators would be especially hard hit. Videos that could be blocked include: educational videos (from channels such as Kurzgesagt in Germany and C.G.P. Grey in the UK), a large number of official music videos (like Despacito from Luis Fonsi or Mafioso from Lartiste), fan music covers, mashups, parodies and more.
    • As such, Article 13 threatens hundreds of thousands of jobs, European creators, businesses, artists and everyone they employ.
  5. What does this mean for me as a YouTube creator or artist in the European Union?
    • YouTube and other platforms may have no choice but to block your existing videos and prevent you from uploading new ones in the European Union unless you can prove you own everything in your videos (including visuals and sounds).
  6. What does this mean for me as a YouTube creator or an artist NOT in the European Union?
    • YouTube and other platforms will likely block your videos (including existing ones) to users in the European Union if there is partial or disputed copyright information.
  7. What types of copyrighted content would I not be able to use in my videos?
    • Examples of copyrighted material possibly impacted in your videos include images, artwork, software, excerpts from books, music, parodies and much more. (Read more here).
  8. Why aren’t copyright matching tools like Content ID enough?
    • With Article 13 as currently written, copyright matching tools like Content ID wouldn’t help platforms such as YouTube to keep content on the platform.
    • Content ID works if rightsholders use it and provide clarity as to what belongs to them. However, in many cases information on copyright ownership is missing, or there is partial knowledge, meaning that no system could accurately identify full copyright information at the point of upload.
    • Put simply, a piece of content with partial or unknown ownership is – to YouTube – treated the same as a piece of content that is unlicensed and so would have to be blocked.
  9. Is there a better way forward with Article 13?
    • Yes! We’re asking lawmakers to strike the better balance we all need: one that protects against copyright violations while still enabling European users, creators and artists to share their voices online. In order to do that, we need a system where both platforms and rightsholders collaborate.
    • What this means in reality is three things:
      • Rightsholders should work with platforms to identify the content they own, so the platforms know what is protected under copyright and can give rightsholders control to block if they choose.
      • Platforms should only be held liable for content identified to them using tools like Content ID or through notice and takedown.
      • Platforms and rightsholders should negotiate in good faith where licenses and rights can be easily identified.
  10. What can I do to help find a better way forward with Article 13?
    • European representatives are still working on the final version of Article 13 and there is time to work together towards a better path forward.
    • The European policymakers involved in negotiating Article 13 need to hear and see that real people could be negatively impacted if Article 13 goes into effect as written by the Parliament! That’s why we need you and your fans to make your voice heard now by:
      • Making a video about Article 13
      • Tweeting about Article 13 with the hashtag #SaveYourInternet
      • Joining the movement at youtube.com/saveyourinternet
  11. What’s up with other players? Is YouTube alone in this fight?
  12. Which countries would be directly impacted by Article 13?
    • All member states of the EU: Austria, Belgium, Bulgaria, Croatia, Republic of Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden and the UK (at least for now, here’s more about Brexit).
  13. One last thing. What are common misunderstandings about Article 13?

International Declaration on Information and Democracy: governments need to open up the information and communication space. And more …

– International Declaration on Information and Democracy –

In a historic step in the context of the Paris Peace Forum today, 12 countries launched a political process aimed at providing democratic guarantees for news and information and freedom of opinion – a process based on the declaration issued last week by an independent commission that was created at the initiative of Reporters Without Borders (RSF).

|JAVIER PALLERO, accessnow|AIWA! NO!|On November 2, an independent commission set up by Reporters Without Borders published a new declaration on issues relevant for human rights in the digital era. The “International Declaration on Information and Democracy: principles for the global information and communication space” addresses difficult and pressing issues such as misinformation, privacy, and the role of tech intermediaries in ensuring freedom of expression.

The declaration, endorsed by a number of important figures in journalism and human rights, has valuable references to freedom of the press and the protection of journalists, and it calls for a better technological ecosystem for information exchange. Today at the Paris Peace Forum, 12 countries launched a political process aimed at providing democratic guarantees for news and information and freedom of opinion – an initiative based on the declaration.

While we share that goal, our analysis offers a word of caution with regard to the recommendations on the role of internet information intermediaries. We explain why this part of the declaration may be problematic for the freedom of expression online if poorly implemented or interpreted by decision-makers.

A necessary call for better conditions for journalism

At the request of Reporters Without Borders (RSF), the Eiffel Tower’s lights were turned off for a minute at 6:30 p.m. today on the eve of the International Day to End Impunity for Crimes Against Journalists to pay tribute to Saudi newspaper columnist Jamal Khashoggi and all the other journalists in the world whose murders have so far gone unpunished.

The declaration takes stock of the current challenges for the free press, which are shared by traditional and digital journalism. It reinforces the key role that journalists play in democratic societies, and makes a call to increase their safety. From our point of view, this clearly includes strengthening digital security, a challenge that journalists face in light of the illegal eavesdropping by both governments and private actors. Journalists need to be able to rely on technology that works for them and protects their sources. That’s why we view the protection of strong encryption as fundamental for the work of journalists, and we commend the declaration’s call for privacy for those participating in the public debate.

Privacy facilitates the exercise of the freedom of expression, which comprises the right to impart and receive information. Both technology and the press play an important role in facilitating our access to information in the public interest. The declaration recognizes this and stresses the social function of the press. We add that our ability to access the internet in times of political and social unrest is also essential in fulfilment of that role. Therefore, states should abstain from ordering internet shutdowns or blocking applications. Despite growing public awareness of such network interference, this dangerous trend is nevertheless escalating, as we recently indicated in a joint report to the United Nations Human Rights Council. We also call for increased attention to the wave of repressive legislation that is targeting online expression and putting journalists’ work and lives at risk.

Another laudable inclusion in the declaration is its call for further transparency. This includes transparency as a means of improving the quality of information but also as a way to understand more about how the content curation algorithms in digital platforms work.

Cautions and considerations regarding free expression

The declaration raises concerns about issues including liability for content dissemination, bias in digital platforms, and the proliferation of misinformation on the internet. We acknowledge and share those concerns. However, we worry that some parts of the declaration may be misinterpreted by decision-makers to adopt solutions that, without further analysis, could harm free expression.

Liability for expression — some important distinctions

The declaration makes note of liability for those participating in the public debate, particularly for content they disseminate or “help to disseminate.” There are critically important distinctions to be made in this area in order to avoid ill-informed implementations of this idea. First, there are technical intermediaries on the internet that help disseminate content, but, as a general rule, should not be held liable for third-party expressions. That is the case with regard to hosting and domain name providers, for instance, which do not participate in the curation or prioritization of content and merely provide technical infrastructure to web pages and apps to function. Legal sanctions for these intermediaries for the content they host would represent a disproportionate measure at odds with internationally recognized human rights principles.

When we consider social media platforms, there is no clear solution and any efforts in the area must be evidence-based. When platforms use algorithmic curation of content, it implies making a decision about the dissemination of information, but that decision is typically informed not only by the creators of the algorithm but also by the conduct of users. Further, design choices and decision-making for curation that rewards user engagement may create an incentive for companies that use these platforms for advertising to track and surveil users, which implicates other rights. The bottom line is that we need more information to understand how content consumption and dissemination really works. Before we engage in any public policy consideration of liability for digital intermediaries on content, which raises clear and significant risks for free expression, we must have clarity on the extent to which different actors in the information ecosystem exert influence over content creation and dissemination.

Neutrality — what kind?

The declaration also calls for “political, religious, and ideological neutrality.” It states that platforms should be neutral on those issues when “structuring the information space.” While we understand the concerns regarding possible bias in the curation of content, public policy actions based on the call for neutrality in the ”structuring” of the information space may leave room for abuse if important questions are not answered first. There is no doubt that arbitrary discrimination is an obstacle for the exercise of free expression. But, what could neutrality mean in the digital information context? Would that mean equal treatment for different kinds of information that are fed into a curation algorithm? Or would that mean striving for an ideal of a balanced output in search results or social media feeds? The definition of neutrality, as we can see, can be tricky. It implies a neutrality of information input, treatment, and output that is hard to achieve across diverse information systems. Take a search engine, for instance, and compare it with a social media service. A search engine indexes a broader range of information not directly influenced by the user, but its processing and presentation of search results is indirectly influenced by user behavior. That’s how search services offer personalized results. Should a search engine’s neutrality efforts be focused on non-discriminatory crawling of sources? Or should it be non-discriminatory in the processing and presentation of results? How is neutrality in a search engine compatible with user personalization? If this is a matter of degree, how much personalization or neutrality is enough, and who gets to decide that?

The question of “neutrality” for social media platforms is perhaps even more complicated. Users themselves input content, and users tend to follow the people and pages that they like. The choices they make reflect their own ideas, religious beliefs, and more. Should companies or governments intervene in the choices of users? To what degree? Should some content or user posts be sponsored to promote “neutrality” or diversity of opinion? Who makes that decision?

The information ecosystem today has characteristics that appear to be promoting polarization and reactivity, which in turn can have a negative effect on democracy. However, confronting this challenge will take much more than asking companies for “neutrality.” It requires addressing business models, information literacy, design for user choice, and social and educational problems. Consider the reports about the use of WhatsApp, a closed communication channel, to spread misinformation in Brazil before the recent elections. This could be considered a “neutral” channel since there is no algorithmic prioritization of the messages that run through the platform. Yet in the broader context of the information ecosystem in Brazil, including the dominance of this channel because WhatsApp is often “zero-rated” and therefore free to use, its use may also have increased the challenges for information diversity and fact-checking.

We agree with the declaration’s emphasis on the idea that with greater influence comes more responsibility and a corresponding need for increased transparency. However, given the considerations outlined above, assigning editorial responsibility or possible liability may not be an appropriate answer in all cases. Platforms should instead provide users, by default, with effective tools to exert the maximum amount of control over their information experience. This could include options such as letting users turn off prioritization in a news feed or adjust it to their own preferences, or disable tracking and behavioral advertising. This might represent the type of “neutrality” for platforms that would benefit users.

“Reliable” information — a difficult quest in the digital space

Finally, the declaration’s call for platforms to favor reliable information also raises complex issues for free expression. The declaration recommends transparency, editorial independence, verification methods, and journalistic ethics as tools in this endeavor. In addition to the challenges we explore above related to editorial responsibility, there are also challenges when it comes to a platform’s use of verification methods and journalistic ethics. The expression of opinion is protected as a fundamental human right, and opinion pieces are not necessarily “verifiable.” Speculation, theorizing, satire, and opinion present challenges to fact-checking, online or off. It is also vital that neither states nor companies define journalistic ethics. On a number of social media platforms, one’s news feed contains a mix of personal opinion, news items, editorials, and advertising. Although journalistic ethics could play a role in the design of a news feed or help inform the development of a content curation algorithm, independent, human-rights-based human intervention is essential to mitigate the spread of misinformation on communication platforms.

Conclusion: in assigning responsibility, take care not to deputize platforms as guardians of truth or neutrality

All the issues we have explored are difficult, and a thorough analysis of all their implications would exceed the bounds of this post. The challenges the declaration seeks to address are only starting to be adequately researched and there is a need for more information from internet platforms.

However, we can start with one initial recommendation to those seeking to apply the content of the declaration to public policy decisions: avoid deputizing social media companies or any internet intermediary as a guardian of the truth or neutrality, as this risks consequences for free expression and other protected human rights. Social media platforms, and the dominant players in particular, must take heed of their responsibility to consider the human rights impacts of their products. If by encouraging them to take more responsibility, we also make them the arbiters of truth, however, we put those same rights at risk. And we transfer even more power from the people to dominant platforms.

Today, people access, create, share, comment on, and react to information in complex ways. In the challenges that this poses for our democracies, we must find solutions that empower us to deal with information in a constructive, but also fundamentally free way. This means putting users in control, by giving them more options for how they find, consume, and share content free from manipulation. It also means providing more transparency, especially with regard to ads, including political advertising. Finally, it means looking at the bigger picture and developing business models that do not reward poor quality information that increases “engagement” by playing on basic human instincts of fear, alarm and discord.
