How credible are the most retweeted tweets that contain visuals about COVID-19?

Over the last few years, there has been a rapid rise in the amount of fake news shared online, especially on Twitter (McGonagle, 2017; Lazer et al., 2018). This is a serious issue, since fake news can manipulate the public’s perception of reality and change attitudes (Damico, 2019). With the rise of the COVID-19 pandemic, it is important that the public is informed with accurate information (Hou et al., 2020). However, the amount of fake news spread about the coronavirus has increased (Apuke & Omar, 2020). This is problematic, since it can endanger public health (Europol, 2020; Waszak et al., 2018). Therefore, we investigated whether tweets about COVID-19 contain accurate information.

Information overload and digital images

Over the last few years, there has been a rapid rise in the number of posts shared online. Social media users receive an endless flow of information. This can lead to information overload, which has become a major problem in modern society (Zhang, Ding & Ma, 2020). “Information overload occurs when the amount of input to a system exceeds its processing capacity” (Toffler, 1984, p. 21). To gain attention from users, content needs to stand out in some way. Almost 3.2 billion digital images are shared every day (Meeker, 2016). Multiple studies state that adding images or videos to a social media post helps it stand out from text-only posts. The presence of a photo in a tweet also increases the number of likes and retweets (Li & Xie, 2020). For example, a tweet containing a photo gets 18% more clicks, 89% more likes, and 150% more retweets than one without (Cao et al., 2020). In addition, visuals may help consumers process the content much more easily (Cook & Lewandowsky, 2011).

Images and retweets enhance credibility

Information that contains photos is perceived as more credible. This is because people often believe that photos provide evidence that an event occurred, even when they support a false claim (Kelly & Nace, 1994; McCabe & Castel, 2008). Unfortunately, fake news also exploits this advantage: it usually contains misrepresented or tampered images or videos to attract and mislead consumers (Cao et al., 2020). In addition, users assess the credibility of tweets by looking at the number of retweets. According to Morris et al. (2012), a high number of retweets increases perceived credibility. Consequently, tweets that contain photos, even tampered ones, will receive more retweets, and those retweets are in turn used as a measure of credibility. In the context of fake news this is worrisome, since people assess credibility by the number of retweets rather than by the accuracy of the information provided. Therefore, we investigated to what extent the visuals (i.e., images and videos) in the most retweeted COVID-19 tweets are credible.

Fact-checking

During the pandemic, a lot of tweets about the coronavirus were posted in the Netherlands. Starting from a file of COVID-19-related tweets compiled by Pointer, we first drew a sample of the most retweeted tweets that contained visuals. Second, we analysed and evaluated each visual on its credibility. We aimed to find the original source of each visual. To do this, we used Google reverse image search, TinEye, and Wolfram Alpha. In the case of a video, we fact-checked it by means of Google; if needed, screenshots of the video were put through reverse image search engines.
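As a rough illustration of the sampling step (a minimal sketch, not our actual procedure), the snippet below assumes the Pointer file is a CSV and uses hypothetical column names such as retweet_count and media_url; the real export may be structured differently.

import pandas as pd

# Hypothetical file and column names; the actual Pointer export may differ.
tweets = pd.read_csv("pointer_covid19_tweets.csv")

# Keep only tweets that have a visual attached (an image or video URL present).
with_visuals = tweets[tweets["media_url"].notna()]

# Take the most retweeted tweets with visuals as the fact-checking sample
# (the sample size here is chosen purely for illustration).
sample = with_visuals.sort_values("retweet_count", ascending=False).head(50)
sample.to_csv("factcheck_sample.csv", index=False)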

Fact-checked tweets results

Figure 1 gives an overview of the results (i.e., true, mostly true, half true, mostly false, false, or no evidence) for the analysed tweets. More than half of the fact-checked tweets turned out to be true (60%), which is striking: given the amount of fake news spread nowadays (McGonagle, 2017), we did not expect this. The tweets that were mostly true, half true, mostly false, and false were almost evenly distributed, with an average of 9.1%. In addition, a small part of the tweets had no evidence (3.0%), meaning they could not be linked to confirming or contradicting evidence.

Figure 1. An overview of the fact-checked results
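As a sketch of how an overview like Figure 1 can be produced, assuming a hypothetical factcheck_sample.csv file in which a verdict column was added during fact-checking:

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file and column name; "verdict" holds the fact-check labels
# (true, mostly true, half true, mostly false, false, no evidence).
results = pd.read_csv("factcheck_sample.csv")

# Share of each verdict as a percentage of all fact-checked tweets.
distribution = results["verdict"].value_counts(normalize=True).mul(100).round(1)
print(distribution)

# A simple bar chart of the distribution, comparable to Figure 1.
distribution.plot(kind="bar", ylabel="Share of tweets (%)", title="Fact-check results")
plt.tight_layout()
plt.show()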

Tweets that were true

As stated earlier, 60% of the fact-checked tweets turned out to be true, which means that the photo or video was related to the context of the tweet and nothing significant was missing. An example of a reliable photo used in a tweet is shown in Figure 2. We labelled it as true, since the source of the picture (Rijksoverheid) is credible, the photo provides accurate information (e.g., the dates of the picture and the tweet corresponded), and it was tweeted by the Dutch Prime Minister, which makes perfect sense.


Figure 2. An attached photo of a tweet by the Dutch Prime Minister (Rutte, 2020)

Tweets that were mostly true

8% of the tweets we fact-checked were mostly true, which indicates that the photo or video in the tweet was accurate but needed some clarification or additional information. An example is a tweet from Wilders (2020), shown in Figure 3. The photo attached to the tweet mentioned a motion and its outcome, but it did not show the arguments that other parties had given, nor any further explanation.


Figure 3. An attached photo of a tweet by Wilders (Wilders, 2020)

Tweets that were half true

Some of the fact-checked tweets were half true, meaning the photo or video was only partly accurate: it contained correct information, but left out important details or context, so the overall impression was only partly right. The tweet by van Bommel (2020) shown in Figure 4 is an example from this category.


Figure 4. An attached photo of a tweet by van Bommel (van Bommel, 2020)

Tweets that were mostly false

11% of the fact-checked tweets were mostly false: the visuals contained elements of truth but ignored critical facts that would have given a different context. For example, in the video in a tweet from Wilders (2020), shown in Figure 5, he states that Minister De Jonge gives no information about how many intensive care beds will be available. However, in a debate Minister De Jonge did say that there would be 1,600 IC beds and that this would be enough to cover the demand for care. The claim in this video is therefore mostly false: Minister De Jonge did give this information, but Wilders’ video does not show that statement.


Figure 5. An attached video of a tweet by Wilders (Wilders, 2020)

Tweets that were false

9% of the fact-checked tweets were false, because the content of the visual was not accurate. For example, a tweet by Jan Roos (2020) was labelled false: the photo visualized how political parties voted on the motion of Baudet. As can be seen in Figure 6, the picture states that the political party 50PLUS voted in favour, whereas according to Tweedekamer.nl the party voted against the motion (Tweedekamer.nl, 2020). In addition, we regularly found photoshopped images, which automatically makes a tweet false.


Figure 6. An attached photo of a tweet by Lavie Jan Roos (Roos, 2020)

Tweets that had no evidence

3% of the fact-checked tweets were labelled no evidence, because they could not be linked to confirming or contradicting evidence. An example of such a non-fact-checkable video is shown in Figure 7. In the tweet, someone claims that she was arrested and that her head was beaten against a wall by the police. However, no evidence could be found that the person in the video is the person making these claims in the tweet.


Figure 7. A tweet and the attached photo by a user of Twitter (Mercedes, 2020)

Are the most retweeted tweets about COVID-19 that contain visuals reliable?

Fortunately, our results showed that the majority of the analysed tweets (60%) contained true information and therefore cannot be labelled as misinformation. However, the other 40% of the analysed tweets contained visuals that were mostly true, half true, mostly false, false, or had no evidence. This raises concern, since fake news can manipulate the public’s perception of reality and change attitudes (Damico, 2019). News and information about the COVID-19 virus in particular should be reliable and accurate; that this is not fully the case is alarming, mainly because visuals in a post receive more clicks, likes, and retweets and are perceived as more credible (Cao et al., 2020; Morris et al., 2012). As a result, such content can go viral and lead to a digital wildfire (Lewandowsky et al., 2017).

With this article, we hope to make the public aware that not all information about COVID-19 shared on Twitter is accurate. Always be critical and cautious with the information that comes your way. It is crucial to fact-check information before we decide what to believe or what is true or false, especially when it has such a large impact on our lives as the COVID-19 pandemic does.

The list of tweets used for this article and the corresponding results of the fact-checked tweets can be found here. The method of how we fact-checked the tweets can be found here.

References

Apuke, O. D., & Omar, B. (2020). Fake news and COVID-19: Modelling the predictors of fake news sharing among social media users. Telematics and Informatics, 101475. https://doi.org/10.1016/j.tele.2020.101475

van Bommel, M. (2020, March 21). Mark van Bommel’s tweet. Twitter. https://twitter.com/MarkvanBommel6/status/1241473407310532612

Cao, J., Qi, P., Yang, T., Guo, J., & Li, J. (2020). Exploring the role of visual content in fake news detection. Key Laboratory of Intelligent Information Processing & Center for Advanced Computing Research, 1–20. https://www.researchgate.net/publication/339873736_Exploring_the_Role_of_Visual_Content_in_Fake_News_Detection

Cook, J., & Lewandowsky, S. (2011). The Debunking Handbook. St. Lucia, Australia: University of Queensland. ISBN 978-0-646-56812-6. http://sks.to/debunk

Damico, A. M. (2019). Media, journalism, and “fake news”: A reference handbook (Contemporary World Issues). ABC-CLIO.

Europol. (2020). COVID-19: Fake news. Retrieved November 20, 2020, from https://www.europol.europa.eu/covid-19/covid-19-fake-news

Hou, Y. J., Okuda, K., Edwards, C. E., Martinez, D. R., Asakura, T., Dinnon, K. H., Kato, T., Lee, R. E., Yount, B. L., Mascenik, T. M., Chen, G., Olivier, K. N., Ghio, A., Tse, L. V., Leist, S. R., Gralinski, L. E., Schäfer, A., Dang, H., Gilmore, R., … Baric, R. S. (2020). SARS-CoV-2 reverse genetics reveals a variable infection gradient in the respiratory tract. Cell, 182(2), 429–446.e14. https://doi.org/10.1016/j.cell.2020.05.042

Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger, M. J., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S. A., Sunstein, C. R., Thorson, E. A., Watts, D. J., & Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096. https://doi.org/10.1126/science.aao2998

Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6, 353–369.

Li, Y., & Xie, Y. (2020). Is a picture worth a thousand words? An empirical study of image content and social media engagement. Journal of Marketing Research, 57(1), 1–19. https://doi.org/10.1177/0022243719881113

McCabe, D. P., & Castel, A. D. (2008). Seeing is believing: The effect of brain images on judgments of scientific reasoning. Cognition, 107(1), 343–352. https://doi.org/10.1016/j.cognition.2007.07.017

McGonagle, T. (2017). “Fake news”. Netherlands Quarterly of Human Rights, 35(4), 203–209. https://doi.org/10.1177/0924051917738685

Meeker, M. (2016, June 1). Mary Meeker’s 2016 internet trends report: All the slides, plus analysis. Vox. https://www.vox.com/2016/6/1/11826256/mary-meeker-2016-internet-trends-report

Mercedes, S. (2020, May 8). SHAWTY MERCEDES’ tweet. Twitter. https://twitter.com/talitzorr/status/1269780066097037312

Morris, M. R., Counts, S., Roseway, A., Hoff, A., & Schwarz, J. (2012). Tweeting is believing? Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work – CSCW ’12, 1–10. https://doi.org/10.1145/2145204.2145274

Roos, J. (2020, March 26). Jan Roos’ tweet. Twitter. https://twitter.com/LavieJanRoos/status/1243114704677023744

Rutte, M. (2020, March 23). Mark Rutte’s tweet. Twitter. https://twitter.com/MinPres/status/1242166964912619532

Toffler, A. (1984). Future shock (Reissue ed.). Bantam.

Waszak, P. M., Kasprzycka-Waszak, W., & Kubanek, A. (2018). The spread of medical fake news in social media – The pilot quantitative study. Health Policy and Technology, 7(2), 115–118. https://doi.org/10.1016/j.hlpt.2018.03.002

Wilders, G. (2020a, March 19). Geert Wilders’ tweet. Twitter. https://twitter.com/geertwilderspvv/status/1240667470471614465

Wilders, G. (2020b, March 26). Geert Wilders’ tweet. Twitter. https://twitter.com/geertwilderspvv/status/1243164815864061952

Zhang, X., Ding, X., & Ma, L. (2020). The influences of information overload and social overload on intention to switch in social media. Behaviour & Information Technology, 1–14. https://doi.org/10.1080/0144929x.2020.1800820

Misleading data visualization in Dutch politics?

With the parliamentary election coming up in March 2021 in the Netherlands, the election campaigns will soon be back. During the campaign, politicians advertise their political party: they explain what the party wants and how it wants to achieve it. Every political party wants to persuade as many people as possible to vote for it by influencing people during the campaign (TweedeKamer, n.d.). In 2018, Dr. Eva Groen-Reijman already warned about politicians who spin and frame, stating that voter manipulation is lurking: political parties frame, spin, and engage in targeting voters (Marijnissen, 2018). One way to mislead voters is through misleading data visualizations.

Misleading data visualizations

In recent years there have been developments, such as the internet, that have made life easier and that allow people to receive more information, faster. More information is available today than ever before. To get a clear overview of all this data and information, data visualizations and graphs are needed. Visualizing data is important not only to increase perceptibility, but also to reveal patterns within information and to be educative, persuasive, and guiding, depending on the content (Dur, 2014). According to Jones (2006) and Allen, Erhardt & Calhoun (2012), visualizations are essential tools for establishing relationships in data sets, and they can spread information to a wide audience. But behind these visualizations lies a great responsibility: charts and data visualizations can be misleading, which can distort viewers’ perception and lead to incorrect conclusions (Jones, 2006).

According to Yang, Vargas-Restrepo, Stanley & Marsh (2020), graphs can be tricky, because “graphs do not have to be factually wrong to mislead”. There are many techniques that can distort a graph; one of the simplest is y-axis truncation. “Y-axis truncation is the practice of beginning the vertical axis at a value other than the natural baseline” (Yang et al., 2020). The underlying principle is that “the presentation of numbers, as physically measured on the surface of the graphic itself, should be directly proportional to the numerical quantities represented” (Yang et al., 2020). Yet truncation is commonly used in both scientific publications and the mass media, and even politicians try to deceive you this way. Emile Roemer, who led the SP from 2010 to 2017, used a misleading data visualization with y-axis truncation, as you can see below in Figures 1 and 2.

Figure 1. The result of 6 years of VVD policy under Mark Rutte. The tenant pays the bill.
Source: tweet by Emile Roemer (https://twitter.com/emileroemer/status/827242437370318849)

Figure 2. Average rent per month in the regulated sector.

Lowering rents in the regulated sector was one of the most important points in the SP’s election programme, and this graph tweeted by Emile Roemer shows that rental prices increased significantly between 2010 and 2016. But the graph is misleading because of y-axis truncation: the y-axis starts at €400, while it should start at the natural baseline of €0. When the graph starts at €0, it shows a much less dramatic increase. The same data gives a different impression, which is exactly what Yang et al. (2020) mean when they state that “graphs do not have to be factually wrong to mislead”.
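To see the effect for yourself, the sketch below plots the same data twice with Python and matplotlib: once with a y-axis truncated at €400 and once from the natural €0 baseline. The rent figures are made up for illustration and are not the actual numbers behind Roemer’s graph.

import matplotlib.pyplot as plt

# Illustrative numbers only, not the actual rent statistics from the tweet.
years = [2010, 2011, 2012, 2013, 2014, 2015, 2016]
rent = [430, 440, 452, 468, 490, 505, 515]   # average monthly rent in euros

fig, (ax_truncated, ax_full) = plt.subplots(1, 2, figsize=(10, 4))

# Left: truncated y-axis starting at 400 euros -> the increase looks dramatic.
ax_truncated.bar(years, rent)
ax_truncated.set_ylim(400, 520)
ax_truncated.set_title("Truncated y-axis (starts at 400)")

# Right: y-axis starting at the natural baseline of zero -> same data, modest increase.
ax_full.bar(years, rent)
ax_full.set_ylim(0, 520)
ax_full.set_title("Full y-axis (starts at 0)")

for ax in (ax_truncated, ax_full):
    ax.set_xlabel("Year")
    ax.set_ylabel("Average rent per month (EUR)")

plt.tight_layout()
plt.show()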

Is this just an accident, or is it more common in politics?

Unfortunately it is more common, and political parties also try to mislead you with misleading data visualizations. Below you can see an example of how the VVD tried to mislead people regarding crime in the Netherlands. In 2015, the VVD posted these graphs to claim success in terms of crime reduction (Figure 3); reducing crime was part of the VVD’s election campaign. These graphs also use y-axis truncation: the y-axis does not start at zero, thereby suggesting a more severe decline than there actually is. Figure 4 shows that crime decreased less than the VVD suggested.

Figure 3. Source: Sargasso.nl, 2015

Figure 4. Source: Sargasso.nl, 2015

Why is this misleading?

The two examples above relate to one of the three strategies for misleading data visualization that Cairo (2015) identifies:

  • Hiding relevant data to highlight what benefits the sender
  • Displaying too much data to obscure reality
  • Using graphic forms in inappropriate ways

Using graphic forms in inappropriate ways is one of the most common strategies. “Many lies in this category are grossly conspicuous, but may go unnoticed if shown quickly on a screen, or if the viewer is distracted by visual bells and whistles” (Cairo, 2015). The graphs above with y-axis truncation are a good example of using a graphic form in an inappropriate way.

What do I think and what do you think?

In my opinion, election campaigns should always be examined critically. After writing this blog post, I will try not to simply believe everything politicians say and will look at the facts and how they are visualized.

Do you think politicians and political parties should avoid misleading data visualization in their election campaigns, or do you think it is their own choice as long as they don’t lie, and that people should simply look more critically?

References

Allen, E. A., Erhardt, E. B., & Calhoun, V. D. (2012). Data visualization in the neurosciences: Overcoming the curse of dimensionality. Neuron, 74(4), 603–608.

Cairo, A. (2015). Graphics lies, misleading visuals: Reflections on the challenges and pitfalls of evidence-driven visual communication. In D. Bihanic (Ed.), New challenges for data design (pp. 103–116). Springer-Verlag, London.

Dur, B. I. U. (2014). Data visualization and infographics in visual communication design education at the age of information. Journal of Arts and Humanities, 3(5), 39–50.

Jones, G. E. (2006). How to lie with charts (2nd ed.). Santa Monica, CA: LaPuerta.

Marijnissen, H. (2018, February 1). “Pas op voor misleiding tijdens campagnetijd”. Trouw. https://www.trouw.nl/nieuws/pas-op-voor-misleiding-tijdens-campagnetijd~bef98c77/

Roemer, E. (2017, February 2). Het resultaat van 6 jaar VVD beleid onder Mark Rutte. De huurder betaalt de rekening. [Tweet]. Twitter. https://twitter.com/emileroemer/status/827242437370318849

Sargasso. (2015, March 9). VVD voor verkiezingen wel erg makkelijk met grafieken veiligheid. https://sargasso.nl/vvd-verkiezingen-erg-makkelijk-grafieken-veiligheid/

TweedeKamer. (n.d.). Campagne voeren. https://www.tweedekamer.nl/zo_werkt_de_kamer/verkiezingen_en_kabinetsformatie/campagne_voeren

Yang, B., Vargas-Restrepo, C., Stanley, M., & Marsh, E. (2020). Truncating bar graphs persistently misleads viewers. Journal of Applied Research in Memory and Cognition, in press.

The second corona wave is 10 times worse than the first wave

The title of this blog is misleading. When you read it, you probably do think that the second wave in the Netherlands is 10 times worse than the first wave; at least, that is what the number of people testing positive suggests. Misleading data is data that is distorted in such a way that it looks better or worse than it actually is, which can lead to incorrect conclusions, such as the one suggested by the title of this blog.

Covid-19

In December 2019, an outbreak of the Covid-19 virus occurred in Wuhan, and since then Covid-19 has spread around the world. Covid-19 is a disease caused by a new coronavirus (SARS-CoV-2). The first Covid-19 patient in the Netherlands dates from February 27, 2020. After the first infection, the Netherlands quickly went into lockdown; this was called the ‘first wave’. On November 4, 2020, the Dutch government announced a second lockdown, and this is called the ‘second wave’ (RIVM, 2020). Since the first outbreak of Covid-19, all data related to Covid-19 has been collected and stored. But how can the data of the second wave mislead people?

Does the data suggest that the second Covid-19 wave is worse than the first, or not?

Since the first outbreak of Covid-19 in February 2020, all Covid-19-related data has been stored: the number of people tested per day, the number of people who tested positive, the number of hospital admissions, and the number of deaths from Covid-19. The Dutch government publishes the number of people who tested positive, as can be seen in Figure 1 (RIVM, 2020). These data suggest that the second wave is worse than the first wave in the Netherlands, while in fact they obscure the fact that the first wave was more severe than the second.

Figure 1. GGD reports of positively tested persons per day, from 27 February 2020 onward (RIVM, 2020)

Is the data misleading?

But is the data actually misleading? The number of people who tested positive was high in October, with approximately 10,000 infections every day. But how does this number mislead, and is all the data correct?

The number of people testing positive is related to the test capacity, and the test capacity in the second wave was much higher than in the first wave. In week 43 of 2020, 321,379 tests were carried out (RIVM, 2020), compared with about 27,000 in one week during the first wave in March (RTL Nieuws, 2020). Because test capacity has increased over time, the data on people testing positive can be misleading: reports of the number of positive tests often do not mention the total number of tests performed, and the growing test capacity makes it impossible to compare raw counts with earlier periods. In addition, the data is not 100% reliable. Research by RTL Nieuws (2020) has shown that the more testing is done for corona, the more often the result is incorrect. This means that people receive a positive test result while they do not in fact have the coronavirus. The research concluded that with an infection rate of 3%, more than a quarter of positive results are incorrect (RTL Nieuws, 2020). Furthermore, there have been recalculations due to malfunctions (NOS, 2020). All these factors can mislead and distort the data.

Figure 2. RIVM reports 8,123 infections, but the number is not complete due to a malfunction.
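To see where a figure like “more than a quarter” can come from, here is a small worked calculation of the share of false positives among positive test results. Only the 3% infection rate is taken from the RTL Nieuws article; the sensitivity and specificity below are illustrative assumptions, not values reported there.

# Assumptions: only the 3% infection rate comes from the article;
# sensitivity and specificity are illustrative values.
prevalence = 0.03     # share of tested people who actually have the virus
sensitivity = 0.95    # chance an infected person tests positive (assumed)
specificity = 0.99    # chance a non-infected person tests negative (assumed)

true_positives = prevalence * sensitivity                # 0.0285 of all tests
false_positives = (1 - prevalence) * (1 - specificity)   # 0.0097 of all tests

share_false = false_positives / (true_positives + false_positives)
print(f"Share of positive results that are false: {share_false:.1%}")   # about 25%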

According to Chambers (2017), people have a tendency to embrace information that supports their beliefs and to reject information that contradicts them. A classic example of this confirmation bias comes from Peter Wason (1968), who conducted an experiment in which four cards were laid out in a row, two showing letters and two showing numbers. The rule to be tested was: if a card has a vowel on one side, then it has an even number on the other side. Participants had to choose two cards to turn over to test this rule. The conclusion of the experiment was that people look for confirmation instead of falsification (Chambers, 2017).

This so-called confirmation bias relates to misleading data, because people search online for information that confirms their beliefs. According to Fletcher and Park (2017), the volume of information online is increasing. This is reinforced by the research of Yadamsuren and Erdelez (2011), who state that instead of reading a newspaper, people increasingly turn to the internet (Fletcher & Park, 2017; Yadamsuren & Erdelez, 2011).

Information presented in news articles can be misleading without being blatantly false (Ecker, Lewandowsky, Chang & Pillai, 2014). The NOS is one of the biggest news channels in the Netherlands and uses a headline for every news article. These headlines not only try to grab the reader’s attention, but also seek to summarize the content of the article. Only a short part of the article can be highlighted, which can emphasize one aspect over others and thereby alter the way we read the story and the information we take from it (Geer & Kahn, 1993).

The headlines used by the NOS mention the number of people who tested positive. Figures 3 and 4 show two examples. The data in these headlines can mislead, because the headline shapes how the reader takes in the information. This relates to misleading data: people look for confirmation about the number of people testing positive, and because this number is mentioned in the headline, the reader can be misled, since it affects the information the reader takes away (NOS, 2017).

Figure 3. RIVM reports 10,353 new infections.

Figure 4. More than 10,000 new corona infections in one day.

Statement

Do you think the measures taken regarding the second corona wave have been sufficient? Or would you have done things differently to get the coronavirus under control?

References

Chambers, C. (2017). The 7 deadly sins of psychology. A manifesto for reforming the culture of scientific practice (Chapter 1: The sin of bias). Princeton, NJ: Princeton University Press.

Ecker, U. K., Lewandowsky, S., Chang, E. P., & Pillai, R. (2014). The effects of subtle misinformation in news headlines. Journal of Experimental Psychology: Applied, 20(4), 323.

Fletcher, R., & Park, S. (2017). The impact of trust in the news media on online news consumption and participation. Digital Journalism, 5(10), 1281–1299.

Geer, J. G., & Kahn, K. F. (1993). Grabbing attention: An experimental investigation of headlines during campaigns. Political Communication, 10(2), 175–191.

RIVM. (2020a, February 27). Patiënt met nieuw coronavirus in Nederland. https://www.rivm.nl/nieuws/patient-met-nieuw-coronavirus-in-nederland

RIVM. (2020b, November 3). Ontwikkeling COVID-19 in grafieken. https://www.rivm.nl/coronavirus-covid-19/grafieken

RIVM. (2020c, November 6). De ziekte COVID-19. https://www.rivm.nl/coronavirus-covid-19/ziekte

RTL Nieuws. (2020a, March 31). Aantal coronatests komende weken verviervoudigd. https://www.rtlnieuws.nl/nieuws/politiek/artikel/5076451/aantal-coronatests-wordt-verviervoudigd-rivm-corona-kabinet

RTL Nieuws. (2020b, September 11). Hoe meer er getest wordt op corona, hoe vaker de uitslag niet klopt. Retrieved from https://www.rtlnieuws.nl/nieuws/nederland/artikel/5183095/testen-betrouwbaar-vals-positief-uitslag-klopt-niet-corona-marge

Yadamsuren, B., & Erdelez, S. (2011). Online news reading behavior: From habitual reading to stumbling upon news. Proceedings of the American Society for Information Science and Technology, 48(1), 1–10.