The Role of Algorithms: A paper.li Experiment

Noelle Hobbs
4 min read · Feb 12, 2019

In class, when we were introduced to paper.li, I originally believed that the user would choose a topic and paper.li would use an algorithm to pull different pieces of information off the internet and compose a single article. This idea seemed interesting to me, and it didn’t seem impossible. Then I found out that my understanding was incorrect, and that this was not how paper.li worked. However, my original thought of non-human “writers” wasn’t completely far-fetched. It reminded me of my research for Assignment 2 on robots as designers; paper.li shows that they can also become publishers.

Using paper.li felt similar to using Google to find information. When typing in topics to search, suggestions would pop up that made you feel obligated to choose them (Figure 1). We often choose these suggestions because we assume that if Google suggested a phrase, it must be the one with the most information on the topic. This limits us from asking the questions, or writing the phrases, that describe the specific information we actually want to discover on search engines.

Figure 1: When typing the word “data,” these are the suggestions that appeared

However, this does make it easier for us to find information, especially when we didn’t know how to word the question in the first place. There is convenience, too, in predictive algorithms suggesting the next words we plan to type. It would take time to individually sort through and find articles related to your interests, so this type of technology makes the process faster.
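Neither Google nor paper.li publishes how its suggestions are ranked, but the general idea can be sketched with a toy prefix-based suggester. The candidate phrases and popularity scores below are invented purely for illustration:

```python
# Toy illustration of autocomplete-style suggestions: filter stored
# phrases by prefix, then rank them by a (made-up) popularity score.
def suggest(prefix, phrases, limit=4):
    matches = [(p, score) for p, score in phrases.items()
               if p.startswith(prefix.lower())]
    # Most "popular" phrases first, mimicking how real suggesters
    # surface the queries they expect you to pick.
    matches.sort(key=lambda item: item[1], reverse=True)
    return [p for p, _ in matches[:limit]]

# Hypothetical phrases with hypothetical popularity scores.
phrases = {
    "data visualization": 90,
    "data science": 120,
    "data entry": 40,
    "database": 150,
}

print(suggest("data", phrases))
```

Even in this sketch, the ranking is whatever the score table says it is, which is exactly the concern above: the user only ever sees the top few options the system decided to show.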

From an advertising perspective, paper.li is a helpful way to curate content to share on a company blog, newsletter, and social media channels. Paper.li delivers new content on your chosen topics each week, and you can share it easily on social media (with their Pro setting).

The use of paper.li makes me question the content it is pulling. Is it pulling from top-ranked Google websites? From websites that are paper.li’s affiliates? Its algorithm is pulling these pieces of content, but are they credible? If they are considered “credible,” someone at paper.li had to craft an algorithm that determines quality, which can be different from what an individual considers trustworthy.

The problem may be all of these questions, but it is also the fact that we do not have access to the answers. Algorithms are seldom made available to individuals, and when they are, many people do not understand how they work (Figure 2). This becomes even more worrying given that many people “might be overestimating how much the content-providers understand how their own systems work” (LaFrance, 2015). If the “owners” of these algorithms don’t know how they work, what is stopping algorithms from controlling us, instead of the other way around?

Figure 2: This is a visual showing how Google’s PageRank Algorithm works, which can be seen as confusing.
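The core of PageRank is less mysterious than the diagrams suggest: each page repeatedly passes its score along its outgoing links, damped so some score is spread evenly to everyone. Here is a minimal sketch of that iteration on a four-page link graph I invented for illustration (this is the textbook idea, not Google’s actual production system):

```python
# Minimal PageRank sketch: repeatedly redistribute each page's score
# along its outgoing links, with a damping factor that spreads a
# small share of score evenly across all pages.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal scores
    for _ in range(iterations):
        # Every page keeps the evenly-spread "teleport" share...
        new_rank = {p: (1 - damping) / n for p in pages}
        # ...and receives damped shares from pages linking to it.
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

# Hypothetical link graph: "A links to B and C," and so on.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

ranks = pagerank(links)
for page, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```

In this graph, C ends up ranked highest because three pages link to it, while D, which nothing links to, ends up last. The mechanics fit in twenty lines; the opacity comes from the hundreds of undisclosed signals layered on top in the real system.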

For my personal paper.li online paper, I noticed that not all of the articles were necessarily relevant to the topic I selected (Figure 3). I typed that I wanted my paper to be about data visualization, yet most of the content had to do with digital marketing. Although data visualization can be an aspect of digital marketing, it is so much more than that, and it intersects with more fields than just marketing. This made me feel that I wasn’t getting the best collection of sources for the topic.

Figure 3: Several of my suggestions for “data visualization” included digital marketing articles.

The use of paper.li took the human touch out of curating content. I find this interesting because if you were to ask me and someone else to curate content on a topic, you might get a few similar pieces, but there would be a variety of sources. That is because we would use different keywords to search and have preferences for websites we trust or may have heard of. Using one single algorithm takes away the diversity that comes with researching information.

I think that using algorithms can definitely help streamline a process, but this can often lead to less relevant and dehumanized results. That statement can apply to all technology as well. How far will we go to get faster and easier answers, and will access become purely based on who a platform’s owners are working with? For instance, if Google doesn’t like Apple, will that limit access to information about Apple, or to resources for Apple users? These are questions we need to think about as we move into a more technologically advanced, algorithm-heavy society.
