Lesson 2: Is technology as neutral as we think it is?
In this Wireless Philosophy video, Ryan Jenkins (professor of Philosophy at Cal Poly) focuses on the illustrative example of online search engines to consider whether the technologies we use in our everyday lives really are the “value-neutral” tools we often take them to be. Should ethical concerns and other values be given a role in determining which results a search engine “chooses” to show us? If so, which values, how much of a role, and, perhaps most importantly, who decides? Created by Khan Academy.
Want to join the conversation?
- How could we possibly think that the information we get back from, say, a Google search would not be influenced, biased, or directed toward a result that was, at the very least, shaped by a programmer (using that as a generic term)?
(Perhaps this is the author's point, expressed differently.)
But let's start from a point forward in time. Can you imagine the lawsuits that would result if a Google search actually did surface sites where the best method of suicide was given? And how could we even begin, in the most formal way, to determine whether the method was actually the best method? The person who experienced it is no longer with us to give feedback or analysis.
I agree; collectively, we think the information is objective. I, personally, know that it is not. Having used search engines for many years, I know that I may have to drill down in the results to get a relevant, satisfactory answer.
I do appreciate the author putting the question before us.
Video transcript
Hi, I’m Ryan Jenkins, a philosophy professor at Cal Poly in San Luis Obispo. Lots of people think that technology is “neutral,” that it’s “just a tool” that takes us from A to B, or helps us solve a problem more efficiently, without raising any difficult questions about our values. For example, you might think a car is just a better horse-drawn carriage, a light bulb is just a better candle, or a thermostat is just a better fireplace. This is especially true when you think about the way computers help us make decisions. Lots of people think that data is “objective,” so if you’re asking a computer to analyze data for you, then there’s one right answer that it should give. I think this view is comforting because it eliminates the need for human judgment. Our own choices don’t seem to enter into the picture if technology is just making our lives easier, faster, or more efficient in the best way possible. But I also think this view is mistaken.

Let’s look at one of the simplest uses of computer algorithms sifting through data for us, one that you and billions of other people use every day: search engines. Google is the world’s most popular search engine. When you go to Google and search for something, you probably think you’re getting the “best” result: the website that’s the best fit for your search. But there is actually no such thing as an “objective” Google search result. Google tailors its search results to what it thinks its users want to see. The search results served to two people will depend on their location, browsing history, and other “signals” that Google uses. This makes sense: if I’m searching for pizza, I want to see pizza restaurants near me, not in another city or in another country!

But let’s take another example that’s more serious. In some cases, Google offers different information to users, or hides information entirely, even when what they search for has an objective answer. Imagine a user who searches for something like, “What are the best ways to commit suicide?” First, ask yourself: What would an “objective” answer to this question be? Well, we have data about the answer to this question. Maybe the computer algorithms performing the search should just show the user the most relevant information to answer their question. But then, take a minute and ask yourself: What should Google tell the user, really? What Google actually offers up is the number for a nationwide 24-hour suicide hotline, and a message telling you that you’re not alone and that confidential help is available for free. Now, that’s not what the user searched for, and it’s not actually helping the user find what they want. It looks like, actually, human values are influencing the way that the technology works, and this seems like a good thing, right?

The same is true at YouTube, which is owned by Google: if a user searches for information about terrorist groups like ISIS, YouTube will show them anti-ISIS and anti-terrorism videos instead. If users search for information about the COVID virus or the COVID vaccine, the site points them to reputable sources, rather than misinformation, which could lead people to make bad decisions about their own health. If something as straightforward as search were really a simple matter of efficiently crunching objective data, then these results would be surprising. Instead, it seems like Google is willing to alter the function of its product to nudge users in certain directions: away from suicide, away from terrorism, towards vaccines for COVID.

But if it’s okay for Google to alter its search results for certain purposes, what values should guide them? Should Google just show users whatever results they want to see? Whatever advertisers want them to see? Or should Google limit the results to only what’s in the user’s best interest? Or what’s good for society overall? And who at Google should be trusted to decide what’s in the best interest of each user, let alone society at large, especially when billions of people use their search every day and rely on it to make decisions that affect their well-being and the well-being of others?

So, while we think that technology is neutral, or merely a tool, what we’ve seen here is that even something as simple as a search engine reflects our individual choices and values. And moreover, a lot of these choices made by Google seem like the right choice: they probably should try to steer people away from committing suicide and towards resources that could help them!

Now, this is not simply a story about search. Keep in mind that computers are now helping us make decisions about who gets hired for a job, who is allowed to fly on a plane, how long a criminal might go to prison for a crime, who gets a loan from a bank, and much more. We should be careful not to be overly naive about the computer programs involved in these decisions, either. What seems like computers crunching objective data turns out to offer lots of opportunities for designers to input their own values and decisions into the way technology works.

This can be an intimidating thought, much less comforting than thinking that developing technologies is a rather bland and one-dimensional job of just making things more efficient. Instead, this realization pushes us to ask: What’s the role of our values in shaping technology? When should efficiency, or objectivity, be balanced against other things we care about, like human health or society’s well-being? What do you think?