Many of our behaviors are on a
partial reinforcement schedule. Partial reinforcement
refers to a situation in which a behavior is
reinforced only some of the time. These partial schedules
of reinforcement are important because behaviors maintained on them are generally more resistant to extinction than behaviors maintained on continuous reinforcement. As we discussed,
behaviors are shaped through a process of
continuous reinforcement of successive approximations
of the target behavior. However, continuous
reinforcement eventually becomes less reinforcing. So there's a need for
these partial schedules of reinforcement, which
vary in their ability to maintain learned behaviors. These schedules were actually discovered by B.F. Skinner through his observations of reward schedules with animals. However, they apply
to humans too. So there are four schedules
of partial reinforcement and each one has
a different effect on controlling and
maintaining behaviors. As you're watching
this video, you'll probably think of
situations in your life where your behavior
was reinforced on each of these schedules. And by the end of
the video, you'll be able to label those
situations with the terminology used in operant conditioning. So here you can see
the four schedules of partial reinforcement. If these terms are new
to you, don't worry. They'll start to
make a lot more sense when you break them apart. So for our purposes, I want
you to associate the word "ratio" with amount
of responses. The word ratio looks
similar to ration. And a ration of
food, for example, is a certain amount of food. So ratio means amount. And for our purposes,
an amount of responses. Now when you see
the word "interval," I want you to associate the
word interval with time. Think of that phrase,
an interval of time, like maybe a long
interval of time passed, or we were only given
a short interval of time to answer the question. So interval means time. So here's ratio, which means
the amount of responses. And here's interval,
which means time. Now, each of these categories can be either fixed, as in consistent, or variable, as in it varies. And if you combine
these words together, you come up with
the four schedules of partial reinforcement,
fixed ratio, fixed interval, variable ratio, and
variable interval. So let's talk about a
fixed ratio schedule. So pretend this
car salesman gets a bonus for every
five cars he sells. That bonus is a reinforcer
placed on a fixed ratio schedule. In a fixed ratio schedule,
reinforcement only occurs after a fixed
number of responses. So this car salesman has
to sell five cars in order to get a bonus. And if he sells five cars in
a week, he'll get a bonus. If he sells five cars in a
month, he'll get a bonus. And if he sells five cars
in a day, he'll get a bonus. You get the idea. What I'm illustrating here
is that the reinforcement, in this case the bonus, is
contingent on the number of cars he sells, regardless of
how long it takes him to do it. So since the only barrier
between the car salesman and his bonus is the
number of cars he sells, you might imagine he'll
work at a furious pace to earn as many bonuses as possible. Jobs that demand a fast pace like this often pay their employees on a fixed ratio schedule. Think of factory
workers and fruit pickers, for instance. That's the benefit of
a fixed ratio schedule. It tends to produce a high rate of behavior because how often the reinforcer is earned depends mostly on the person's own rate of responding. So that's a fixed ratio schedule.
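If it helps to see that rule written out, here's a minimal sketch in Python. The function name and the five-car threshold are just stand-ins for our salesman example, not anything official:

    # Fixed ratio: reinforce after every N responses, no matter how much time passes.
    def fixed_ratio_bonus(cars_sold, ratio=5):
        # True whenever the latest sale completes a block of `ratio` sales.
        return cars_sold > 0 and cars_sold % ratio == 0

    # The salesman earns a bonus on sales 5, 10, and 15, whether they take a day or a month.
    for sale in range(1, 16):
        if fixed_ratio_bonus(sale):
            print("Sale", sale, "earns a bonus")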
Now, let's talk about a fixed interval schedule. So pretend this car salesman
receives a paycheck every two weeks as long as he sells at least one car. The paycheck is on a
fixed interval schedule because the reinforcement
occurs after a consistent amount of time has passed, in
this case two weeks. So in this case,
his paycheck doesn't change whether he sells one car or 100 cars during that time interval. So his paycheck is dependent on
the amount of time that passes. So as you might
imagine, he probably doesn't have much
of an incentive to sell more than
one car if he'll make the same amount
of money anyway. And that's the classic rate of responding for fixed interval schedules. It's much slower than, say, the rate on the fixed ratio schedule we discussed earlier.
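Again, purely as a sketch, here's roughly what the fixed interval rule looks like in Python. The two-week interval and the one-car requirement come from the example; the names are made up:

    import datetime

    # Fixed interval: reinforce after a set amount of time has passed,
    # no matter how many responses occurred during that time.
    def fixed_interval_paycheck(last_payday, today, cars_sold, interval_days=14):
        # Pay out once two weeks have elapsed and at least one car was sold.
        return (today - last_payday).days >= interval_days and cars_sold >= 1

    payday = datetime.date(2024, 1, 1)
    # Selling 1 car or 100 cars makes no difference; only the two weeks matter.
    print(fixed_interval_paycheck(payday, datetime.date(2024, 1, 15), cars_sold=1))    # True
    print(fixed_interval_paycheck(payday, datetime.date(2024, 1, 10), cars_sold=100))  # False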
Now, here we have the variable ratio schedule. A variable ratio schedule
means that the reinforcer is delivered after an average
number of correct responses has occurred. So a variable ratio schedule
is similar to a fixed ratio schedule except the
number of responses needed to receive the reinforcement
changes after each reinforcer is presented. So put simply, a variable ratio schedule is essentially a series of fixed ratio schedules that keep changing. What matters in the end
is the average number of correct responses. So using the example we used for the fixed ratio schedule, there's a car salesman receiving
a bonus for every five cars he sells. If the number needed to receive
a bonus was always fixed at five, then that would be a
fixed ratio schedule. But a variable ratio
schedule would vary. So maybe he must
sell like five cars to get the first bonus,
and then three cars to get the second bonus,
and then seven cars to get the third bonus, and then
six cars to get the fourth one, and then four cars to
get the fifth bonus. If you add up all the cars sold (5 + 3 + 7 + 6 + 4 = 25) and then divide by the five bonuses he received, you'd find that the average number of cars sold to receive a bonus is five, the same as the fixed ratio schedule above: five cars per bonus. The difference here is that
the variable ratio schedule has a lot of uncertainty. The car salesman cannot predict
when he'll receive a bonus in this case. But with every
car sold, he comes closer to getting that bonus. So the classic example used when it comes to a variable ratio schedule is a slot machine. If you've ever played
a slot machine, you understand the power of
a variable ratio schedule. A slot machine is
programmed to pay out after an average
number of pulls. But since you never really know
when the payout will occur, you keep playing and hoping
that you'll win something. And that's one
reason it's so hard to walk away from
a slot machine. You always wonder
like what would happen if the next
pull is the jackpot. You don't want to
miss that, obviously. So a slot machine is a
variable ratio schedule because the reinforcement is
dependent on your behavior. That is, you have to bet money
and pull a lever in order to have a chance to
receive anything. And it doesn't matter how long you wait between pulls of the lever. It simply matters whether you perform the behavior of pulling it. So that is a variable ratio schedule.
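Here's a minimal Python sketch of that idea. Drawing the requirement uniformly from three to seven is just one assumption that makes the average come out to five cars per bonus; the function names are hypothetical:

    import random

    # Variable ratio: the number of responses required changes after each
    # reinforcer; only the average requirement (five here) stays the same.
    def next_requirement():
        # Hypothetical choice: 3 to 7 cars, uniformly, so the average works out to 5.
        return random.randint(3, 7)

    sales_since_bonus, requirement, bonuses = 0, next_requirement(), 0
    for sale in range(1, 31):
        sales_since_bonus += 1
        if sales_since_bonus >= requirement:
            bonuses += 1
            print("Sale", sale, "pays bonus number", bonuses)
            sales_since_bonus, requirement = 0, next_requirement()  # new, unpredictable target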
A variable interval
schedule means that the responses
are reinforced after a variable amount
of time has passed. So to use the car
salesman again, imagine a supervisor
randomly showing up without notice to
give him a bonus. As long as the supervisor
sees him actively talking to a customer,
he'll give him that bonus. Now since the car salesman never knows when the supervisor will drop by, he has to consistently engage with customers in order to increase the chance that the supervisor will notice. The difference here
is that he could have sold one car or even 100 cars that month. But all that matters in
order to receive the bonus is whether or not he's
actively engaging in a sale when his supervisor
happens to come by. So a variable interval
schedule like this results in a more regular rate
of responding than does a fixed interval schedule.
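And one last sketch in Python. The range of days between visits and the chance that he's with a customer are both invented purely for illustration:

    import random

    # Variable interval: reinforcement becomes available after an unpredictable
    # amount of time; what gets reinforced is the behavior happening at that moment.
    def days_until_next_visit():
        # Hypothetical choice: 3 to 11 days, uniformly, averaging about a week.
        return random.randint(3, 11)

    days_left = days_until_next_visit()
    for day in range(1, 31):
        days_left -= 1
        if days_left == 0:
            # Invented 80% chance he's talking to a customer when the supervisor arrives.
            engaging = random.random() < 0.8
            print("Day", day, "supervisor visits:", "bonus" if engaging else "no bonus")
            days_left = days_until_next_visit()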
So those are the four types of partial reinforcement schedules. And as you might
imagine, each one tends to produce a different pattern of responding.