- Intro to theoretical probability
- Experimental versus theoretical probability simulation
- Theoretical and experimental probability: Coin flips and die rolls
- Random number list to run experiment
- Random numbers for experimental probability
- Interpret results of simulations
Experimental versus theoretical probability simulation.
Want to join the conversation?
- Do you know what the link for this website is? (5 votes)
- Logic and truth tables.
Can you please give a brief explanation connecting this topic to a real-world problem or to the workplace? (2 votes)
- Probability is our guess, or better said, our estimate of how things will work out. A very simple, realistic example: say there is some chance, say 60%, that there will be a traffic jam on a Sunday. If you leave for a fun ride on Sunday, it is not certain that you will encounter a jam, but you can expect one, since the estimate suggests there will be a jam. And remember, estimates are just expectations; they are not necessarily right. I hope this makes sense and answers your query. (4 votes)
- Question: Does it matter which side the coin shows before the toss is made? (3 votes)
- The side it is tossed from does NOT matter, because the coin gets flipped so many times in the air that it could end up as either heads or tails. Speaking in terms of the Law of Large Numbers, an EXTREMELY LARGE number of flips would probably come out close to the theoretical average. (1 vote)
- How does this program generate the toss randomly? (2 votes)
- Probably not the answer you are looking for. But it looks like java script: http://digfir-published.macmillanusa.com/stats_applet/stats_applet_10_prob.html(2 votes)
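In general, simulations like this use a pseudorandom number generator. As an illustration only (this is a hypothetical Python sketch, not the applet's actual JavaScript code), a single simulated toss can look like this:

```python
import random

def toss(p_heads=0.5):
    """Simulate one coin toss; returns "H" with probability p_heads."""
    # random.random() yields a pseudorandom float in [0.0, 1.0),
    # so it falls below p_heads exactly that fraction of the time.
    return "H" if random.random() < p_heads else "T"

print(toss())     # "H" or "T", each about half the time
print(toss(0.6))  # an unfair coin: heads about 60% of the time
```

Setting `p_heads` to something other than 0.5 is how an "unfair" coin would be simulated.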
- Is this exactly what the Law of Large Numbers is? (1 vote)
- I copied this from Wikipedia.
In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value and will tend to become closer to the expected value as more trials are performed.
Their page is actually quite helpful. (https://en.wikipedia.org/wiki/Law_of_large_numbers)(3 votes)
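The quoted statement can be checked numerically. Here is a small, hypothetical Python sketch (names are mine, not from the applet): the average of n fair six-sided die rolls has expected value 3.5, and it typically moves closer to 3.5 as n grows.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def average_of_rolls(n):
    """Average of n fair six-sided die rolls; the expected value is 3.5."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

# Averages typically drift toward 3.5 as the number of trials grows.
for n in (10, 1_000, 100_000):
    print(n, average_of_rolls(n))
```

With only 10 rolls the average can easily land far from 3.5; with 100,000 rolls it is almost always very close.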
- When solving problems on interpreting the results of simulations, I have no clue how to solve them. Mainly, reading the charts throws me off. (1 vote)
- Those are dot plots. Basically, each dot represents one simulation and the number below represents the result of that simulation.
– – –
As an example, I ran 10 simulations of how many times I needed to flip a coin until it landed Heads.
•
•
• •
• •
• •   • •
1 2 3 4 5
Above the number "1" there are five dots. This means that in five of the simulations it only took one flip for the coin to land Heads.
– – –
Let's say we want to use these simulations to approximate the probability that it takes at least 3 flips for a coin to land Heads.
Probability is the number of favorable outcomes divided by the total number of outcomes.
In this case that would be the number of simulations with 3 or more flips divided by the total number of simulations.
Well, there weren't any simulations with 3 flips,
there was one simulation with 4 flips
and one simulation with 5 flips.
So, there were 0 + 1 + 1 = 2 simulations that needed at least 3 flips.
Also, there were 10 simulations in total.
Thus, the probability that we need at least 3 flips of a coin until it lands Heads is approximately 2∕10 = 0.2. (3 votes)
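The estimate in the answer above can be reproduced in code. This is a hypothetical Python sketch (not any particular applet's code): it runs many "flip until Heads" simulations and computes the fraction that needed at least 3 flips. The theoretical value is the chance the first two flips are both Tails, 0.5 × 0.5 = 0.25.

```python
import random

random.seed(1)  # reproducible run

def flips_until_heads():
    """Count coin flips until the first Heads appears."""
    flips = 1
    while random.random() >= 0.5:  # landed Tails, so flip again
        flips += 1
    return flips

results = [flips_until_heads() for _ in range(10_000)]

# favorable outcomes / total outcomes, as in the answer above
estimate = sum(1 for r in results if r >= 3) / len(results)
print(estimate)  # close to the theoretical 0.25
```

With 10 simulations, as in the dot-plot example, the estimate can be as rough as 0.2; with 10,000 it lands much closer to 0.25.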
- I do not understand how the chart is set up. And can this chart be compared to the tree method used with theoretical probability?
Thank you. (1 vote)
- What's that website called? (1 vote)
- Where can I practice this on my own? (1 vote)
- [Instructor] What we're going to do in this video is explore how experimental probability should get closer and closer to theoretical probability as we conduct more and more experiments, or as we conduct more and more trials. This is often referred to as the Law of Large Numbers. If we only have a few experiments, it's very possible that our experimental probability could be different from our theoretical probability, or even very different. But as we have many, many more experiments, thousands, millions, billions of experiments, the probability that the experimental and the theoretical probabilities are very different goes down dramatically. But let's get an intuitive sense for it. This right over here is a simulation created by Macmillan USA. I'll provide the link as an annotation. And what it does is it allows us to simulate many coin flips and figure out the proportion that are heads. So right over here, we can decide if we want our coin to be fair or not. Right now, it says that we have a 50% probability of getting heads. We can make it unfair by changing this, but I'll stick with the 50% probability. If we wanna show that on this graph here, we can plot it. And what this says is how many tosses we wanna take at a time. So let's just start with 10 tosses. So what this is going to do is take 10 simulated flips of coins, with each one having a 50% chance of being heads. And then as we flip, we're gonna see our total proportion that are heads. So let's just talk through this together. So, starting to toss. And so what's going on here after 10 flips? As you see, the first flip actually came out heads, and if you wanted to say what's your experimental probability after that one flip, you'd say well, with only one experiment, I got one heads, so it looks like 100% were heads. But the second flip, it looks like it was a tails, because now the proportion that was heads after two flips was 50%.
But then the third flip, it looks like it was tails again, because now only one out of three, or 33%, of the flips have resulted in heads. Now by the fourth flip, we got a heads again, getting us back to 50%. Now the fifth flip, it looks like we got another heads, and so now we have three out of five, or 60%, being heads. And so the general takeaway here is, when you have one, two, three, four, five, or six experiments, it's completely plausible that your experimental proportion, your experimental probability, diverges from the real probability. And this even continues all the way until we get to our ninth or tenth tosses. But what happens if we do way more tosses? So now let's just do another 200 tosses and see what happens. So I'm just gonna keep tossing here, and you can see, wow, look at this, there is a big run of getting a lot of heads right over here, and then it looks like there's actually a run of getting a bunch of tails right over here, then a little run of heads, tails, and then another run of heads. And notice, even after 215 tosses, our experimental probability is still reasonably different from our theoretical probability. So let's do another 200 and see if we can converge these over time. And what we're seeing in real time here should be the Law of Large Numbers. As our number of tosses gets larger and larger and larger, the probability that these two are very different goes down and down and down. Yes, you will get moments where you could even get 10 heads in a row, or even 20 heads in a row, but over time, those will be balanced by the times where you're getting a disproportionate number of tails. So I'm just gonna keep going; we're now at almost 800 tosses. And you see now we are converging. We're gonna cross 1,000 tosses soon. And you can see that our proportion here is now 51%; it's getting close now, we're at 50.6%.
And I could just keep tossing; this is 1,100, and we're gonna approach 1,200 or 1,300 flips right over here. But as you can see, as we get many, many more flips, it was actually valuable to see, even after 200 flips, that there was a difference in the proportion between what we got from the experiment and what you would theoretically expect. But as we get to many, many more flips, now we're at 1,210, we're getting pretty close to 50% of them turning out heads. But we could keep tossing it more and more, and what we'll see is, as we get larger and larger and larger, it is likely that we're gonna get closer and closer and closer to 50%. It's not to say that it's impossible that we diverge again, but the likelihood of diverging gets lower and lower the more tosses, the more experiments, you make.
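The running proportion the video tracks can be reproduced with a short simulation. This is a hypothetical Python sketch, not the Macmillan applet itself: it records the proportion of heads after each simulated toss, so early values can swing widely while later values settle near 0.5.

```python
import random

random.seed(42)  # reproducible run

def running_proportions(num_tosses, p_heads=0.5):
    """Proportion of heads after each of num_tosses simulated flips."""
    heads = 0
    proportions = []
    for i in range(1, num_tosses + 1):
        if random.random() < p_heads:
            heads += 1
        proportions.append(heads / i)
    return proportions

props = running_proportions(100_000)
# Early proportions can be far from 0.5; later ones converge toward it.
print(props[9], props[199], props[-1])
```

Plotting `proportions` against the toss number would give the same kind of converging curve the video shows.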