Sunday, September 11, 2016

Blogs you need to know about

I write a blog about things I think are annoying. It has covered topics like Amy Schumer, the fact that people still say "bye Felicia," and people who refer to themselves as intellectuals.

Res Annoying 
http://resannoying.tumblr.com

And our cat writes a blog about living with us. 
barnabyscatblog
https://barnabyscatblog.wordpress.com

Monday, July 4, 2016

Useless Gun Post

We don't have any way of saying whether or not carrying a gun makes you safer, or prevents (slows, stops) mass shootings, etc. 

The best we can manage is one of two things. Either we tap a forefinger against our chin, consult our gut instincts, and offer a yea or nay, in which case if I say yea and you say nay, we're again at an impasse, and that didn't help us at all; OR we can point to cases where a person used their weapon to "prevent" (more about "prevent" later) a bad thing and say "SEE?" And the problem here is, well, it's anecdotal. Because we don't and can't have all of the data from all of the events of this type, or even a representative sample of the data, we can't say whether this case is an outlier or part of a general trend. Of course, for public relations it's an entirely different matter. But we're not interested in merely creating an impression of legitimacy, are we? We want to be able to say, to the best of our ability, whether or not we're any safer. And to do that we need data.

But before we get data, we need to clean up some definitions. "Safer," for example, is a pretty nebulous term. Say the data showed that carrying a gun comes with a 10% decrease in the likelihood of death but a 30% increase in the likelihood of injury. Is that acceptable? Is death the only thing we're interested in? However we hash it out, we need to say explicitly, ahead of time, how we're measuring, or we'll only be drawing a circle around whatever data seems to support our own biased position and saying we proved it, not unlike the anecdotal case above.
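To make that concrete, here's a minimal sketch in Python, using the made-up 10%/30% numbers above and hypothetical baseline risks, showing that the verdict flips depending entirely on how much weight you decide an injury carries relative to a death:

# A minimal sketch, with made-up numbers: whether "safer" comes out
# yes or no depends on how we weight injury against death.

base_death, base_injury = 0.010, 0.020   # hypothetical baseline risks
gun_death = base_death * 0.90            # the 10% decrease in death risk
gun_injury = base_injury * 1.30          # the 30% increase in injury risk

# injury_weight: how bad we decide an injury is, relative to a death
for injury_weight in (0.05, 0.5):
    harm_without = base_death + injury_weight * base_injury
    harm_with = gun_death + injury_weight * gun_injury
    verdict = "safer" if harm_with < harm_without else "less safe"
    print(f"injury weighted at {injury_weight}: carrying looks {verdict}")

Same data, opposite conclusions. The weight is a value judgment, which is exactly why it has to be declared before anyone looks at the numbers.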

Also, safer for whom? Are we interested in self-protection only, or the protection of others as well? Clearly the answer to that question will change the way we look at data.

Also, "prevent" is a tricky word. It suggests we're stopping people before they start. That either means a stare down with nothing happening afterwards, or it means shooting people in cold blood. Presumably no one would advocate shooting in cold blood. So we would probably want to look at stopping or slowing down a shooter. Here the problem is we have to enter the weird world of counterfactuals: what would have happened, or what could have happened, i.e. What did happen in the possible world where there was no intervention. We don't have access to that information, because that information doesn't actually exist. We can speculate, but this will lead to estimation, and estimation is heavily subject to bias. (Suppose your hypothesis is that guns do not make anyone safer and you estimate the "could have killed" number at 5, whereas my hypothesis is that guns do make you safer, and I estimate the "could have killed" number at 100). Perhaps I interpret a (relatively) innocuous bar fight as a potential mass shooting (ahem), or you interpret a mass shooting a several distinct separate incidents. Clearly our beliefs will taint our analysis.

And lastly, we need to count everyone who was there. This probably seems obvious, but in detail it gets tricky. Suppose there is a shooting in a mall where there are 5000 people. 10 die, 20 are wounded. We need to know about the other 4970 people. How many of them are carrying weapons? How can we get that information? Are we considering them participants or not? Are they causal factors in determining safety rates? This is important because if it turns out that in most cases other gun-carriers were present who did nothing or ran away, their protect-self rates would be high, while their protect-other rates would be zero.

Now suppose it is a mall (not a ticketed event where we can get an attendance figure) and we just don't have a good way of knowing how many people were present. Again, the estimation-bias issue creeps back in: how many others we estimate were there will change the ratios, as above.
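A quick sketch of how much that matters, with everything hypothetical except the arithmetic. Hold the number of intervening bystanders fixed at one and vary only our guesses about the crowd:

# A sketch of the denominator problem: the rate at which armed
# bystanders intervene depends on two numbers we can only guess at --
# how many people were in the mall, and what share of them were armed.

interveners = 1                          # hypothetical: armed bystanders who acted

for crowd_guess in (2000, 5000):         # no attendance figure for a mall
    for armed_share in (0.01, 0.10):     # guessed share of the crowd carrying
        carriers = crowd_guess * armed_share
        rate = interveners / carriers
        print(f"crowd {crowd_guess}, {armed_share:.0%} armed: "
              f"{carriers:.0f} carriers, intervention rate {rate:.1%}")

Four sets of guesses, and the intervention rate swings from 0.2% to 5%, a factor of 25, before we've examined a single real event.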

I say all this not to argue that we shouldn't research these things. We obviously should. Just because something is tricky doesn't mean it should be avoided. I simply mean to point out that if you think the evidence clearly supports your position, whatever your position is, you are grossly overestimating the evidence that's available.

Tuesday, March 22, 2016

If we talked about the NCAA like we talk about anything else

Actual Story: School A beat School B 102-81.

News Story: "School A and School B played each other today. We'll give each school the same amount of airtime to state why they won. You can weigh in on who won on our Facebook page."

Facebook Story: 
"Free throws cause autism!"
     "No they don't!"
     "We need to study whether they do better!"

"In 1992 someone from School B said something bad."

"The troops go to School A"
"The troops go to School B"

"School A contains ingredients I don't know what they are!"
     "If we don't know what it is, it's probably scary!"

"Hitler would support School A!"
"Hitler would support School B!"

"I don't believe in School B, and I shouldn't have to acknowledge that it exists."

"The troops go to School B."
     "They go to School A, too!"

"What are you all talking about? The score was 102-81. End of discussion. There's no argument to be had!"
     "Shut up, nerd!"
     "You're an elitist!"
     "You're trying to take away my right to free speech!"