Original post by GonvilleBromhead

Amazing. That is the best false equivalence I've seen in a long time - it's the equivalent of saying 'you think cars go wrong? You must just not believe in machinery'. As I've pointed out numerous times, I make no comment on this absolute load of claptrap - the OP is full of rubbish - but my point is that raw numbers do not by themselves entail a conclusion. Case in point: a study compared two cities. One had a homelessness figure of, say, 10,000 (I've forgotten the exact figures, but they don't actually matter), the other 1,000. The study had perfect methodology (as close as can be achieved). It concluded that because 10,000 is a bigger number than 1,000, that town was worse for homelessness. But the town of 1,000 had a rate per 100 residents twice that of the town of 10,000 - and to me the rate per 100 is the worse figure, because it means each individual is more likely to be homeless. So is the study correct? There is always a gap between the raw data and the assertion.
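The per-capita point can be sketched in a few lines of Python. The populations below are invented purely for illustration (the post doesn't give them), but they show how a bigger raw count can coexist with a lower individual risk:

```python
# Illustrative figures only - the populations are invented, not from the study.
city_a = {"homeless": 10_000, "population": 2_000_000}  # the "10,000" town
city_b = {"homeless": 1_000, "population": 100_000}     # the "1,000" town

def rate_per_100(city):
    """Homeless residents per 100 people - i.e. an individual's risk."""
    return 100 * city["homeless"] / city["population"]

print(rate_per_100(city_a))  # 0.5 per 100
print(rate_per_100(city_b))  # 1.0 per 100 - twice the rate, a tenth of the count
```

Which city is "worse" depends entirely on whether the question is about totals or about individual risk; the raw data supports both readings.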
Also, weirdly, for some reason you seem to presume I'm questioning gravity or something. Things like that are readily demonstrable. It's hard to skew the value of gravity through ideological leaning or interpretive assertion, because there is no confounding element or socially charged topic. Take an example from my law course: we were shown a study on criminality that used the scientific method to talk absolute rubbish. If you broke down its figures (it was issued to us as a warning against citing sources without reading them properly), it concluded that six out of every five people had been assaulted, because its categories of violence overlapped and its measurement method didn't isolate repeat reports from the same victims.
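A hedged sketch of how that arithmetic can happen. These records and category names are invented, not the actual study's data, but they show the mechanism: overlapping tags plus undeduplicated repeat reports can push a 'victim count' above the population itself:

```python
# Hypothetical data: 5 people, some incidents tagged under more than one
# category of violence, and one person reported twice.
incidents = [
    {"person": 1, "tags": {"assault", "public disorder"}},  # counted twice
    {"person": 2, "tags": {"assault"}},
    {"person": 3, "tags": {"assault", "domestic"}},         # counted twice
    {"person": 4, "tags": {"assault"}},
    {"person": 5, "tags": {"assault"}},
    {"person": 5, "tags": {"assault"}},  # repeat report, not deduplicated
]

# Flawed method: sum every tag occurrence as if each were a distinct victim.
naive_total = sum(len(i["tags"]) for i in incidents)
print(naive_total)  # 8 "victims" out of a population of 5

# Sound method: count distinct people.
actual_victims = len({i["person"] for i in incidents})
print(actual_victims)  # 5
```

The flawed count exceeds 100% of the population without a single false record; the error is entirely in the categorisation and aggregation.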
Similarly, how many 'studies' are released claiming to prove misogyny or some other topic with near-infinite confounding variables? Then you have studies that use terrible methods or poor reasoning, or that are simply blatantly ideological. Take the supposed 1-in-5 figure for rape on campuses, which would (a) actually make campuses more dangerous than the surrounding areas, and (b) sits alongside every figure from 1 in 4 to 1 in 1000 presented by various studies. This is before you get into issues of how the data was collected - interpreting responses, self-reporting, assumed evaluation (see the study that claimed babies were racist because they stared at same-race pictures for marginally longer on average). Studies are rife with issues, which is why peer review and repeatable outcomes are crucial. But how do you establish a repeatable method for a study that asks 'do you think we should stop affirmative action and just pursue meritocracy?' and classes all affirmative answers as 'racist' (a study on the HuffPost linked here a while back)? In that one they manipulated the gradient of the graph line to claim race was the most important factor, using 'certainty' to set the slope - i.e. they said we are more certain these people are openly racist than we are that the census data on their income is correct.
Also, why is my saying we should be as skeptical of academics as we are of randomers an inherently incorrect statement? Why the bizarre assumption that because I don't inherently trust academics, I therefore inherently trust a random on the internet? That isn't even close to logic. Obviously, if I'm skeptical of someone who is qualified, I'm just as skeptical of someone who is not qualified - the fact this isn't obvious to everyone astounds me.
This is particularly the case as a law student, where we see bad studies all the time. Such complex legal and social phenomena cannot simply be boiled down to a single measurable statistic. The information is there, but it gives you no insight into the causal link. For example, the black-white sentencing gap is argued in 'a study' to be 40%, but when you remove confounding factors such as prior charges it becomes 8% - and if you reduce it further by 'remorse shown' and 'aggressive courtroom behaviour' it becomes 4%. Even ignoring the latter and using the harder data of the 8% figure, that is a 32 percentage point disparity between the study's headline and reality. Studies - and academics - are not inherently correct simply by virtue of being studies and academics. Again, read Aristotle's theory of equity: it literally doesn't make sense, but he was an academic so it's correct? Nonsense.
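To make the arithmetic explicit, using the figures quoted above (the gap between the headline and the controlled figure is a difference in percentage points, not a ratio):

```python
# Figures as quoted in the post; treated as percentages for illustration.
raw_gap_pct = 40       # headline disparity before any controls
after_priors_pct = 8   # after controlling for prior charges
after_conduct_pct = 4  # after also adjusting for courtroom conduct

# Difference between headline and controlled figure, in percentage points:
print(raw_gap_pct - after_priors_pct)   # 32

# Expressed as a ratio, the headline is 5x the controlled figure:
print(raw_gap_pct / after_priors_pct)   # 5.0
```

Which framing ("32 points" vs "5x inflated") you lead with is itself an interpretive choice - the same underlying numbers support both.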
Because you're on a thread about misandry. There is no concrete evidence, from a study or otherwise, for such complex social interpretations. No study will prove misandry or misogyny or racism; it may highlight contributing factors, but never the direct source. You need a whole load of these indicators to paint a broader picture, and more social insight still to reach full conclusions. Also, that's a very personal response to a very broad statement - as if it's specifically about you.
I don't. That's obviously illogical and a stupid thing to say.
If that's what you think, then you clearly haven't been reading what I've actually written; this whole thing is a misrepresentation. The point is: if you ask who knows more about pregnancy, a male doctor or a woman who has had a child, the answer depends on your question. If it's a personal question - how does it feel? - then she gives the more appropriate answer. If it's a general question about health or anatomy, then I'd be more inclined to ask the doctor. My point being, in an all-encompassing study you can't just take one data source and go 'f**k it, that'll do'.
If I were researching Alzheimer's, it would depend on the question. If I wanted to know just the facts, I'd ask a clinician. If I wanted to create social policy, say for a government department, I'd ask NHS accountants, doctors, clinicians, carers, those who have lived with a relative with Alzheimer's, and a cross-section of the general population; I'd take assessments of the financial impact and compare them against the groups most heavily affected by the disease, etc. My point is I wouldn't just ask a doctor and base my policy on that alone, because that is when you reach unreasonable conclusions. Suppose their expert testimony told me: 'it's most efficient to defund care and shift the burden to families - it'll cause a rise in the housing market as patients are forced to move back in with family, save the NHS millions by splitting the burden of care, and reduce the number of related injuries and accidents through close family attention'. If I acted on that evidence alone, six months later my policy would have caused untenable debt for the poorest families, likely a rise in poor care resulting in more deaths, and more mental health issues for already stressed family members, because I didn't bother taking a full suite of information into my conclusion.
A very noble view, but a bit simplistic. A study is only as good as the person carrying it out: as soon as the data gets interpreted, that person is layering their life experiences and lived biases onto the raw data and thereby formulating a conclusion. But again, this depends on the type of study. If it's about social issues, the propensity is much higher; nobody is going to get political about an experiment to, say, test the melting point of steel beams, so the data is the data. It's also a much simpler data set and methodology: a thing happens, you measure it, you repeat, and that's your answer. All studies are not created equal.
This whole principle of me thinking 'the opposite' just makes a binary out of a nuanced issue. Why is it that if I disagree with something, I must automatically think the complete opposite? I don't much like antifa beating up innocent people and stopping working-class people getting to work on time - does that make me automatically far right? I don't think boycotting the Dear White People film is anything but pointless hypocrisy, actively against the goals of comedic freedom and a society in which we are all free to make commentary on each other and everyone can enjoy it - does that make me a racist? This binary thinking is dumb and leads to stupid conclusions that would indeed be flabbergasting were they remotely close to true.