▲ | DannyBee 2 days ago |
I read your other comment with the numbers, and I don't think it makes the amazing difference you seem to think it does. Certainly not to the degree I think makes it all worth it. Maybe if they at least plateaued in different places, but they don't. You seem fairly defensive (you've posted the same response repeatedly) about what still look like middling results.

As a basic example: while your point about the starting percentages is correct, the study lost participants over time. Group A (the 1k/month group) lost 33% of its participants by T3, and Group C (the 50/month comparison group) lost 38% of its participants. The table you quote from the final study doesn't include the people who were lost, only those who filled out both surveys, T1 and T3. So using it to say they helped a greater percentage of people is a bit weird.

They also don't give you the T2 table in the final report; you have to go look at the interim one. The T2 data makes T1->T3 look much less impressive, and definitely doesn't seem to support some amazing gain for Group A. As far as I can tell, the data looks even less impressive for your claim if you do T1->T2 and T2->T3 instead of just T1->T3 with only completers included.

It would certainly be easier to tease apart the point you're trying to make if they reported the number of originally unhoused vs. originally housed participants retained at each timepoint, but they don't.

So what am I missing? Why should I look at these results and think they're amazing?

(Also, I don't think I'd agree that the main argument the author makes rests on, and is refuted solely by, the results of the Denver study.)
▲ | vannevar 2 days ago | parent [-] |
> I read your other comment with the numbers and I don't think it makes the amazing difference you seem to.

Maybe you're looking more at the article headline, which implies the author was focused on the study results. The thrust of the article isn't that the programs are ineffective (in fact, toward the end of the article she's quite optimistic that isn't the case). Her problem is that the results are overstated. But one of the prime examples she cites to support that idea doesn't actually support it. Denver claimed significance, and the study results support their claim.

> The table you quote from the final study doesn't include the people who were lost, only those who filled out both surveys, T1 and T3. So using it to say they helped a greater percent of people is a bit weird.

Why is that weird? The percentage of the test group who found housing is significantly higher than the control's. We don't know what happened to the people who dropped out; the worst-case scenario is that none of them found housing, which leaves the stats as they are.

> So what am I missing? Why should I look at these results and think it is amazing?

They didn't claim it was amazing; they claimed it was significant. The author implied they were lying. They were not. That's what you're missing.