
Thread: Elementary School MCAS scores

  1. #61
    Join Date
    Jan 2007
    Location
    Wayland, MA
    Posts
    235

    Default

    Quote Originally Posted by Kim Reichelt View Post
    Dave, based on your comments throughout this thread about this data and its use, it would seem that to apply your comments consistently, you would agree with Jeff that TheRealTruth's comment is not supported by the data. Am I correct in my understanding of your position?
    No. The data that Jeff posted in this thread's first message neither supports nor refutes the assertion that Happy Hollow's scores are slipping worse than Claypit Hill's as the result of the elementary school reconfiguration.

    I do not agree with the statement "TheRealTruth's comment is not supported by the data" because it implies that the data is inconsistent with the comment, which is not necessarily the case.

    Quote Originally Posted by Jeff Dieffenbach View Post
    With respect to the Town Crier poster's allegations that MCAS scores slipped and that the reconfiguration was the cause, the data suggests otherwise.
    The data does not necessarily suggest otherwise, which is what prompted my initial post in this thread.

  2. #62
    Join Date
    Nov 2005
    Location
    Wayland MA
    Posts
    1,431

    Default

    Quote Originally Posted by Dave Bernstein View Post
    I do not agree with statement "TheRealTruth's comment is not supported by the data" because it implies that the data is inconsistent with the comment, which it is not necessarily the case.
    In what way is the MCAS data consistent with "TheRealTruth's" data?

  3. #63
    Join Date
    Jan 2007
    Location
    Wayland, MA
    Posts
    235

    Default

    Quote Originally Posted by Jeff Dieffenbach View Post
    In what way is the MCAS data consistent with "TheRealTruth's" data?
    Earlier in this thread, I described a scenario consistent with the data you posted in which the anonymous poster's assertion could be correct.

    Your claim that "the data suggests otherwise" is only true if one assumes that the two groups of students would perform identically in every respect. This assumption is ludicrous.

  4. #64
    Join Date
    Nov 2005
    Location
    Wayland MA
    Posts
    1,431

    Default

    Quote Originally Posted by Dave Bernstein View Post
    Earlier in this thread, I described a scenario consistent with the data you posted in which the anonymous poster's assertion could be correct.
    Are you referring to your "10x Mercury" supposition? That's an addition of make-believe new data, which strays far outside the original claim. Please use the original data set (MCAS data) to support your point. Bring in other actual data if you like.

    Quote Originally Posted by Dave Bernstein View Post
    Your claim that "the data suggests otherwise" is only true if one assumes that the two groups of students would perform identically in every respect. This assumption is ludicrous.
    You misunderstand. I didn't say that my analysis was proof of anything, just consistent with a rejection of the original claim.

  5. #65
    Join Date
    Nov 2005
    Posts
    726

    Default

    Quote Originally Posted by Dave Bernstein View Post
    Earlier in this thread, I described a scenario consistent with the data you posted in which the anonymous poster's assertion could be correct.

    Your claim that "the data suggests otherwise" is only true if one assumes that the two groups of students would perform identically in every respect. This assumption is ludicrous.
    Dave, I am, frankly, really surprised by what seems to be a double standard in how you evaluate analyses of data.

    "TheRealTruth" presented an assertion with which you seem to find no fault, although it does not seem to be supported by the data. Even if it were supported, clearly, your objections to Jeff's data would apply. For instance when you say:

    "The problem with incomplete, inaccurate, and/or irrelevant data is that it can be made to suggest nearly anything, to the joy of politicians and demagogues everywhere."

    or where you say:

    "The data presented here comes nowhere close to providing the information required to draw conclusions. The question is, "how would this group of students have performed had they not endured reconfiguration?". Drawing conclusions based on performance data from a different group of students that did not endure reconfiguration assumes that the two groups are in all other relevant ways and experiences identical. You've presented no data supporting this assumption."

    My question to you is: has TheRealTruth presented data that provides the necessary information in order to draw conclusions? You understand that all Jeff has done is see an assertion which he believes to be unsupported. He countered by providing information that does not support the initial argument. Whether or not you agree with Jeff that his information counters the initial assertion (and clearly you do not), I am left flabbergasted that you would not similarly reject the initial assertion.

    As I interpret your posts, it is fine for someone to make an assertion that is unsupported by the data (or perhaps supported by some interpretation of the data but still "incomplete, inaccurate, and/or irrelevant"). But it is somehow not fine for another to counter that assertion with incomplete or inconclusive information that runs against the initial assertion. I'm confused. One might conclude that you are happy to accept a poorly proven argument that you agree or sympathize with, but will not hear anything against it unless published in a peer reviewed journal. But this runs counter to the data I have seen in your past posts.

    I have found your posts generally to be well-thought out and logical, even when I did not agree with them. So I cannot understand what looks to be a double-standard in your logic here.

    And, of course, if you find the whole thread vacuous, then why post on it? If you find anything in it not vacuous, then why not address that instead? I am still very interested in discussing my post in which I presented evidence that the difference in performance between the two schools could be attributable almost entirely to performance among Special Education students.

    Why are there more Special Education students at Happy Hollow?
    What can we do to improve the performance of Special Education students at both schools?
    How much effort should be targeted at Special Education?
    How do we divide our resources and effort between the absolutely most talented, those who struggle, and the majority in between to produce the best overall result?

    These, I believe, are not vacuous questions, and are worthy of discussion. What say you?

  6. #66
    Join Date
    Nov 2005
    Location
    Wayland MA
    Posts
    1,431

    Default

    Quote Originally Posted by Kim Reichelt View Post
    Q1. Why are there more Special Education students at Happy Hollow?
    Q2. What can we do to improve the performance of Special Education students at both schools?
    Q3. How much effort should be targeted at Special Education?
    Q4. How do we divide our resources and effort between the absolutely most talented, those who struggle, and the majority in between to produce the best overall result?
    Kim, great set of questions (which I've numbered for ease of reference). Several comments.

    Regarding Q1, note that we need to be careful about what we're counting. If I'm not mistaken, you actually reported on the number of SPED students in grades 3-5, not 1-5, correct? Also, for the current school year, the gap may be closer: CH is 16.8% and HH is 18.7%. Given the range of reasons behind SPED designations and the many "environmental" factors, I am not optimistic that we'll be able to answer this question with any certainty. Moreover, I'm not convinced that we need to, with Q2-Q4 being more important and actionable.

    Regarding Q2, it would be helpful to have a baseline. For instance, how do Wayland's SPED students compare with the state average, and how does that SPED comparison match up with how our general population compares with the state average? Almost by definition, SPED students won't perform at as high a level as the general population. It *may* be that we're already getting strong results. That said, we should of course always be pushing to improve. As for what specifically we should do to improve, I'll leave that to our professional educators--it's certainly a question beyond my expertise.

    Regarding Q3, note that there are legal requirements concerning the education of SPED students--in my opinion, we must at least meet these requirements. This segues into your Q4 ...

    Regarding Q4, there's a balance that we must maintain, and it's not easily quantified. Wayland generally opts for an inclusion model, albeit with tracking of math in grade 6, science in grade 8, and most other subjects in grade 9. I'm not in favor of true "pull out" gifted and talented programs, but appreciative of the fact that there are some benefits to such an approach. At the HS, we start to see classes with students from different grades (and therefore ages). That's one way to have a narrower range of students in a class (which is arguably easier for the teacher and more effective for the student), but in a "fluid" way where there's not a strong demarcation between gifted/talented, "general," and SPED.

    In the end, our objective should always be differentiated instruction (meeting each student at his or her challenge--as opposed to boredom or frustration--level) that is cost effective. There's a natural tension in that objective, one made all the more apparent when resources are particularly limited. Our professional educators are the right ones to manage the balance, and acknowledging the ever-present push to do better, they do so well.

  7. #67
    Join Date
    Nov 2005
    Posts
    726

    Default

    Quote Originally Posted by Jeff Dieffenbach View Post

    Regarding Q2, it would be helpful to have a baseline. For instance, how do Wayland's SPED students compare with the state average, and how does that SPED comparison match up with how our general population compares with the state average? Almost by definition, SPED students won't perform at as high a level as the general population. It *may* be that we're already getting strong results. That said, we should of course always be pushing to improve. As for what specifically we should do to improve, I'll leave that to our professional educators--it's certainly a question beyond my expertise.
    I've attached a spreadsheet to help answer this question. I'm not sure that beating the state average is an adequate goal (though I admit, that isn't precisely what you said), but at least we can see that Wayland does outperform the state handily on all measures on all exams. (The spreadsheet provides CPI, and I added a column for the sum of Advanced and Proficient, labelled A+P.)

    The spreadsheet contains data for all towns in the DOE database, but I have filtered it to show the "peer" towns (which I could remember without looking them up). If anyone wants to play with the spreadsheet, they can undo the filters I put in (see the Include column on each tab) and add or subtract the towns they are interested in.

    What I found is that Wayland generally stacks up pretty well against the peer towns on SPED results, particularly for 5th grade. I don't know how stable that is, as I imagine the composition of the pool of Special Education students could certainly change from year to year. Still, there's clearly room for improvement.

    I was surprised by how poorly Lincoln seems to do - is there something special about their program there that would generate such bad (as in, consistently below state average) results?
    Last edited by Kim Reichelt; 10-13-2009 at 01:27 PM. Reason: left out the idea that we certainly could improve...

  8. #68
    Join Date
    Jan 2007
    Location
    Wayland, MA
    Posts
    235

    Default

    Quote Originally Posted by Jeff Dieffenbach View Post
    Are you referring to your "10x Mercury" supposition? That's an addition of make-believe new data, which strays far outside the original claim. Please use the original data set (MCAS data) to support your point. Bring in other actual data if you like.
    The data you posted fails to characterize other aspects of the two groups of students that could affect their academic performance, like their health. The example -- not supposition -- I provided simply illustrates how your incomplete data could suggest the wrong conclusion.


    Quote Originally Posted by Jeff Dieffenbach View Post
    You misunderstand. I didn't say that my analysis was proof of anything, just consistent with a rejection of the original claim.
    I fully understand that you're trying to refute the poster's claim with data that you acknowledge to be inadequate for that purpose. So you use words and phrases like suggest and consistent with to induce a conclusion unsupported by the available facts.

    Lies, damn lies, and statistics.

  9. #69
    Join Date
    Nov 2005
    Location
    Wayland MA
    Posts
    1,431

    Default

    Quote Originally Posted by Kim Reichelt View Post
    I've attached a spreadsheet to help answer this question. I'm not sure that beating the state average is an adequate goal (though I admit, that isn't precisely what you said), but at least we can see that Wayland does outperform the state handily on all measures on all exams.
    Not only is that not precisely what I said, it's not remotely what I said! [grin]

    What I'm interested in knowing is if our *advantage* relative to the state as a whole for our overall population is maintained for our special education students.

    And for those concerned that "automaton Jeff" only cares about MCAS scores, well, I don't. Scores are just one valuable guide to how we're doing. In the end, we should be striving to deliver an appropriate education to each and every student, as measured by a range of qualitative and quantitative evaluations.

  10. #70
    Join Date
    Nov 2005
    Posts
    726

    Default

    Quote Originally Posted by Jeff Dieffenbach View Post
    Not only is that not precisely what I said, it's not remotely what I said! [grin]
    Sorry. Just wanted to be clear that we need to set the bar higher.

    Quote Originally Posted by Jeff Dieffenbach View Post
    What I'm interested in knowing is if our *advantage* relative to the state as a whole for our overall population is maintained for our special education students.
    I'm not sure how best to measure this -- we could generate a Special Education ranking easily enough. Or see what percentage above the state average our CPI or Advanced+Proficient percentage is. Eyeballing it, it looks like we're doing OK. Note that I only looked at elementary school for the purpose of this comparison, so I can't be certain at this point that the same will hold for MS and HS.
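    [Editor's note: the "percentage above the state average" comparison can be sketched in a few lines of Python. The CPI figures below are made up for illustration only -- they are not actual DOE numbers -- and the comparison rule (SPED edge at least as large as the overall edge) is just one plausible way to define "advantage maintained".]

    ```python
    # Hypothetical CPI figures (illustrative only; not actual DOE data).
    state = {"all": 86.0, "sped": 70.0}
    wayland = {"all": 95.0, "sped": 82.0}

    def advantage(district, baseline, group):
        """Percent by which a district's CPI exceeds the state CPI for a group."""
        return 100.0 * (district[group] - baseline[group]) / baseline[group]

    overall_edge = advantage(wayland, state, "all")   # ~10.5% above state
    sped_edge = advantage(wayland, state, "sped")     # ~17.1% above state

    # One way to read "advantage maintained": the SPED edge over the state
    # is at least as large as the general-population edge.
    maintained = sped_edge >= overall_edge
    ```

    The same arithmetic would apply to the Advanced+Proficient percentage instead of CPI; only the input columns change.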

  11. #71
    Join Date
    Apr 2009
    Posts
    117

    Default What do we really know?

    Even though by now I’m not sure if we’re talking about MCAS scores or percentages of Special Needs children in each class, I appreciate the above discussion if only because it keeps me thinking about it.

    I started by wondering what MCAS tests actually say about individual skills; then, taking the tests, I wondered whether they measured what I knew or were maybe just a mental exercise; then I thought maybe they helped compare the performance of different schools. But if it’s hard to tell why Happy Hollow’s and Claypit Hill’s test results are different, maybe it’s best to use the tests only to compare what one school does against itself over time. Perhaps Jeff and Kim are right to break one school’s tests into different groups (cohorts, grades, subjects, etc.) and follow their changes through the years, hoping to isolate, if one can, some identifiable educational influence? It looks and sounds reasonable.

    But…

    If we have no way of knowing something as simple as why Happy Hollow’s and Claypit Hill’s test results are different (because, I guess, too many possible variables are at play: geographical variations such as special needs children, household income, and ethnic percentages; school variations such as teaching skills, educational philosophy, and reconfiguration stress; individual events that happen in any given year; or many possible things unknown to us), doesn’t that imply something about what we don’t know about the test results in general, no matter how they’re broken down or analyzed? Scientifically, I guess we’re missing a control group.

    We don’t know what the difference means between our two elementary schools, and we don’t know what the difference means between any two test groups. Aren’t the tests just a bureaucratic means of pretending that something is being done, something we can analyze and ponder the significance of, but that doesn’t help us understand why the results are as they are? Since they don’t, they don’t really tell us anything. Do the tests actually give any information that could be called “actionable”?

    What do you think?

    donBustin@verizon.net

  12. #72
    Join Date
    Nov 2005
    Location
    Wayland MA
    Posts
    1,431

    Default

    Quote Originally Posted by don Bustin View Post
    Aren’t the tests just a bureaucratic means of pretending that something is being done, something we can analyze and ponder the significance of, but that doesn’t help us understand why the results are as they are?
    I don't agree. The tests are there for accountability, to shine a light on potential causes for alarm, and to help inform change.

    Quote Originally Posted by don Bustin View Post
    Since they don’t, they don’t really tell us anything. Do the tests actually give any information that could be called “actionable”?
    Each year, our educators do a detailed analysis of our MCAS results (down to the level of each question), using that information to adjust our curriculum. I'd call that actionable.

  13. #73
    Join Date
    Dec 2005
    Posts
    44

    Default

    Before we go further in a discussion of why CH students did better than HH students on the MCAS, it would be great if a statistician would be willing to weigh in and let us know if the difference is statistically significant. Someone actually did that a while ago on a thread that was discussing a drop in property values from one year to the next. My memory is that it was extremely helpful.
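    [Editor's note: for readers curious what such a significance check might look like, here is a minimal sketch of a standard two-proportion z-test using Python's standard library. The pass counts are entirely hypothetical -- they are not the actual CH/HH results -- and a real analysis would use the true counts and appropriate test for the data.]

    ```python
    from math import sqrt, erf

    def two_proportion_z(x1, n1, x2, n2):
        """Two-sided two-proportion z-test: returns (z statistic, p-value)."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)                      # pooled proportion
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error
        z = (p1 - p2) / se
        # Two-sided p-value from the standard normal CDF, built from erf.
        p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p

    # Hypothetical counts: 78 of 100 CH students proficient vs 70 of 100 at HH.
    z, p = two_proportion_z(78, 100, 70, 100)
    significant = p < 0.05  # z ≈ 1.29, p ≈ 0.20: not significant at the 5% level
    ```

    The point of the sketch: a gap that looks meaningful at a glance (78% vs 70%) can easily fail a significance test at these sample sizes.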

  14. #74
    Join Date
    Jan 2007
    Location
    Wayland, MA
    Posts
    235

    Default

    Quote Originally Posted by Kim Reichelt View Post
    Dave, I am, frankly, really surprised by what seems to be a double standard in how you evaluate analyses of data.

    "TheRealTruth" presented an assertion with which you seem to find no fault, although it does not seem to be supported by the data.
    "TheRealTruth" has posted nothing in this thread to critique, Kim. Has "TheRealTruth" has made any attempt to induce false conclusions from the data Jeff posted?

    Quote Originally Posted by Kim Reichelt View Post
    My question to you is: has TheRealTruth presented data that provides the necessary information in order to draw conclusions?
    "TheRealTruth" has posted nothing in this thread.

    Quote Originally Posted by Kim Reichelt View Post
    You understand that all Jeff has done is see an assertion which he believes to be unsupported. He countered by providing information that does not support the initial argument. Whether or not you agree with Jeff that his information counters the initial assertion (and clearly you do not), I am left flabbergasted that you would not similarly reject the initial assertion.
    Jeff's attempt here to refute "TheRealTruth" with incomplete data is what drew my attention and response.

    Quote Originally Posted by Kim Reichelt View Post
    As I interpret your posts, it is fine for someone to make an assertion that is unsupported by the data (or perhaps supported by some interpretation of the data but still "incomplete, inaccurate, and/or irrelevant"). But it is somehow not fine for another to counter that assertion with incomplete or inconclusive information that runs against the initial assertion. I'm confused. One might conclude that you are happy to accept a poorly proven argument that you agree or sympathize with, but will not hear anything against it unless published in a peer reviewed journal. But this runs counter to the data I have seen in your past posts.
    At no point in this thread have I accepted or supported the argument made by "TheRealTruth". You've jumped to an incorrect and completely unsupported conclusion.

    Quote Originally Posted by Kim Reichelt View Post
    And, of course, if you find the whole thread vacuous, then why post on it? If you find anything in it not vacuous, then why not address that instead? I am still very interested in discussing my post in which I presented evidence that the difference in performance between the two schools could be attributable almost entirely to performance among Special Education students.

    Why are there more Special Education students at Happy Hollow?
    What can we do to improve the performance of Special Education students at both schools?
    How much effort should be targeted at Special Education?
    How do we divide our resources and effort between the absolutely most talented, those who struggle, and the majority in between to produce the best overall result?

    These, I believe, are not vacuous questions, and are worthy of discussion. What say you?
    Exposing a blatant attempt to induce false conclusions from incomplete data seemed a worthy undertaking, and has been my focus in this thread. Answering the questions you pose will likely involve analyzing data; the results will be more reliable and worthwhile if that analysis is, to use an old scouting term, square.

  15. #75
    Join Date
    Apr 2009
    Posts
    117

    Default

    Quote Originally Posted by Jeff Dieffenbach View Post
    Each year, our educators do a detailed analysis of our MCAS results (down to the level of each question), using that information to adjust our curriculum. I'd call that actionable.
    So funny, I can see them, the educators, sitting ’round the table saying, “Gee, only 40% of our students got question #44 right; they don’t seem to understand how this type of analogy works. Guess we’ll have to work with/teach more of those kinds of analogies before next year.”

    For all their analysis, I haven’t heard an explanation of their understanding of the difference between CH and HH (my apologies to Lawrie’s request for statistical significance… what is “significant”?), but the “detailed analysis” does seem to imply that they teach to the test, for whatever that’s worth.

    As I have said before, it doesn’t really matter what I think about education in Wayland, my children are grown and I’ve made my mistakes. You parents will just have to make your own.

    And to totally digress – about the new High School – I’d be so much more interested/supportive if I believed there was more imagination going on with the design. In particular, I’d pay more if the new school somehow incorporated “adult” usage. A couple of classrooms by the parking lot where townspeople could offer night classes. (I’ve always liked night/continuing school as the preferred model of education.) There could be fees that went to the school. Likewise, a computer/data center that the adults could utilize. WayCam is a good model of what’s possible. Maybe we could put more money into the new theatre, and townspeople could offer entertainment. Ticket returns could go to the school. It’d be fun; it’d be a community. Heck, the Office of Elder Affairs could be there if we were really imaginative, and educational.

    donBustin@verizon.net
