
Thread: In Classroom of the Future, Stagnant Scores

  1. #1
    Join Date
    Mar 2011
    Posts
    215

    Default In Classroom of the Future, Stagnant Scores

    From the New York Times: In Classroom of the Future, Stagnant Scores

    Acquiring and deploying hardware before you've identified and validated applications that accomplish your objectives is the oldest blunder in information technology.
    Last edited by DHBernstein; 09-04-2011 at 05:03 PM.

  2. #2
    Join Date
    Aug 2011
    Posts
    13

    Default

    Quote Originally Posted by DHBernstein View Post
    Acquiring and deploying hardware before you've identified and validated applications that accomplish your objectives is the oldest blunder in information technology.
    But in this case they aren’t just acquiring and deploying technology willy-nilly; in fact, far from it:

    Quote Originally Posted by NY Times
    “I get one pitch an hour,” he [Kyrene technology director Mark Share] said. He finds most of them useless and sometimes galling: “They’re mostly car salesmen. I think they believe in the product they’re selling, but they don’t have a leg to stand on as to why the product is good or bad.”

    Mr. Share bases his buying decisions on two main factors: what his teachers tell him they need, and his experience. For instance, he said he resisted getting the interactive whiteboards sold as Smart Boards until, one day in 2008, he saw a teacher trying to mimic the product with a jury-rigged projector setup.
    Does this mean we can’t trust the judgment of the people whose job it is to assess technology for our schools? Does it mean that so-called teachers are completely clueless about how to teach, and are grabbing blindly at high-tech straws?

    I don’t think so. I think there are two things that are keeping educational technology from “working,” in Kyrene or elsewhere.

    First, educational technology is expected to be a miracle that it is not:

    Quote Originally Posted by NY Times
    To support its conclusion, [President Clinton’s science and technology] committee’s report cited the successes of individual schools that embraced computers and saw test scores rise or dropout rates fall. But while acknowledging that the research on technology’s impact was inadequate, the committee urged schools to adopt it anyhow.
    That sounds like a classic confusion of correlation with cause-and-effect. If Wayland’s MCAS scores are higher than those in, say, Brockton, I can assure you that the difference has little to do with the Brockton students’ lack of access to laptops. Schools that embraced technology before the Clinton committee were more likely to A) have the money to spend on that technology, and B) have parents and other stakeholders who were engaged enough in their children’s education to assert that the technology would add value to it. I assert that that engagement is much more of a factor in educational achievement than the technology purchases it helped to bring about.
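
    To make the confounding concrete, here is a toy sketch (a quick Python simulation; every number in it is invented purely to illustrate the statistical point) in which a hidden "engagement" factor drives both laptop purchases and test scores, while the laptops themselves contribute nothing:

    Code:
        import random

        random.seed(0)

        # Toy model: community "engagement" (money, involved parents) drives BOTH
        # the decision to buy laptops AND student achievement. The laptops
        # themselves contribute nothing to the score in this model.
        schools = []
        for _ in range(1000):
            engagement = random.gauss(0, 1)                      # hidden confounder
            has_laptops = engagement + random.gauss(0, 1) > 0.5  # engaged districts buy more tech
            score = 70 + 8 * engagement + random.gauss(0, 4)     # score depends on engagement only
            schools.append((has_laptops, score))

        with_tech = [s for t, s in schools if t]
        without_tech = [s for t, s in schools if not t]

        print("mean score, laptop schools:   ", round(sum(with_tech) / len(with_tech), 1))
        print("mean score, no-laptop schools:", round(sum(without_tech) / len(without_tech), 1))
        # The laptop schools come out several points ahead even though the laptops
        # had zero causal effect; the entire gap comes from the engagement confounder.

    The laptop schools finish well ahead anyway, which is exactly the kind of "success" the committee's report was counting.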

    The second thing that keeps educational technology from “working” is that we are using the wrong definition of “working.” Test scores, especially high-stakes standardized test scores, are excellent measures of how particular students perform on a particular test on a particular day. What they aren’t very good at is determining whether students are able to use the resources around them, synthesize new ideas based on what they already know, and evaluate those ideas critically with an eye toward increasing their knowledge -- which I am willing to bet was coincidentally a major goal of the As You Like It lesson.

    A much better assessment of student learning (and of the value of the technology) in this case would have been to give the students a different work, of similar difficulty and complexity, and to see if their technology-assisted exploration of As You Like It had any bearing on the students’ ability to analyze, synthesize and evaluate the new material. Such an assessment would be almost impossible to encompass in a statistically-analyzable multiple choice test, but would certainly be a good indicator of whether the technology “worked.”
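
    If anyone did want to put numbers on such a transfer assessment, it wouldn't be a multiple-choice exercise, but rubric scores on the unfamiliar work could still be compared between a technology-assisted class and a traditional one. A rough sketch (the scores, class sizes, and rubric scale below are all hypothetical):

    Code:
        import random

        # Hypothetical rubric scores (0-20) on an unfamiliar work of comparable
        # difficulty: one class did the technology-assisted As You Like It unit,
        # the other a traditional unit. Every number here is invented.
        tech_assisted = [14, 16, 12, 17, 15, 13, 18, 14, 16, 15]
        comparison    = [12, 13, 11, 15, 14, 12, 13, 11, 14, 13]

        def mean(xs):
            return sum(xs) / len(xs)

        observed_gap = mean(tech_assisted) - mean(comparison)

        # Simple permutation test: if the unit made no difference, how often would
        # shuffling students between the two groups produce a gap this large?
        random.seed(1)
        pooled = tech_assisted + comparison
        n = len(tech_assisted)
        trials = 10000
        extreme = 0
        for _ in range(trials):
            random.shuffle(pooled)
            if mean(pooled[:n]) - mean(pooled[n:]) >= observed_gap:
                extreme += 1

        print(f"observed gap in mean rubric score: {observed_gap:.2f}")
        print(f"probability of a gap this large by chance: {extreme / trials:.3f}")

    None of those numbers are real; the point is only that "did the technology help students tackle material they hadn't seen before?" is a question that can be asked and answered without reducing everything to a bubble sheet.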

  3. #3
    Join Date
    Mar 2011
    Posts
    215

    Default

    Quote Originally Posted by Chris Hoffman View Post
    The second thing that keeps educational technology from “working” is that we are using the wrong definition of “working.” Test scores, especially high-stakes standardized test scores, are excellent measures of how particular students perform on a particular test on a particular day. What they aren’t very good at is determining whether students are able to use the resources around them, synthesize new ideas based on what they already know, and evaluate those ideas critically with an eye toward increasing their knowledge -- which I am willing to bet was coincidentally a major goal of the As You Like It lesson.

    A much better assessment of student learning (and of the value of the technology) in this case would have been to give the students a different work, of similar difficulty and complexity, and to see if their technology-assisted exploration of As You Like It had any bearing on the students’ ability to analyze, synthesize and evaluate the new material. Such an assessment would be almost impossible to encompass in a statistically-analyzable multiple choice test, but would certainly be a good indicator of whether the technology “worked.”
    If there's no agreed-upon definition of "worked", how can anyone successfully assess, select, and deploy educational software that works?

    Acquiring hardware like laptops and tablets is seductively easy, and makes most people feel good about "investing in education". Such hardware is generally accompanied by applications that can be used to teach appliance-level skills -- e.g. typing, word processing, spreadsheets, image processing, macros/scripts/actions, web searching -- all of which are valuable. But beyond examples like these, the selection of effective educational software requires a serious investment of time and energy by people with a broad range of experience and skills -- far more time and energy than most teachers and motivated parents can muster on a volunteer basis.

    Is there any site on the web that broadly supports the discovery, categorization, selection, and rating of educational software?

  4. #4
    Join Date
    Nov 2005
    Location
    Wayland MA
    Posts
    1,431

    Default

    I'd like to spend more time learning about Project Red, its work, its findings, and where it's headed. From their "About" page:

    What We’re Doing: We are conducting a national survey to analyze what’s working in technology-transformed schools and to show how technology can save money when properly implemented.

    1. We’re researching several thousand schools that provide access to the Internet for every student. We’re asking them what factors contributed to the success or failure of their program.

    2. We’re looking for other technology-transformed schools that we may have overlooked so we can have the most complete database ever assembled from which to learn.

    3. We’re also searching for proof of cost savings due to the implementation of technology in any K-12 environment, whether these savings come from online learning courses, professional development, concurrent enrollment in college courses, data mapping, special needs programs or any other program.

  5. #5
    Join Date
    Mar 2011
    Posts
    215

    Default

    Quote Originally Posted by Jeff Dieffenbach View Post
    I'd like to spend more time learning about Project Red, its work, its findings, and where it's headed.
    This article from June 2010 seems promising:

    "Among all school surveyed, 50 percent say they are seeing a reduction in student disciplinary actions, and 56 percent of their students plan to attend college. Among schools with 1-to-1 computing programs, these figures are 65 percent and 66 percent, respectively. But for schools with 1-to-1 programs that report using proper implementation strategies, including regular formative assessment and teacher collaboration, these figures jump to 82 percent and 86 percent."

    "In general, schools with lower student-to-computer ratios have better measurable results than schools with higher ratios. But still, too few schools are taking full advantage of their ed-tech investments.

    “Very few schools implement many of [the] key implementation factors, despite large investments in infrastructure and hardware,” Project RED said."

    "For instance, researchers found that not one school with a 1-to-1 student-to-computer ratio deployed all of the key implementation factors"

    What are these "key implementation factors"?

    From Project Red Key Findings:

    1. Intervention classes: Technology is integrated into every class.
    2. Change management: The principal leads change management and gives teachers time for both professional learning and collaboration.
    3. Games/simulations and social media: Students use these tools daily.
    4. Core subjects: Technology is integrated into the daily curriculum.
    5. Online assessments: Both formative and summative assessments are done frequently.
    6. Student-computer ratio: Fewer students per computer improves outcomes.
    7. Virtual field trips: With more frequent use, virtual trips are more powerful.
    8. Search engines: Students use them daily.
    9. Principal training: The principal is trained via short courses in teacher buy-in, best practices, and technology-transformed learning.


    I am disappointed by the absence of any assessment of the specific technology being used. Are all games and simulations uniformly beneficial, or are some more effective than others? What factors should be considered when selecting games and simulations?
