A Statistical Elephant

= Educational systems and Polemics =

A-Level system
The number of students awarded elite A* grades is set to reach almost one in ten, eclipsing last year’s total of 8.1 per cent, when the grade was introduced for the first time to mark out exceptional candidates. Academics are also predicting that the overall pass rate will rise for the 29th year in a row as students across Britain register another round of record results. []

Pass rates rose for the 29th consecutive year, with one in four entries awarded an A. Exam boards, some of which have had to admit over the summer that they set impossible questions and made errors in papers, are expecting a record number of complaints as desperate students seek to raise their grades. Read more: [|http://www.dailymail.co.uk/news/article-2027334/A-level-results-2011-hits-new-record-1-12-grades-awarded-A.html#ixzz1pxfTBbEN]

This is partly because in most other spheres of life we do indeed define "high standards" in relative terms – for example, to denote the best restaurants, the swankiest hotels, the top football teams, the fastest athletes. There are, of course, exceptions. We accept that anyone who has passed a driving test has reached the necessary "standard", even though there are many more drivers on our roads than 40 or 50 years ago. There are similarities between the driving test and school exams – for example, more people need to drive today, just as more people aim for higher education – but the public seems no more willing to accept that higher pass rates could be down to greater motivation and participation than to accept that they could result from better teaching. The reason is historical: O- and A-level standards were originally defined in relative terms, and that's what people grew used to. Students were measured against one another, not against an absolute standard. They were ranked and graded accordingly: the top 10% got an A, the next 15% a B, and so on. []
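
The norm-referenced scheme described above (the top 10% get an A, the next 15% a B, and so on) amounts to cutting a ranked cohort at fixed percentage quotas. A minimal sketch, with invented marks and quotas rather than any historical grade boundaries:

```python
def norm_reference(scores, quotas):
    """Rank candidates and hand out grades by fixed percentage quotas."""
    ranked = sorted(scores, key=lambda s: -s[1])   # best mark first
    grades, i = {}, 0
    for grade, percent in quotas:
        n = round(percent * len(ranked) / 100)
        for name, _ in ranked[i:i + n]:
            grades[name] = grade
        i += n
    for name, _ in ranked[i:]:                     # everyone left is unclassified
        grades[name] = "U"
    return grades

# Hypothetical cohort: under norm referencing the same raw mark can earn
# different grades in different years, because rank is all that counts.
marks = [91, 85, 78, 74, 70, 66, 61, 55, 48, 40]
cohort = [("s%d" % k, m) for k, m in enumerate(marks)]
quotas = [("A", 10), ("B", 15), ("C", 35), ("D", 25)]
print(norm_reference(cohort, quotas))
```

Feeding a uniformly stronger cohort through the same quotas still yields exactly one A per ten candidates, which is why results could not "inflate" under this scheme.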

Education Secretary Michael Gove has ordered a review of A-levels to see how they compare with exam systems in other countries. He has also said he wants more emphasis on a final exam which stretches candidates' capacity for original thought. Recent changes to the A-level - brought in at the same time as the A* grade - involved the introduction of questions designed to stretch the brightest students and the cutting of the number of modules or sections of an A-level from six to four. []

Degree system
The present system of classifying degrees is not fit for purpose and should be scrapped, the head of the universities watchdog has told MPs. Peter Williams, the chief executive of the Quality Assurance Agency, told the Commons universities select committee that the smaller, research-led universities of the 1994 Group were mostly responsible for the huge increase in firsts and 2:1s in the past five years that has provoked talk of degree inflation. "The degree classification system is not fit for purpose ... It was designed for a smaller higher education world. It has passed its usefulness," Williams said.

He said 118 individual institutions had the power to award degrees, and they set their own standards based on a threshold set by the QAA. There was no national curriculum or examination to regulate standards. Williams nevertheless argued against regulation, saying it would undermine the diversity of the system and be the "death knell of innovation", and he maintained that standards had improved because students worked harder. He told the committee that 12% of the more than 600 complaints monitored by the QAA as a result of the public debate over degree standards were worthy of further investigation, and that he was asking his board today to give the go-ahead for investigating the claims of misconduct.

The committee's chairman, Willis, told EducationGuardian.co.uk: "The meeting exposed that we have a system of awarding degrees that doesn't stand up to scrutiny. And we have an organisation that doesn't have the ability or research capacity to influence that in the future. The current system has to go."

The shadow universities minister, Rob Wilson, said: "There is no doubt that the current degree classification system needs updating. We need more information to be provided when degrees are awarded so that employers are better able to assess graduates' abilities. The recommendations made by the Burgess report last year carry considerable merit." []

and again……… []

Matthew Effect
The Matthew effect in education was described by Keith Stanovich based on the Matthew effect in sociology. It derives its name from a passage (Matthew 25:29) in the New Testament: "For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath."[1] Stanovich used the term to describe a phenomenon that has been observed in research on how new readers acquire the skills to read: early success in acquiring reading skills usually leads to later successes in reading as the learner grows, while failing to learn to read before the third or fourth year of schooling may be indicative of life-long problems in learning new skills. This is because children who fall behind in reading, read less, increasing the gap between them and their peers. Later, when students need to "read to learn" (where before they were learning to read), their reading difficulty creates difficulty in most other subjects. In this way they fall further and further behind in school, dropping out at a much higher rate than their peers. []

The Matthew Effect, however, pervades all aspects of schooling. Parenting is the key to the socio-emotional soft skills that drive educational success. The neighborhood is the prime indicator of economic success. "Skills beget skills and motivation begets motivation," explains James Heckman. And as illustrated by the concept of "degrees of separation," being isolated from a broader functional community undermines motivation. Combine enough isolated and traumatized kids in a high-poverty neighborhood school and a "tipping point" is crossed where disorder grows rampant. The dysfunctional learning culture drives away the best teachers, as magnet schools cream away the most motivated of the students. Worse, rich states and school systems that invest more per capita are rewarded disproportionately through federal funds. And even worse, data-driven accountability has often damaged the schools it was designed to assist by encouraging excessive test prep and narrowing the curriculum. What I would like to see you consider, as you ponder the Matthew Effect (and it would certainly please me a good bit if you would refrain from putting the blame on parents), is: what is the school's role as mediator of this effect? As you think about schools in communities where you see a tipping point of bad influences, or scarcities, or deficits, are the schools a part of the bad influence, scarcity and deficit? Do they amplify or counteract these factors? I would suggest that many who work in schools see themselves as perhaps "in" but not "of" the neighborhood. They see themselves as missionaries to the great unwashed, constantly frustrated because they cannot peddle their wares of salvation. []

Matthew effects are complicated. They may be ambiguously and simultaneously functional and dysfunctional for the social systems in which they arise. Their consequences are usually multiple and ambivalent, and are more positively functional for some than for others. Who should be the “some” and who should be the “others”? These are among the moral and political questions that Matthew effects pose for us. A dispute has raged for years in my former home city of San Antonio over the legitimacy of “Robin Hood” funding for schools, which redirects some tax revenues from richer to the poorer school districts where the financial need is manifestly greater. This legal and moral dispute, which lives on in Texas and in many other jurisdictions, requires that we understand and address Matthew effects when advantages become self-amplifying and cumulative in the absence of intervention. Should we and our institutions intervene to prevent the destructive consequences of self-amplifying loops of social and economic advantage? The Sheriff of Nottingham had one view. Robin Hood had another. []

Flynn Effect
The Flynn effect is the name given to a substantial and long-sustained increase in intelligence test scores measured in many parts of the world. When intelligence quotient (IQ) tests are initially standardized using a sample of test-takers, by convention the average of the test results is set to 100 and their standard deviation is set to 15 or 16 IQ points. When IQ tests are revised they are again standardized using a new sample of test-takers, usually born more recently than the first. Again, the average result is set to 100. However, when the new test subjects take the older tests, in almost every case their average scores are significantly above 100.

Test score increases have been continuous and approximately linear from the earliest years of testing to the present. For the Raven's Progressive Matrices test, subjects born over a 100-year period were compared in Des Moines, Iowa, and separately in Dumfries, Scotland. Improvements were remarkably consistent across the whole period, in both countries.[1] This apparent increase in IQ has also been observed in various other parts of the world, though the rates of increase vary.[2]

IQ tests are updated periodically. For example, the Wechsler Intelligence Scale for Children (WISC), originally developed in 1949, was updated in 1974, in 1991, and again in 2003. The revised versions are standardized to 100 using new standardization samples, and in ordinary use IQ tests are scored with respect to those samples. The only way to compare the difficulty of two versions of a test is to conduct a study in which the same subjects take both versions; doing so confirms IQ gains over time. The average rate of increase seems to be about three IQ points per decade in the US on tests such as the WISC. The increasing raw scores appear on every major test, in every age range and in every modern industrialized country, although not necessarily at the same rate as in the US using the WISC.[9] Though the effect is most associated with IQ increases, **a similar effect has been found with increases of semantic and episodic memory.[3]** []
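
The restandardization arithmetic behind the effect is ordinary deviation-IQ scoring (sample mean mapped to 100, one standard deviation to 15 points). A sketch showing why a later cohort averages above 100 on an older test's norms; the raw-score samples and the five-point raw gain are invented for illustration, tuned to roughly the three-points-per-decade figure quoted above:

```python
from statistics import mean, pstdev

def deviation_iq(raw, norm_mean, norm_sd, sd_points=15):
    """Score a raw result against a standardization sample (mean -> 100)."""
    return 100 + sd_points * (raw - norm_mean) / norm_sd

# Hypothetical raw scores: the 1979 cohort norms the test; the 2009
# cohort does slightly better on the identical items (gain invented).
cohort_1979 = [38, 41, 44, 47, 50, 53, 56, 59, 62]
cohort_2009 = [r + 5 for r in cohort_1979]

m, s = mean(cohort_1979), pstdev(cohort_1979)
iq_2009_on_old_norms = mean(deviation_iq(r, m, s) for r in cohort_2009)
print(iq_2009_on_old_norms)   # above 100 on the old norms
```

By construction the 2009 cohort would average exactly 100 on its own restandardized norms, which is how the drift stays invisible until old and new versions are compared directly.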

Field Failure
The bathtub curve, displayed in Figure 1 above, does not depict the failure rate of a single item, but describes the relative failure rate of an entire population of products over time. Some individual units will fail relatively early (infant mortality failures), others (we hope most) will last until wear-out, and some will fail during the relatively long period typically called normal life. Failures during infant mortality are highly undesirable and are always caused by defects and blunders: material defects, design blunders, errors in assembly, etc. Normal life failures are normally considered to be random cases of "stress exceeding strength." However, as we'll see, many failures often considered normal life failures are actually infant mortality failures. Wear-out is a fact of life due to fatigue or depletion of materials (such as lubrication depletion in bearings). A product's useful life is limited by its shortest-lived component. A product manufacturer must assure that all specified materials are adequate to function through the intended product life. []
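
The three regions of the curve can be reproduced by superposing hazard rates: a decreasing Weibull hazard for infant mortality, a constant hazard for random normal-life failures, and an increasing Weibull hazard for wear-out. All shape and scale parameters below are invented for illustration, not taken from any real product:

```python
def weibull_hazard(t, shape, scale):
    """Instantaneous failure rate h(t) = (k/s) * (t/s)**(k-1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t):
    infant  = weibull_hazard(t, shape=0.5, scale=200.0)   # falling: early defects
    random  = 0.001                                       # flat: stress exceeding strength
    wearout = weibull_hazard(t, shape=5.0, scale=1000.0)  # rising: fatigue/depletion
    return infant + random + wearout

# The population rate falls through infant mortality, flattens across
# normal life, then climbs again as wear-out dominates.
for t in (10, 100, 500, 900):
    print(t, round(bathtub_hazard(t), 5))
```

A Weibull shape parameter below 1 gives a decreasing rate and above 1 an increasing one, which is why this distribution is the conventional choice for modelling both ends of the bathtub.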

This paper presents a detailed look at how unmanned ground vehicles (UGVs) fail in the field, using information from 10 studies covering 15 different models in Urban Search and Rescue or military field applications. One study explores failures encountered in a limited amount of time in a real crisis (the World Trade Center rescue response). Another covers regular use of 13 robots over two years. The remaining eight studies are field tests of robots performed by the Test and Evaluation Coordination Office at Fort Leonard Wood. A novel taxonomy of UGV failures is presented which categorizes failures by their cause (physical or human), their impact, and their repairability. Important statistics are derived and illustrative examples of physical failures are examined using this taxonomy. Reliability in field environments is low, with a mean time between failures of between 6 and 20 hours. For example, during the PANTHER study (F. Cook, 1997), 35 failures occurred in 32 days. The primary cause varies: one study showed 50% of failures caused by effectors; another showed 54% of failures occurring in the control system. Common causes are unstable control systems, platforms designed for a narrow range of conditions, limited wireless communication range, and insufficient bandwidth for video-based feedback. []
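
The headline reliability figure is simply accumulated operating time divided by failure count. A sketch using the PANTHER-style counts; the duty cycle is an assumption for illustration, since the study reports calendar days rather than operating hours:

```python
def mtbf(total_hours, failures):
    """Mean time between failures for a repairable fleet."""
    return total_hours / failures

# 35 failures over 32 days of fielding; the 8 h/day duty cycle is a
# hypothetical figure, not one reported in the study.
hours_per_day = 8
print(mtbf(32 * hours_per_day, 35))   # falls inside the 6-20 h range above
```

Because the estimate scales linearly with assumed usage, a round-the-clock duty cycle would instead put the same failure count near the top of the quoted range.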

University Survival
[]

= The purpose of educational systems  =

//[Discussion]//

= Tokens of the system  =

Insinuated that systems were immutable and that any reaction from disaffected parties would either be negligible, or would be met by the system architect merely resorting to a plan B that the said architect had already anticipated.
 * **BM1:**

The OU did not receive 20% of students with more than 80%, so it decided to lower the threshold so that the statistical necessity could be met.
 * **BM2:**

Expressed signs of elation given that they had now received a Distinction.
 * **BM3+4:**

Recognised that having their entries inexplicably edited caused them to lose the motivation to continue participating in the forums.
 * **BM5:**

Stated that they wouldn’t bother trying to do their final TMA.
 * **BM6+7+8:**

Found that assignment marking changed radically with different tutors. They no longer felt that they had earned their distinction grade, given that a fellow student had received varying marks from different tutors. It was suggested that they advise their acquaintance to fill in the OU complaints form, but the would-be complainer was too stressed, too demotivated or too lacking in belief in the OU system to follow this protocol.
 * **BM9:**

Had completed the course and was aware that their assignments had been mismarked, but didn’t really care anymore. They also noted the sparseness of other students’ participation towards the end of the course.
 * **BM10:**

Felt that systems-thinking was about facing the unknown and self-belief, although when they recognised that their assignments were being mismarked they changed their tutor.
 * **BM11:**

None of the motivational points matter because it’s meant to be ironic and lawless.
 * **BM12:**

30% of students left the course, but on some courses as many as 50% of students leave prematurely.
 * **BM13:**

Stated that “some students are TMA driven and some students are Course driven.” //[Berardi-world]//
 * **BM14:**

Didn’t always know where they lost the marks on the courses that they did with the OU.
 * **BM15:**

Noticed that no matter how much study they put into the course, the TMA scores would always be about the same.
 * **BM16:**

Stated that the marking scheme for the systems course was ambiguous.
 * **BM17:**

The OU stated that it was to bring in a compulsory level 1 course.
 * **BM18:**

“I find level 3 courses are [//often?//] easier than level 2 courses.”
 * **BM19:**

Recognised that the OU courses would contain errors, and that these errors would not appear in the assignments, so those sections of the course could be ignored. They also made a note of these aspects of the course but didn’t elaborate on what they did with this information.
 * **BM20:**

Purchased a multitude of past exam papers, observed that there was a question common to all of them, and prepared for the exam on the basis of this information.
 * **BM21:**

The OU changed its advertising campaign from “be inspired” to “employers value a degree”.
 * **BM22:**

Thinks courses are about changing and evolving to whatever the system decrees, and about adapting in order to survive, anticipate and manage these changes.
 * **BM23:**

Thinks systems “depend on the motivations of the various people using systems thinking for their own nefarious reasons”
 * **BM24:**

Wanted to know if there was a template for systems thinking, and the discussion then went on to something about templates and methods.
 * **BM25:**

Stated that they didn’t think that systems-thinking was working for them.
 * **BM26:**

Wanted to know how to start doing something (as they couldn’t on their own), but it didn’t matter because they would wait until the end of their degree before trying to do anything.
 * **BM27:**

Also wanted to know how to do something.
 * **BM28:**

Said that “if someone wants to be rebellious or ‘smash the system’ for their own purposes it is easy to find a way to justify doing so….people will lose sight of their own secret motives…” They have since had their degree/diploma framed and hung on a wall.
 * **BM29:**

Believes that any system only requires some positive feedback to correct it.
 * **BM30:**

“I got a distinction, I’m a percentage. I’m a label, and when I grow up I’m going to be a ratio – and then another label.”
 * **BM31 – about 40 or so:**

Asked if the O.U. MBA (or some such thing) was credible. They were told that it was, because some third-party, second-order statistical device had said it was credible and ranked it third - so it must be.
 * **BM41:**

= By-products of the statistical force  =

//[Discussion]//