Should an Algorithm decide a student's future?

Nayan Grover,

Research Member

Indian Society of Artificial Intelligence and Law.

Kshitij Naik,

Associate Editor,

Indian Society of Artificial Intelligence and Law.


Statistical modelling is widely used across fields ranging from time series analysis to market segmentation. With improvements in algorithmic programs, it is now used even for critical functions such as budget allocation in legislative areas. But the question we face today is whether statistical modelling is perfect, or at least competent enough to be used in situations where someone's life depends on its outcomes. The problem of algorithmic bias still appears to be prevalent in prediction techniques, whether in machine learning prediction programs or in statistical models. The impact this problem can have on society has been witnessed in programs like PredPol, a predictive policing tool used by the LAPD which predicted higher crime rates in minority neighbourhoods due to algorithmic bias. The question that arises now is whether it is safe and ethical that, despite knowing all this, we employ statistical modelling to play a part in calculating the final grades of students whose futures depend on it.

What's the issue here?

The International Baccalaureate (IB), registered in Switzerland, offers a Diploma Programme and is present in more than 5,000 schools across 158 countries, with over 166,000 candidates. The IB usually conducts a mandatory set of examinations in May; the marks from these examinations are used to allot a final grade to each student. This final grade is very important because it is what students use to apply to universities and accept admission offers from them, and it carries around 90 per cent weightage in college admissions in Europe and Asia. Essentially, these grades decide these students' futures, and that is how important they are.

However, this year the pandemic caused major disruption to school and university examinations all over the world, and the IB was forced to cancel its final examinations. Unlike most universities, which moved their examinations online, the IB decided to use an algorithm-based statistical method for awarding these grades.

Where could IB's Model have gone wrong?

The IB used a three-step process to calculate final grades for each student.

The IB disclosed that it used predicted grades submitted by teachers, students' coursework grades, and historical assessment data from each school to determine what students might have scored had there been no pandemic. What students found was that their final grades were nowhere close to their predicted grades. Ali Zagmouth from Sweden, who filed a global petition regarding the IB's methods, said, "Many students got significantly lower grades than predicted. For example: 41 down to 34, 43 down to 37, 38 down to 28, 42 down to 36, 30 down to 26, and the list goes on. Essentially, the IB has lowered some students by up to 12 points."
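The IB has never published its actual formula, so the following is purely an invented toy sketch of how the three inputs it described (teacher-predicted grade, coursework grade, and a school's historical results) could interact in a naive weighted average. The function, weights, and all numbers here are hypothetical, not the IB's method:

```python
# Hypothetical sketch only: the IB has not published its model.
# This toy example shows how a strong student's predicted grade can be
# dragged down when the school's historical average is weighted in.

def naive_final_grade(predicted, coursework, school_history,
                      w_pred=0.5, w_course=0.3, w_hist=0.2):
    """Blend three signals with invented weights; school_history is a
    list of the school's mean final grades over past cohorts."""
    hist_mean = sum(school_history) / len(school_history)
    return round(w_pred * predicted + w_course * coursework + w_hist * hist_mean)

# A student predicted 40/45 at a school whose past cohorts averaged 30/45
# ends up with 37 under this toy model, below their predicted grade:
grade = naive_final_grade(predicted=40, coursework=38, school_history=[29, 30, 31])
```

Even in this simplistic sketch, a student's result depends on cohorts who came before them, which is exactly the kind of downgrade the petition describes.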

From the above petition, we can see that the algorithm used by the IB could be significantly faulty, or that there is a major problem with the IB's three-step process. The IB said its system is evidence-based, that it was subjected to rigorous testing by educational statistics specialists to ensure its methods were robust, and that it was checked against the last five years' results data to ensure it would provide reliable and valid grades for students. However, as researchers have long pointed out, the mere fact that a method is "evidence-based" does not ensure that it will lead to accurate, reliable, or fair decisions.

Major Issues with IB's methodology

The IB's model has major methodological issues and largely disregards the ethical considerations that should accompany its adoption. The model can discriminate even without being given gender, race, or socioeconomic data, because it predicts marks from historical data of the same school, which may itself be biased. 'Historical bias' is one of the major issues here: a study based on data from the National Centre for Education Statistics concluded that secondary school teachers tend to express lower expectations of students of colour and students from disadvantaged backgrounds. This is problematic because predicted grades play a prominent role in the model, and the historical data used could carry the same bias. Another issue is that schools with little historical data, because they had fewer students in the past, cannot have scores predicted accurately simply because not enough data is available; similarly, schools that have recently started operations have no data to predict from at all. There are smaller but significant issues like 'measurement bias' as well: for example, a teacher who has to assign predicted grades to 10 students will likely do a better job than a teacher who has to assign predicted grades to 30.
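The small-school problem above can be illustrated with basic statistics: a school average estimated from a handful of past students swings far more from sample to sample than one estimated from a large cohort. A minimal simulation, with all figures invented for illustration (this is not the IB's method):

```python
# Sketch: why a school with little historical data gives noisier estimates.
# We simulate a school whose "true" average grade is 34 (out of 45) with a
# spread of 4 points, and compare the variability of the estimated average
# when it is based on 5 past students versus 150.
import random
import statistics

random.seed(0)
TRUE_MEAN, SPREAD = 34, 4

def estimated_school_mean(n_students):
    grades = [random.gauss(TRUE_MEAN, SPREAD) for _ in range(n_students)]
    return statistics.mean(grades)

# Repeat the estimation many times to see how much it swings.
small = [estimated_school_mean(5) for _ in range(1000)]
large = [estimated_school_mean(150) for _ in range(1000)]

small_sd = statistics.stdev(small)
large_sd = statistics.stdev(large)
# Theory predicts a standard error of roughly SPREAD / sqrt(n):
# about 1.8 points for n=5 versus about 0.33 for n=150, so the
# small school's estimate is several times noisier.
```

A grade built partly on a several-points-noisy school average is a lottery for students at small or new schools, while students at large established schools get a far more stable input.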

These are just some of the issues that might arise; the ocean of biases may run much deeper. But even these surface-level issues are enough to deem the system unreliable.

Can we leave students' futures at the mercy of algorithms?

The major question here is not whether 'we can'; the question is whether 'we should', because with the IB's model we have already left these students and their futures at the mercy of an algorithm. As a tech blogger writing as "positively-semi-definite" puts it, 'There is a ubiquitous saying in the field of statistics that all models are wrong and some models are useful.' All models are merely estimates and cannot forecast with complete certainty. Even if we assume that the IB's model is 90 per cent accurate, at least one in ten students will receive the wrong grade, which means the IB is gambling with the futures of thousands, maybe lakhs, of students worldwide.
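The arithmetic behind that claim is simple. Using the candidate figure mentioned earlier and the hypothetical 90 per cent accuracy:

```python
# Simple arithmetic: even a model that is 90 per cent accurate mis-grades
# one student in ten. The candidate count comes from the article; the
# accuracy figure is the article's own hypothetical.
candidates = 166_000
accuracy = 0.90
mis_graded = round(candidates * (1 - accuracy))
print(mis_graded)  # 16600 students would receive the wrong grade
```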

We believe that such algorithm-based models and systems will inevitably play a large part in our future. But is it ethical to deprive a student of their hard-earned spot at the London School of Economics because a black-box decision-making mechanism said they were not worthy of the opportunity? There are major ethical considerations the IB has possibly overlooked in making its operational choices, which is questionable considering a student's future is at stake. The fact that the IB's three-step system is an outsourced black-box model, built on very limited historical data with a high possibility of bias, with no oversight of its decision-making mechanism, and with only three months for research and production, does raise eyebrows about how accurate the system is and whether it was really necessary to use one at all.

The fact that the IB never released any information on the algorithm it used, or on how students' predicted grades would be turned into final grades beyond the three-step process, brings home how important 'algorithmic accountability' is and why algorithmic decision-making needs regulation. Legislation around algorithms is necessary at this point, and such systems need to be more transparent to help reduce the possibility of bias in the future.

The Solution

The best approach would be not to risk students' futures on predictive scores at all, and to calculate their final grades on the basis of the coursework they have submitted. If the IB feels there is a real need to use such a method, it should make the analysis process transparent, so that students know how their predictive scores are derived and can object if they feel any parameter has been evaluated unjustly.

To conclude, data analysis and machine learning are incredibly powerful tools, but they need to be used in appropriate situations and with a great degree of care. Before making large-scale use of these technologies, we need to figure out how biases can be removed from the data used to train such programs, or at least how the programs can be trained to overlook the biases in their datasets. Another safeguard would be to use algorithm-based solutions only when they are necessary, only after evaluating them ethically, and only where they have the least possibility of harming a community or a person. As for the use of algorithm-based models in education and in predicting marks, as we have said before, the question is not whether 'we can'; the real question is whether 'we should': whether we should leave our children's futures to the mercy of a machine.
