
Artificial Intelligence & Disability: Understanding Effects and Causes

Updated: Mar 24, 2021

Adarsh Dash,

Editorial Intern,

Indian Society of Artificial Intelligence & Law.


 

This article analyses artificial intelligence and its relationship with disability. The first part presents the positive effects of AI for persons with disabilities in India, drawing on the various technologies and applications that have been developed. The second part considers possible methods of removing bias and discrimination, including changes that can be introduced while designing the code and programs used in AI models. The final part describes the ways in which disabled and marginalised communities are adversely affected.

In the simplest terms, AI is the ability of a computer, robot or program to process the data and information fed into it and produce outputs that replicate the results conventionally achieved by humans. It is used to tackle complex, real-life problems, making human life easier and more efficient.

From helping commuters choose the shortest and safest route to answering their queries through a simple voice-recognition tool, there is little doubt that AI is a revolution with the potential to transform the world irreversibly and permanently. Microsoft India, in coordination with IIT Delhi, has developed tools such as Seeing AI, Tell Me and Dictate that help persons with disabilities perform activities that would not have been possible without the intervention of advanced technology. The organisation has also launched a five-year, $25 million programme titled "AI for Accessibility" to accelerate the development of solutions that better understand people's needs. Some commonly used assistive-technology products developed for persons with disabilities are ArmAble[1], the Xbox Adaptive Controller[2], the Robo Bionics prosthetic hand[3], Saarthi[4] and the Braille display[5] (Here's how Microsoft is using AI to empower people with disabilities, 2019, Oct 29). Assistive technologies designed to help people with their disabilities give them easy access to smartphones, mobile applications and various gadgets used for navigation, transportation and even correspondence courses (Assistive technologies for persons with disabilities, 2019, Nov 10).

For instance, the Avaz application, a free speech app launched by Indian entrepreneur Ajit Narayan, helps children with autism and speech impairments and teaches them to communicate in various languages. Similarly, a hobby project started in 2012 produced a customisable Eye-D keypad, which helps blind people make proper use of their smartphones and devices. Gynosys is another application, launched for people with hearing and speech impairments, which translates text, speech and voice into the corresponding sign language (How tech is making life easier for the differently abled, 2017, Nov 7).

Disability is the product of disabling environments: it depends less upon the physical, mental and psychological attributes of the individual concerned than upon the manner in which society responds to his or her "disabled" body. While AI's capacity to revolutionise and transform the world through the proper use of code and technology is undisputed, it is equally necessary to analyse that effect with reference to disability. Broadly, disability is understood through two models, the medical model and the social model, which are discussed below.

Fairness in AI: how fair, neutral and unbiased are the AI systems being developed? On what premises, factors and conditions do such unfairness and bias arise? How do they reach people from the disabled community and consequently affect them? How can they be removed and rectified? Scientific experimentation, human experience and other evidence show that bias within AI systems does exist, in relation to race, gender, age, sexual orientation and various other factors. Researchers have further found that AI models can perpetuate and amplify racial and gender-based discrimination, especially against marginalised communities, including their disabled members. Algorithmic fairness, allocative fairness, group fairness and individual fairness are some of the methodologies that programmers can adopt while designing and constructing AI models; they encourage wider participation from different parts of the scientific community and thereby reduce the bias and discrimination that might otherwise arise. Algorithmic fairness enhances fairness by collecting training data from an extremely wide range of groups and cleansing the data through diverse inclusion; it is essential for developing specialised models that recognise and include specific disabled groups. Fairness through unawareness is an approach in which no data about the attributes of disabled groups is gathered or used in developing the AI model. Fairness through awareness is the reverse: the model is prepared using all available disability information, the data is processed using statistical methods, and the output models are adjusted to mitigate bias. Group fairness measures fairness by requiring that the proportion of selected candidates in a protected group (for example, persons with disabilities) be similar to that among individuals from the non-protected group. Fairness for people with disabilities, and the robustness of machine-learning models, can also be improved by recognising and including 'outliers'. These are some of the methods that could be adopted to reduce bias and make AI more compatible with the needs of persons with disabilities. (AI Fairness for People with Disabilities: Point of View)
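To make the group-fairness idea above concrete, the short Python sketch below compares the proportion of positive outcomes in a protected group with that in a non-protected group. This is a minimal illustration only: the group names, the synthetic decisions and the 0.8 cut-off (the common "four-fifths" rule of thumb) are assumptions made for the example, not taken from the cited paper.

```python
# Minimal sketch of group fairness: compare the selection rate of a
# protected group with that of the non-protected group. All data and the
# 0.8 threshold are illustrative assumptions.

def selection_rate(outcomes):
    """Fraction of candidates receiving a positive outcome (1)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def group_fairness_ratio(protected, non_protected):
    """Ratio of selection rates; values near 1.0 indicate similar treatment."""
    base_rate = selection_rate(non_protected)
    if base_rate == 0:
        return float("inf")
    return selection_rate(protected) / base_rate

if __name__ == "__main__":
    # Hypothetical model decisions: 1 = selected, 0 = rejected.
    disabled_group = [1, 0, 0, 1, 0, 0, 0, 1]       # protected group
    non_disabled_group = [1, 1, 0, 1, 1, 0, 1, 1]   # non-protected group

    ratio = group_fairness_ratio(disabled_group, non_disabled_group)
    print(f"Selection-rate ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Possible adverse impact on the protected group.")
```

On the synthetic decisions above the sketch prints a ratio of 0.50, flagging a possible disparity that a group-fairness review would then examine further.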

The question to be analysed from all corners is how AI creates discrimination against marginalised communities and thereby, at times, adversely affects them. As stated earlier, AI systems function on the basis of the type and nature of the data and information used in their programming. If essential information (such as the behaviour and responses of Black people, homosexual people, or other groups defined by race, gender or age) is omitted or excluded from that data, significant harm can be caused to the disabled, in some cases aggravating the very problems that AI was originally designed to resolve.
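As a rough illustration of how omitting a group from training data can harm that group, the Python sketch below fits a trivial one-dimensional threshold classifier on data from a "majority" group only, then evaluates it on a group excluded from training. The groups, features and labels are entirely synthetic and hypothetical; the example shows only the mechanism, not any real system.

```python
# Sketch: a model tuned only on one group can fail for a group omitted
# from training. All data below is synthetic and purely illustrative.

def accuracy(features, labels, threshold):
    """Accuracy of predicting 1 whenever the feature value is >= threshold."""
    preds = [1 if x >= threshold else 0 for x in features]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def best_threshold(features, labels):
    """Choose the cut-off that best separates labels 0/1 on the training data."""
    best_t, best_acc = features[0], 0.0
    for t in sorted(set(features)):
        acc = accuracy(features, labels, t)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

if __name__ == "__main__":
    # Majority group: positives cluster at higher feature values.
    majority_x, majority_y = [2, 3, 4, 8, 9, 10], [0, 0, 0, 1, 1, 1]
    # Omitted group: positives cluster at lower feature values instead.
    omitted_x, omitted_y = [1, 2, 3, 7, 8, 9], [1, 1, 1, 0, 0, 0]

    t = best_threshold(majority_x, majority_y)  # trained without the omitted group
    print("Accuracy on majority group:", accuracy(majority_x, majority_y, t))
    print("Accuracy on omitted group: ", accuracy(omitted_x, omitted_y, t))
```

On this toy data the learned threshold classifies the majority group perfectly but misclassifies most of the omitted group, which is the pattern the paragraph above describes.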

Given the extremely fluid, diverse definitions and interpretations of terms such as 'gender' and 'sexuality', and the introduction of the social model of disability (which extends the definition of 'disability' beyond the usual biological and pathological norms), there have been instances where dormant, conventional conceptions are applied within AI to classify people into specific categories without consideration of all "types of people", thereby creating discrimination and bias. Another crucial aspect significantly affecting the interests of the disabled is AI's version of normalcy, which is constructed from popular, commonly used notions of 'culture' and 'logic' and then encoded into the data used in AI models. These norms often fail to include within their ambit the so-called 'outliers' and the categories of people within the realm of the disabled, resulting in discrimination. Privacy and data security are further concerns that deserve due consideration in the context of disability, especially since any leakage or breach of such data could lead to loss of employment and further stigmatisation of the marginalised community. (Disability, Bias and AI, 2019, November)

In 2019, NITI Aayog launched the AI 4 All Global Hackathon to promote awareness and develop potential solutions to various infrastructural and institutional challenges without compromising data privacy. In the same year, a national programme for the development of AI was announced in the Interim Budget (Artificial Intelligence for India's Transformation, 2019, April 24). From banking and financial services to agriculture, AI has made deep inroads into the Indian business and corporate sector. The potential of AI is immense, with the capacity to revolutionise the world, including the Indian subcontinent. While its harmful effects, biases and discrimination, especially towards marginalised and disabled communities, cannot be denied, its positive effect is also tremendous, not only enhancing the productivity of corporations but also making human life comparatively easier. It is true that certain methodologies must be adopted to make AI more compatible with and accessible to disabled people, yet specific scientific developments have already been created for their assistance and progress. The Indian Government has also allocated funds and signed MoUs with various MNCs for the further growth and introduction of AI-based technologies, which suggests an optimistic and significant impact.



Bibliography

AI Fairness for People with Disabilities: Point of View. Trewin, Shari. IBM Accessibility Research.

Artificial Intelligence for India's Transformation. Communications Today, 2019, April 24.

Assistive technologies for persons with disabilities. Arora, Isha. Financial Express, 2019, Nov 10.

Disability, Bias and AI. West, Whittaker, Alper, Bennett & Salas. AI Now Institute, 2019, November.

Here's how Microsoft is using AI to empower people with disabilities. Indo Asian News Service. Hi-Tech, 2019, Oct 29.

How tech is making life easier for the differently abled. Umachandran, Shalini. The Times of India, 2017, Nov 7.



Endnotes

[1] Arm rehabilitation device for the neuro-rehabilitation of stroke victims.

[2] Designed to meet the needs of gamers with limited mobility.

[3] A battery-powered device assisting disabled people without functional limbs.

[4] Assistive mobility device designed to work both indoors and outdoors.

[5] Contains applications and programs that enable visually impaired persons to perform their tasks.


