
Sarcastic Content on the Internet and Its Detection Using AI Tools

Updated: Sep 21, 2023

Ankesh

Editorial Intern for The Indian Learning (e-ISSN: 2582-5631)

Indian Society of Artificial Intelligence & Law





Sarcasm is a popular form of expression, mostly used humorously; detecting it, however, is far less straightforward. It can be defined as "a sharp, bitter, or cutting expression or remark; a bitter gibe or taunt". Sarcasm is usually understood by being aware of the context of a particular subject matter; without that context, it may not be possible to grasp the intended meaning. This has been the general understanding of sarcasm, but as technological developments advance, sarcastic expression has also become integrated into that technology.

Sarcasm is a rhetorical way to express contempt or negative feelings through an exaggerated verbal picture, a form of false mockery and mock praise that heightens hostility without being explicit. In face-to-face conversation, sarcastic speech is easily identified from the speaker's expressions, gestures, and tone. Recognising sarcasm in written communication, however, is not trivial, because these signals are absent. As Internet usage increases, detecting sarcasm in online communication on social media, discussion forums, and e-commerce websites has become important for public discourse, sentiment analysis, and identifying cyberbullying online.

This has also aroused considerable interest in neuropsychology and linguistics, but the development of computational models that automatically detect sarcasm is still in its infancy. Previous work on detecting sarcasm in text has used lexical (content) and pragmatic (context) cues such as interjections, punctuation, and sentiment shifts, which are important indicators of sarcasm. In such work, features are created manually and do not generalise to the informal and figurative language that is common in online conversation.

As technology has advanced, we can do much more, such as leveraging neural networks and deep learning. These can be used to study both "lexical and contextual features", making it easier for researchers to organise their work. Artificial Intelligence is an important part of this.








Social Media and Sarcasm

The world of the internet is full of different things: education, online marketplaces, even the latest news. Amidst all this, we have developed platforms to share such content. Social media is one such platform, where people share whatever matches their interests: pictures they like, news they enjoy, and so on. This use of social media has led to a distinct category of content on the internet, the witty and funny content often referred to as "memes". Sometimes this content crosses the reasonable limits of what a platform allows, and since artificial intelligence is used most of the time to moderate such content, the AI might not understand the sarcasm or humour in it. Twitter, one of the social media giants, faces this issue frequently, as most of its content is more public than that of other platforms such as Instagram or Facebook. This kind of moderation, often known as algorithmic censorship, regularly faces backlash from users after content is removed from the platform.


In today's data-driven world, where we rely on the internet for even the smallest of things and a majority of the global population is online, far more people are making these sarcastic and witty posts. It therefore becomes important to watch for content that crosses a reasonable line and so invites censorship. Censorship itself has acquired a negative connotation over the last few years, but the owner of a platform should be free to decide what is available on it and how the content there may be used, so a reasonable amount of moderation is required to maintain a certain order in our society. Artificial Intelligence, or AI, comes to the rescue of platforms on this issue: used properly, AI can censor certain words, or groups of words or phrases, that it judges to be demeaning to the platform. But how does this work exactly? There have been different approaches to these studies, and many countries, especially the United States of America and China, have moved quite far ahead in this research, developing several distinct methods along the way. These methods, along with their usage, are discussed further in this article.

Today, thanks to advances in communication technologies such as mobile phones and social media, data production has increased exponentially. In recent years, people have used social sites like Twitter and Facebook in bulk to collect and share thoughts, opinions, and discoveries, and to engage in discussion. This data can be analysed for a variety of purposes, including sentiment analysis and evaluating an author's mood. Because such information can affect its audience, it is necessary to understand the nuances of the authors adding data to these sites.

Moods can range from confused or provocative to distracted or disgusted. Psychologists study people's various moods and their origins. Mood affects an individual's behaviour, which can affect not only their own life but also others'. Mood is related to emotion, which centres primarily on opinions and attitudes; this is why emotions are considered subjective. Emotion can be described as a natural way of reacting with admiration, longing, discomfort, or disgust, and sentiment as the disposition produced by emotion, evaluation, or observation.

There are many types of data on the Internet, from short texts such as tweets to long texts such as arguments. Twitter, a popular social site, hosts billions of tweets that provide a great deal of material for understanding sarcasm. Tone plays an important role in the mood-analysis process, and researchers today use these tones to understand an individual's mood. In this article, we look at the work of various researchers in this field, at these tones and their uses, and at the technical details required to develop such models.


The Method of Sentiment Analysis

One method to detect sarcasm, or more broadly any emotion, is sentiment analysis. This method is also sometimes called "opinion mining", because the approach determines or predicts a person's attitude from the set of data available about them.

This is the process of classifying the emotion of a text as neutral, negative, or positive. With the growing proliferation of social media, sentiment analysis has improved greatly and drawn researchers to explore the area further. A variety of useful information can be extracted through sentiment analysis of social networks: for example, it helps advertising companies measure success and failure, predict consumer behaviour, and even predict election outcomes (although this last use has been criticised on the ground that it can lead to unfair elections).
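As a minimal sketch of this classification idea (the word lists below are tiny illustrative samples, not a real sentiment lexicon), a lexicon-based classifier might look like:

```python
# Minimal lexicon-based sentiment classifier: count positive and negative
# words and label the text positive, negative, or neutral.
# POSITIVE/NEGATIVE are tiny illustrative samples, not a real lexicon.

POSITIVE = {"good", "great", "love", "excellent", "happy", "wonderful"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad", "horrible"}

def sentiment(text: str) -> str:
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Real systems replace the hand-made word lists with large curated lexicons or learned models, but the positive/negative/neutral output is the same.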

Sarcasm commonly refers to taunting someone, or commenting pointedly on someone's action, in a context that the speaker presumes the listener or reader already knows. Because comments are often informal and their context is often missing, this has become a complex problem today. As we discussed earlier, sarcasm is used frequently on social media, in tweets, and elsewhere, and sentiment analysis, or opinion mining, has emerged as an important player; opinion-analysis applications have become a key tool in handling sarcasm. Sarcasm is associated with various verbal phenomena, such as clear gaps between the stated and intended emotion, or unevenness in the emotions expressed; the humour arises from the contrast between these positive and negative sentiments.
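The sentiment-contrast cue described here can be sketched very simply. Assuming toy word lists (invented for illustration, not real lexicons), a sentence containing both a positive expression and a negative situation could be flagged as a sarcasm candidate:

```python
# Sketch of the sentiment-contrast cue for sarcasm: a positive expression
# next to a negative situation ("I just love being stuck in traffic").
# Both word sets are illustrative toys, not real lexicons.

POSITIVE = {"love", "great", "fantastic", "best"}
NEGATIVE = {"stuck", "traffic", "broken", "delayed", "failed"}

def has_sentiment_contrast(text: str) -> bool:
    words = {w.strip(".,!?") for w in text.lower().split()}
    return bool(words & POSITIVE) and bool(words & NEGATIVE)
```

A heuristic like this would produce many false positives on its own; in practice the contrast signal is one feature among several fed to a learned model.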

This method does come with limitations, the first and foremost being language availability. Sentiment dictionaries are easy to create for English, but researchers working in other languages must build dictionaries for those languages themselves, which is the biggest problem they face.

The use of this process has also attracted a wide range of researchers to the field. Natural Language Processing helps extract the best results from the opinion-mining process. Since a domain-specific corpus gives better results for opinion mining than a domain-independent one, more attention should be paid to domain-specific corpora. One must also mention false comments and fake blogs that mislead users with false opinions on a topic, usually to damage the reputation of the subject; this type of spam injects unreliable opinions into various applications.


The CNN Framework

One method, suggested by a paper published at Cornell University, uses a framework based on convolutional neural networks (CNNs). As we know, sarcasm detection may depend on sentiment and other cognitive aspects, so the authors include cues for mood and emotion in their concept. They also argue that the personality of the person expressing an opinion is an important factor in identifying sarcasm. To account for all these variables, they build separate models for sentiment, emotion, and personality. "The idea is to train each model on its corresponding benchmark dataset and, hence, use such pre-trained models together to extract sarcasm-related features from the sarcasm datasets."

The paper further notes that a CNN can automatically extract key features from the training data. "It grasps contextual local features from a sentence and, after several convolution operations, it forms a global feature vector out of those local features." CNNs do not require the hand-crafted features used by traditional supervised classifiers; such features are difficult to compute manually and must constantly be re-engineered to obtain satisfactory results. Instead, CNNs learn a hierarchy of local features that is essential to capturing context. "The hand-crafted features often ignore such a hierarchy of local features. Features extracted by CNN can therefore be used instead of hand-crafted features, as they carry more useful information." This method clearly uses Artificial Intelligence to detect the sentiments associated with content, and the framework keeps learning from the behaviour of the content being posted.
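To make the "local features to global vector" idea concrete, here is a toy, dependency-free sketch. The random embeddings and filter weights are made up for illustration; in a real CNN both are learned from data, and this is not the paper's actual implementation:

```python
# Toy 1D convolution over word embeddings: each filter slides over every
# WIN-word window (local features), then max-pooling collapses each
# feature map into one number, giving a global feature vector.

import random

random.seed(0)
EMB, WIN, NUM_FILTERS = 4, 3, 2  # embedding size, window width, feature maps

def conv_features(embeddings, filters):
    """Convolution + ReLU over word windows, then max-pool per filter."""
    pooled = []
    for f in filters:
        activations = []
        for i in range(len(embeddings) - WIN + 1):
            window = [x for vec in embeddings[i:i + WIN] for x in vec]
            activations.append(max(0.0, sum(w * x for w, x in zip(f, window))))
        pooled.append(max(activations))  # strongest local feature survives
    return pooled  # one global feature vector for the sentence

sentence = [[random.uniform(-1, 1) for _ in range(EMB)] for _ in range(6)]
filters = [[random.uniform(-1, 1) for _ in range(EMB * WIN)] for _ in range(NUM_FILTERS)]
features = conv_features(sentence, filters)
```

The resulting `features` vector would then feed a classifier; real frameworks (e.g. PyTorch or TensorFlow) implement the same operation with learned weights and many more filters.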





The AI competition between the US and China

Both these countries have been making constant advancements in AI and in technology as a whole. Both governments have also been funding universities to research the subject, so that they can better understand what they are dealing with and frame legislation and rules accordingly.

Researchers in China say they have created a sarcasm-detecting AI that achieves state-of-the-art performance on a dataset drawn from Twitter. The AI uses multimodal learning, combining text and images, because both are often necessary to judge whether a person is being sarcastic or posting something inappropriate. The researchers claim that detecting sarcasm can help in analysing sentiment and gauging public attitudes toward specific topics. As part of a challenge launched earlier this year, Facebook is likewise using multimodal AI to recognise whether memes violate its terms of use. The researchers focus on the incongruity between text and image and then combine these results to make predictions. The model also compares hashtags with the text of a tweet to gauge the sentiment the author wants to convey.

“Particularly, the input tokens will give high attention values to the image regions contradicting them, as incongruity is a key character of sarcasm,” the paper reads. “As the incongruity might only appear within the text (e.g., a sarcastic text associated with an unrelated image), it is necessary to consider the intra modality incongruity.”
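A hypothetical late-fusion sketch of this incongruity idea follows. Both scoring functions are crude stand-ins for real learned models, and every word and tag set is invented; the point is only the shape of the computation: score each modality separately, then treat disagreement between them as the sarcasm signal.

```python
# Late-fusion sketch: sarcasm signal = incongruity between the sentiment
# of the text and the sentiment suggested by the image.
# All word/tag sets are invented; real systems use trained models.

def text_sentiment(text: str) -> float:
    """Stand-in text model: +1 positive, -1 negative, 0 neutral."""
    positive = {"love", "great", "perfect"}
    negative = {"rain", "ruined", "broken"}
    words = set(text.lower().split())
    return float(len(words & positive) > 0) - float(len(words & negative) > 0)

def image_sentiment(tags: set) -> float:
    """Stand-in image model: score from detected scene tags."""
    gloomy = {"storm", "traffic", "queue"}
    return -1.0 if tags & gloomy else 1.0

def sarcasm_signal(text: str, image_tags: set) -> float:
    """Large gap between modalities = high incongruity = likely sarcasm."""
    return abs(text_sentiment(text) - image_sentiment(image_tags))
```

So "what a perfect day" paired with a storm photo scores high, while the same caption over a beach photo scores zero, mirroring the intra- and inter-modality incongruity the quoted paper describes.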


The US government has also launched a new AI tool, developed through military-funded research. The tool has proven able to solve a problem that has traditionally been very difficult for computer programs: detecting human sarcasm. It allows intelligence officers and agencies to better apply artificial intelligence to analyse trends while filtering out non-serious social media posts.

"Certain words in specific combinations can be a predictable indicator of sarcasm in a social media post, even if there isn't much other context," the University of Central Florida noted in a research paper. In essence, the team taught computer models to look for patterns that signal sarcasm, and combined this with training the program to correctly pick out keywords from sequences that are more likely to indicate sarcasm. They trained a model to do this by loading large amounts of data and then testing for accuracy. It is not the first time researchers have tried to use machine learning or artificial intelligence to detect sarcasm in short pieces of text, such as social media posts.

This method is based on what researchers call a self-attention architecture: it trains a sophisticated artificial intelligence program, a neural network, to give more weight to some words depending on the words that appear next to them and the task at hand.
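As a rough, dependency-free illustration of that weighting idea (the two-dimensional word vectors below are made up, and real models use learned high-dimensional embeddings with separate query/key/value projections), scaled dot-product self-attention can be sketched as:

```python
# Toy scaled dot-product self-attention: each word's output is a weighted
# mix of all word vectors, with weights from similarity to its neighbours.

import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(vectors):
    d = len(vectors[0])
    out = []
    for q in vectors:
        # similarity of this word to every word, scaled by sqrt(dimension)
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in vectors]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, vectors)) for i in range(d)])
    return out

words = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]  # two similar "words" + one outlier
mixed = self_attention(words)
```

After attention, the first vector leans heavily toward its similar neighbour and only weakly toward the outlier, which is exactly the "more weight to some words depending on the words next to them" behaviour described above.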

The Future of Sarcasm Detection with AI

Content censorship or moderation is an essential part of the online world, but for a number of reasons it is difficult to get right at scale. The moderators are for the most part bots or AI tools, and while extending this moderation to all forms of communication raises significant concerns for users, it is also essential for the safe use of a platform. Detection of sarcasm through different tools and methods might be just the beginning of a long line of AI tools to come, but the arrival of algorithmic censorship brings two new developments to this field. The first is that more and more private communications will come under increasing levels of moderation. Sarcasm detection is, in simple terms, sentiment analysis, and once the art of analysing one sentiment is mastered, there is little difficulty in mastering the others. This brings us to the second area of development: the creation of ever more realistic robots. One thing robots have lacked over the years is emotion, and building tools of this kind would be a great start on the way to the successful production of robots and AI chatbots.

Detection via algorithms may seem to have little military value, but consider how much more time people spend on the internet than they did years ago. Consider also the increasing role of open-source information, such as social media posts, in understanding what is happening in key areas where the military may operate.

The future holds a series of interesting developments, not only in the detection of sarcasm but in the detection of other human emotions too; algorithmic censorship, likewise, concerns not only sarcasm but human emotion as a whole. Governments again come into the picture: as discussed earlier, major countries and militaries have been actively participating in research to understand human sentiment. We should remind them that these tools must be used reasonably, and more parliamentarians should come forward with ideas for legislation on this subject, aiming at the safe use of these bots and tools.


The Indian Society of Artificial Intelligence and Law is a technology law think tank founded by Abhivardhan in 2018. Our mission as a non-profit industry body for the analytics & AI industry in India is to promote responsible development of artificial intelligence and its standardisation in India.


Since 2022, the research operations of the Society have been subsumed under VLiGTA® by Indic Pacific Legal Research.

ISAIL has supported two independent journals, namely - the Indic Journal of International Law and the Indian Journal of Artificial Intelligence and Law. It also supports an independent media and podcast initiative - The Bharat Pacific.
