Regulating sentiment analysis: putting the technology to practical, ethical use
When Admiral Insurance revealed plans to use sentiment analysis to determine the price of customer premiums, the public was up in arms. Indeed, Admiral was soon forced to scrap the scheme.
Despite the quick downfall, the public’s reaction revealed an important insight. While sentiment analysis is becoming a part of our everyday lives, not everyone is happy about it.
So, should we be regulating sentiment analysis? We examine the ethical dilemmas surrounding the increasingly prevalent technology.
What is sentiment analysis?
Sentiment analysis studies the mood, opinions and attitudes expressed through written text. By using algorithms to distinguish the emotions behind words, sentiment analysis can determine whether a communication suggests a positive, negative or neutral sentiment.
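As a rough illustration of that positive/negative/neutral classification, here is a toy lexicon-based scorer. The word lists and scoring rule are made up for this sketch; real tools use far richer lexicons, negation handling and machine-learned models.

```python
# Toy lexicon-based sentiment scorer (illustrative only).
# The word lists below are assumptions for this sketch, not a real lexicon.
POSITIVE = {"great", "love", "excellent", "happy", "helpful"}
NEGATIVE = {"terrible", "hate", "broken", "angry", "useless"}

def sentiment(text: str) -> str:
    """Return 'positive', 'negative' or 'neutral' for a piece of text."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    # Count positive hits, subtract negative hits.
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this excellent service!"))    # positive
print(sentiment("The checkout is broken, useless."))  # negative
```

Production systems work on the same principle but weight words, handle negation ("not happy") and usually return a graded score rather than a three-way label.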
While the technology is nothing new, stories like Admiral's have raised valid concerns about the accuracy and ethics of using sentiment analysis without the prior consent of a customer.
Of course, with GDPR, this is becoming less of a concern. But should companies be doing more when it comes to regulating sentiment analysis?
Risks and regulations
When used unethically by large corporations for commercial gain, sentiment analysis runs the risk of negatively impacting individuals or groups of people.
Admiral Insurance stated that it wouldn't use the social media data from sentiment analysis to raise insurance premiums. Rather, it intended to use the data only to reduce existing quotes. Importantly, though, the company did not rule out using the technology in this more negative way in the future.
In the United States, the constitution ensures that the Government cannot deprive an individual of their rights without 'the due process of law', and consumer protection legislation places similar limits on large corporations. Using sentiment analysis software to deny a person access to a service, such as car insurance, based solely on their social media activity could therefore fall foul of the law.
However, in the United Kingdom, the Government has only produced very short guidelines explaining how to use sentiment analysis technology. These guidelines briefly describe the software and its limitations. They do not, though, supply any kind of real regulatory framework for businesses to adhere to.
A transparent approach
As a provider of sentiment analysis technology, we believe it is vital for businesses using the software to lead the discussion on regulating sentiment analysis, given the ethical dilemmas it raises.
Businesses should not fear sentiment analysis technology. Instead, they should drive the conversation to ensure that correct usage guidelines are put in place.
Shying away from sentiment analysis benefits no one. With the right application, businesses can use it to the advantage of both their brand and their customers.
Taking to Twitter
Consumers should be provided with adequate information about what sentiment analysis is and how it can be used to enhance, rather than intrude upon, their buying experience. Managing customer complaints, for instance, is one clear example of how sentiment analysis can benefit both buyer and seller.
Recent studies state that 67% of consumers have used a company’s social media page for customer service. In fact, 33% prefer this communication channel over the telephone. However, if a customer were to make a complaint about a negative buying experience through social media, it is all too easy for their comments to get lost in the 500 million tweets published every single day.
This is where sentiment analysis can benefit both business and customer. Using the technology, tweets that express a particularly negative sentiment about the company can be instantly highlighted for the attention of management. That might be an incorrect delivery complaint, a problem with a product, or anything containing inflammatory language.
Next, automation can step in. Using an intelligent business automation platform, automated actions can run based on sentiment analysis results.
For example, if a customer experiencing problems completing the checkout on an ecommerce website tweeted in frustration, their contact details could be forwarded directly to the help desk and a ticket raised automatically.
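That kind of routing rule can be sketched in a few lines. The thresholds, score range and action descriptions below are illustrative assumptions for this sketch, not the behaviour of any particular automation platform.

```python
# Route an incoming message based on a sentiment score in [-1.0, 1.0].
# Thresholds and action names here are hypothetical, chosen for illustration.
def route_message(score: float, contact: str) -> str:
    if score <= -0.5:
        # Strongly negative: escalate straight to the help desk.
        return f"ticket raised for {contact}, flagged for management"
    if score < 0:
        # Mildly negative: send an automated follow-up reply.
        return f"automated follow-up sent to {contact}"
    # Neutral or positive: just log it.
    return "logged, no action needed"

print(route_message(-0.8, "@frustrated_shopper"))
# ticket raised for @frustrated_shopper, flagged for management
```

In a real platform the "actions" would be API calls into a ticketing system or mailer rather than strings, but the branching on a sentiment score is the core idea.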
From the customer’s perspective, this can dramatically speed up the process of resolving complaints, particularly compared to traditional customer service methods such as waiting in telephone queues.
Businesses can quickly gain a poor reputation for bad customer service and complaints management. This means that the potential to speed up the resolution process is invaluable. However, intelligent sentiment analysis tools can do much more than simply forward complaints.
ThinkAutomation, for example, can analyse any tweet, incoming email, SMS or document to determine its sentiment score. From there, the system can run automated actions based on the results, so many minor customer problems can be resolved without any interaction from a customer service agent.
Regulating sentiment analysis
While Admiral’s plans to use sentiment analysis to determine insurance discounts stalled, it’s not the last we’ve seen of the technology. Yes, sentiment analysis has the potential to provide masses of commercial gain for businesses. But it also has undeniable value for consumers that should not be overlooked.
When used correctly, sentiment analysis can enhance the buying experience from both sides of the checkout. Before this happens, however, businesses need clear, ethical guidelines on how the technology should be used, to prevent its abuse.