Insurance firms can skim your online data to price your insurance — and there’s little in the law to stop this


Zofia Bednarz, Lecturer in Commercial Law, University of Sydney; Kayleen Manwaring, Senior Research Fellow, UNSW Allens Hub for Technology, Law & Innovation and Senior Lecturer, School of Private & Commercial Law, UNSW Law & Justice, UNSW Sydney, and Kimberlee Weatherall, Professor of Law, University of Sydney

What if your insurer was tracking your online data to price your car insurance? Seems far-fetched, right?

Yet there is predictive value in the digital traces we leave online. And insurers may use data collection and analytics tools to find our data and use it to price insurance services.

For instance, some studies have found a correlation between whether an individual uses an Apple or Android phone and their likelihood of exhibiting certain personality traits.

In one example, US insurance broker Jerry analysed the driving behaviour of some 20,000 people to conclude Android users are safer drivers than iPhone users. What’s stopping insurers from referring to such reports to price their insurance?

Our latest research shows Australian consumers have no real control over how data about them, and posted by them, might be collected and used by insurers.

Looking at several examples from customer loyalty schemes and social media, we found insurers can access vast amounts of consumer data under Australia’s weak privacy laws.

How would you feel if a detail as trivial as the brand of your phone was used to price your car insurance? – Shutterstock

Your data is already out there

Insurers are already using big data to price consumer insurance through personalised pricing, according to evidence gathered by industry regulators in the United Kingdom, European Union and United States.

Consumers often “agree” to all kinds of data collection and privacy policies, such as those used in loyalty schemes (who doesn’t like freebies?) and by social media companies. But they have no control over how their data are used once handed over.

Far-reaching inferences can be drawn from data collected through loyalty programs and social media platforms – and these may be uncomfortable, or even highly sensitive.

Researchers using data analytics and machine learning have claimed to build models that can guess a person’s sexual orientation from pictures of their face, or their suicidal tendencies from posts on Twitter.

Think about all the details revealed from a grocery shopping history alone: diet, household size, addictions, health conditions and social background, among others. In the case of social media, a user’s posts, pictures, likes, and links to various groups can be used to draw a precise picture of that individual.

What’s more, Australia has a Consumer Data Right, which already requires banks to share consumers’ banking data (at the consumer’s request) with another bank or app, for example to access a new service or offer.

The regime is actively being expanded to other parts of the economy, including the energy sector, the idea being that competitors could use information on energy usage to make competitive offers.

The Consumer Data Right is advertised as empowering for consumers – enabling access to new services and offers, and providing people with choice, convenience and control over their data.

In practice, however, it means insurance firms accredited under the program can require you to share your banking data in exchange for insurance services.

The previous Coalition government also proposed “open finance”, which would expand the Consumer Data Right to include access to your insurance and superannuation data. This hasn’t happened yet, but it’s likely the new Albanese government will look into it.

Why more data in insurers’ hands may be bad news

There are plenty of reasons to be concerned about insurers collecting and using increasingly detailed data about people for insurance pricing and claims management.

For one, large-scale data collection provides incentives for cyber attacks. Even if data is held in anonymised form, it can be re-identified with the right tools.

Also, insurers may be able to infer (or at least think they can infer) facts about an individual which they want to keep private, such as their sexual orientation, pregnancy status or religious beliefs.

There’s plenty of evidence the outputs of artificial intelligence tools employed in mass data analytics can be inaccurate and discriminatory. Insurers’ decisions may then be based on misleading or untrue data. And these tools are so complex it’s often difficult to work out if, or where, errors or bias are present.

Each day, people post personal information online. And much of it can be easily accessed by others. – Shutterstock

Although insurers are meant to pool risk and compensate the unlucky, some might use data to offer affordable insurance only to very low-risk people. Vulnerable consumers may face exclusion.

A more widespread use of data, especially via the Consumer Data Right, will especially disadvantage those who are unable or unwilling to share data with insurers. These people may be low risk, but if they can’t or won’t prove this, they’ll have to pay more than a fair price for their insurance cover.

They may even pay more than what they would have in a pre-Consumer Data Right world. So insurance may move further from a fair price when more personal data are available to insurance firms.

We need immediate action

Our previous research demonstrated that, apart from anti-discrimination laws, there are inadequate constraints on how insurers are allowed to use consumers’ data, such as data taken from online sources.

The more insurers base their assessments on data a consumer didn’t directly provide, the harder it will be for that person to understand how their “riskiness” is being assessed. If an insurer requests your transaction history from the last five years, would you know what they are looking for? Such problems will be exacerbated by the expansion of the Consumer Data Right.

Interestingly, insurance firms themselves might not know how collected data translates into risk for a specific consumer. If their approach is to simply feed data into a complex and opaque artificial intelligence system, all they’ll know is they’re getting a supposedly “better” risk assessment with more data.

Recent reports of retailers collecting shopper data for facial recognition have highlighted how important it is for the Albanese government to urgently reform our privacy laws, and take a close look at other data laws, including proposals to expand the Consumer Data Right.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Authors

  • Dr Zofia Bednarz joined the University of Sydney Law School as a Lecturer in January 2022. She teaches and researches in the area of Commercial and Corporate Law. Zofia's current research focuses on the use of new technologies, such as Artificial Intelligence (AI) tools, by financial firms and the implications this has for the provision of financial services to consumers. She is an Associate Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S).

  • Dr Kayleen Manwaring is a Senior Lecturer, School of Taxation & Business Law, UNSW. Prior to joining the School, Kayleen also taught law in the Business & Economics Faculty at Macquarie University. Until March 2012, she spent many years working as a commercial lawyer and in law firm management, in Sydney and London. Her work in practice primarily focused on technology acquisition and licensing, intellectual property, and communications. Her research interests lie at the intersection between emerging technologies, particularly information technology, and the law of contract, consumer protection and competition law, intellectual property law and corporations law. She has recently completed a major research project on the implications for consumer contracts of the Internet of Things and associated technologies. She teaches corporations and business associations law, intellectual property law and information technology law.

  • Kimberlee Weatherall is a Professor of Law at the University of Sydney, and a Chief Investigator with the ARC Centre of Excellence on Automated Decision-Making and Society. She specialises in issues at the intersection of law and technology, as well as intellectual property law. Her current research focuses on intersections between research policy, open access and intellectual property; and the law relating to the collection, ownership, use and governance of data about and related to people, including privacy law, with the goal of ensuring that data collection, use and linkage, and data and predictive analytics are developed in ways that are beneficial to people and society. She is a Fellow at the Gradient Institute, and a research affiliate of the Humanising Machine Intelligence group at the Australian National University. She also co-chairs the Australian Computer Society’s Technical Advisory Board on Artificial Intelligence Ethics.
