
In our brave new world, not all use cases are good

By Alex Vronces

 

The frequency with which authors included the term “use cases” in their books exploded in the late 1980s. That’s when Ivar Jacobson, a computer scientist and software engineer, introduced “use cases” at OOPSLA, an annual programming conference. Much has changed, technologically speaking, since then, but use cases, as a concept, have not. Use cases did then what they do now: they clarify what a system will do and what it won’t.

While the term is common in the programmer’s lexicon, it’s also common in the non-programmer’s: dive into a conversation about open banking and you’ll hear people asking about use cases more often than not.

Use cases are what fintechs will be able to do for Canadians when open banking comes to life. Some say open banking will be game-changing, the precursor to a Cambrian explosion of financial-services apps. The federal government started consulting on open banking in 2019, but it’s 2021 and little progress has been made.

That’s why advocates of open banking are so hungry for use cases. They’re the only way to explain, in language that resonates, what open banking will do for people, and that understanding is what will bring Canada closer to making open banking a reality.

In other words, use cases are the only way to get over the hump of why.

That hump, however, is treacherous. Jumping over the hump of why is important, but so is landing the jump. In the digital economy, where policymakers are still wrangling over big and polarizing questions, the wrong use case will trip you up before you even leave the ground.

 

Lemonade and the use case gone wrong

Founded in 2015, Lemonade is a millennial-loved fintech disrupting the centuries-old insurance industry. Using artificial intelligence and a chatbot to process claims, the company bragged that it set a world record by processing a claim in three seconds, with no paperwork.

All a customer needs to do is record a video of themselves making the claim.

“Our AI carefully analyzes these videos for signs of fraud,” Lemonade said a few days ago in a now-deleted tweet. “It can pick up non-verbal cues that traditional insurers can’t, since they don’t use a digital claims process.”

Humans can’t reliably tell fact from fiction: research on deception detection finds that human judgment barely outperforms a coin flip. So why should anyone believe Lemonade’s machine-learning model is better?

Lemonade’s tweet invited critical commentary.

The company’s machine-learning model is doing something that is a priori impossible, said one person. Another added that he had a hard time believing the company has the data needed to train such a model: if humans can barely detect lies, where would reliably labelled examples of deceptive claims come from?

Others raised questions about the ethics of relying on AI to accept or deny people’s claims. What if you’re neurodivergent? What if you have social anxiety? What if you’re blind or have trouble with facial expressions?

The CEO and co-founder of Lemonade, Daniel Schreiber, talked with Fortune last year and broached the subject of AI ethics.

Schreiber distinguished between two types of algorithmic discrimination: the more benign kind that is risk-based, which is what insurers already do today, and the more malign kind based on ethnicity, gender, or sex.

On the company’s blog, Schreiber offered a self-referential example to make the point:

Let’s say I am Jewish (I am), and that part of my tradition involves lighting a bunch of candles throughout the year (it does). In our home we light candles every Friday night, every holiday eve, and we’ll burn through about two hundred candles over the 8 nights of Hanukkah. It would not be surprising if I, and others like me, represented a higher risk of fire than the national average. So, if the AI charges Jews, on average, more than non-Jews for fire insurance, is that unfairly discriminatory?

The distinction sounds clear in the abstract, but it blurs in practice, where the hard part is disentangling acceptable discrimination from unacceptable discrimination.

In the day-to-day deployment of AI to make decisions about how to treat people, intentions aren’t outcomes. Designing your model not to flagrantly discriminate against people of a certain ethnicity, gender, or sex doesn’t mean it won’t: even if those attributes are withheld from the model, correlated features such as postal codes can stand in for them. In fact, we know unintentional yet unfair algorithmic discrimination happens at scale.
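To see how that happens, here is a minimal sketch in Python. Everything in it is invented for illustration: the synthetic data, the postal-code-like proxy feature, and the group labels are assumptions, and none of it describes Lemonade’s actual system. The model is trained without ever seeing the protected attribute, yet its risk scores still differ by group, because a correlated feature stands in for the attribute it never saw.

```python
# A hypothetical sketch of proxy discrimination on synthetic data.
# The model never sees the protected attribute, but a correlated
# "neutral" feature lets it treat the two groups differently anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g. group membership). Never shown to the model.
group = rng.integers(0, 2, size=n)

# A "neutral" feature that happens to correlate with group membership,
# the way a postal code can correlate with ethnicity.
proxy = group + rng.normal(0, 0.5, size=n)

# A genuinely risk-related feature, independent of group.
risk = rng.normal(0, 1, size=n)

# Historical outcomes reflect real risk, but also a bias against one
# group (think biased past decisions baked into the training labels).
outcome = (risk + 0.8 * group + rng.normal(0, 1, size=n)) > 1

# Train only on the "neutral" features. No group column anywhere.
X = np.column_stack([proxy, risk])
model = LogisticRegression().fit(X, outcome)

# The model's scores still split along group lines, via the proxy.
scores = model.predict_proba(X)[:, 1]
print(f"mean predicted risk, group 0: {scores[group == 0].mean():.3f}")
print(f"mean predicted risk, group 1: {scores[group == 1].mean():.3f}")
```

Run it and the two printed averages diverge: the model discriminates by group without ever being given the group.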

But Lemonade already knows that.

In a blog post, Lemonade walked back its “poorly worded” tweet. The company clarified: “we never let AI perform deterministic actions such as rejecting claims or canceling policies.” Acknowledging that AI can perpetuate unfair biases, the company also said its “users aren’t treated differently based on their appearance, behavior, or any personal/physical characteristic.”

What Lemonade’s blip of Twitter controversy showed is the importance of not just getting over the hump of why, but also landing the jump. Failing to land it risks undermining yourself and your broader cause.

 

Use cases are storytelling

Describing use cases is storytelling, and storytelling is hard. In the digital economy, where there are still big challenges for policymakers to overcome, it’s even harder. 

When describing your use case, it’s easy to lose control of the story. Take algorithmic decision-making in lending, for example, which can easily invite the same critical commentary that Lemonade brought upon itself.

Though the lack of narrative control is a natural consequence of the faux-decentralization of social media, it’s also a natural consequence of people’s shifting attitudes.

Apple’s recent ads about privacy should tell us all we need to know. So should the popularity of Shoshana Zuboff’s The Age of Surveillance Capitalism. 

The fact is that people are wary of the digital economy and data sharing, but they want the benefits that come from them.

If open banking is going to succeed, advocates of open banking need people’s support, not their wariness. The balancing act for advocates is to remind people of the benefits while ameliorating what makes them wary. Whether the worry is privacy, algorithmic decision-making and manipulation, or big tech usurping the Canadian financial sector, the story is yours to tell.

So be meticulous and ethical about your use cases. Or just keep your use cases from the hordes of Twitter.
