CFI Fellow Patrick Traynor, an Associate Professor in the Department of Computer and Information Science and Engineering (CISE) at the University of Florida, explains the second part of his research effort on the security and privacy of data in digital lending applications. Patrick’s previous post, explaining the first part of his research on evaluating the privacy policies of digital lending applications, can be found here.

“I’m sorry, but we are just not interested in providing security for our customers.”

That is a phrase you are unlikely to hear from any company, at least not one that wants to stay in business. In fact, you are far more likely to see statements to the contrary. Yet time and again, the same services that tout security as something they care about prove to be tremendously vulnerable. Think about it – when was the last week that you didn’t hear about stolen Bitcoins, ransomware attacks, or data breaches?

If companies care so much about security, what is going on?

What Does “Secure” Mean?

Security is one of the least well-defined terms I know. By itself, it completely lacks context. Secure against what? Against whom? Under what conditions? Based on what assumptions?

As an analogy, think about a restaurant that claims to care about “consumer health.” Is this statement about the ingredients in their meals? Calories? What about cleanliness? How is the food prepared? Do employees really wash their hands after using the restroom? Most would agree that it is insufficient to simply trust such claims, and we rely on external entities (e.g., health department ratings, government regulations for calorie counts on menus, etc.) to give “consumer health” more measurable meaning.

The answers to my above “security in context” questions become deeply technical very quickly. In fairness, it is debatable whether it is realistic for the general public to understand these details. Regardless, this gap between specificity and understanding often permits real security considerations to end at exceedingly broad public statements. If nobody checks or bothers to define what security means, the thinking often goes, why bother to go further?

That’s where my team comes in – think of us as the health department for the security of online credit systems.

Before I go into further details, let’s make something clear. “Security” is hard. Even if you answer all of the context questions above, ensuring that you get it right all of the time in every location is something that nobody seems to know how to do well. Unlike most other engineering, where major assumptions remain within a predictable range (e.g., when building a bridge, winds, currents and the pull of gravity are all accounted for in the design), security faces the problem of constantly evolving adversaries. That means that what worked perfectly well yesterday may do very little to protect systems tomorrow. That said, we cannot write off security as an inevitable failure just because it is hard. Moreover, there are many tried and true techniques that absolutely should be in place before anyone begins tossing around the word “security.” Our job is to ensure that these techniques are in place and working correctly.

What Are We Measuring?

If we are specific about what we claim is secure, it is possible to understand the ways in which systems are likely to be strong. For my CFI Fellowship research on the security of online credit applications, I am measuring the characteristics of the connection between mobile devices and the server within the application provider’s network. While it is possible to measure the security of many parts of an online credit system, the most critical is ensuring that connections between mobile devices and the service provider’s servers are secure. Accordingly, we focus our attention here. The figure below shows this in more detail.

A high-level overview of our security analysis

Why is this the right thing to measure? Simply, if a user cannot communicate securely with a company’s server, nothing else matters. An adversary need not bother to try to breach other security if this portion is poorly executed. Think of it this way – if the online credit provider’s system were a castle, we are most interested in the protections around the drawbridge over the moat. If nobody is watching who and what comes across that drawbridge, it does not matter how high the castle walls may be. Said differently, systems that do not get this part of security correct might as well be saying that quotation from the beginning of this post.

My team will characterize security with regard to the following metrics. First, we will look at whether or not the mobile device uses strong encryption algorithms to protect the confidentiality (i.e., the secrecy) and integrity of all communications. Mobile platforms contain many different encryption algorithms, but even the best algorithms can be used in dangerous ways. Accordingly, we will reverse engineer mobile applications (where available) and determine which algorithms their developers have chosen.
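To give a flavor of what this first check looks for: once an application is decompiled, even a simple scan of its source can flag dangerous algorithm choices. The sketch below is purely illustrative – the pattern list is a small, hypothetical sample (real analyses are far more thorough), and `flag_weak_crypto` is a name we invented for this post.

```python
import re

# Hypothetical patterns flagging weak or misused crypto primitives in
# decompiled Java/smali source. Illustrative only -- not exhaustive.
WEAK_PATTERNS = {
    "DES (obsolete cipher)": re.compile(r'Cipher\.getInstance\(\s*"DES'),
    "AES in ECB mode (leaks plaintext patterns)": re.compile(r'Cipher\.getInstance\(\s*"AES/ECB'),
    "RC4 (broken stream cipher)": re.compile(r'Cipher\.getInstance\(\s*"RC4'),
    "MD5 (broken hash)": re.compile(r'MessageDigest\.getInstance\(\s*"MD5"'),
}

def flag_weak_crypto(source: str) -> list[str]:
    """Return descriptions of weak primitives referenced in a source snippet."""
    return [name for name, pattern in WEAK_PATTERNS.items()
            if pattern.search(source)]

# Example: AES itself is strong, but ECB mode encrypts identical plaintext
# blocks to identical ciphertext blocks -- a classic "good algorithm, bad use."
snippet = 'Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");'
print(flag_weak_crypto(snippet))
```

The ECB example is exactly the kind of finding this metric targets: the developer did choose AES, yet the way it is used still undermines confidentiality.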

Second, we will measure how mobile devices confirm the identity of the server. This is critical – when done poorly, a mobile device is happy to divulge any and all secrets to an adversary on the other end of a connection. There are some standard ways that an expert would expect this step to be done, and our previous published work on mobile money (Reaves, et al., Mo(bile) Money, Mo(bile) Problems: Analysis of Branchless Banking Applications in the Developing World, Proceedings of the USENIX Security Symposium (SECURITY), 2015) showed that approximately 50 percent of applications analyzed in that space did so incorrectly.
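To make this second metric concrete, here is a minimal sketch in Python of the difference between doing this step correctly and the anti-pattern our analysis looks for. The function names are ours, and this is a simplification – the apps we study implement this in their own platform code – but the two configurations below capture the essence of the check.

```python
import socket
import ssl

def fetch_server_cert(host: str, port: int = 443) -> dict:
    """Connect with full certificate-chain AND hostname verification --
    the behavior we expect to see from a well-built client."""
    ctx = ssl.create_default_context()  # verifies chain and hostname
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

def insecure_context() -> ssl.SSLContext:
    """The anti-pattern: a client configured this way will happily hand
    its secrets to ANY server, including an attacker in the middle."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False        # accepts a certificate for any name
    ctx.verify_mode = ssl.CERT_NONE   # accepts any certificate at all
    return ctx
```

When a mobile application ships the equivalent of `insecure_context()`, an adversary on the network path can impersonate the provider’s server, which is exactly the failure mode our earlier mobile money study found in roughly half of the applications analyzed.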

Finally, we will measure the configuration of the servers with which a mobile device communicates. Poor configuration here could again render the connection vulnerable as an adversary may be able to force the selection of weak/no cryptographic algorithms encrypting communications.
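As a small illustration of this third metric, the sketch below audits one server-side setting – the oldest TLS protocol version the server will accept. This is a local stand-in for the checks we run against live servers (the `audit_minimum_version` name and the TLS 1.2 floor are assumptions made for this example, not a description of our full methodology).

```python
import ssl

# Assumed policy floor for this sketch: anything older than TLS 1.2
# is treated as a weak configuration an adversary could downgrade to.
WEAK_FLOOR = ssl.TLSVersion.TLSv1_2

def audit_minimum_version(ctx: ssl.SSLContext) -> list[str]:
    """Flag a server-side TLS context that still permits legacy protocol
    versions below our assumed floor."""
    findings = []
    if ctx.minimum_version < WEAK_FLOOR:
        findings.append("server permits TLS versions older than 1.2")
    return findings

# A context left at "no restriction" is flagged; an explicitly
# hardened one passes.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.MINIMUM_SUPPORTED
print(audit_minimum_version(ctx))

ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # hardened
print(audit_minimum_version(ctx))
```

In practice we probe the provider’s live servers rather than local contexts, and we examine cipher-suite offerings as well as protocol versions, but the principle is the same: a permissive configuration lets an adversary negotiate the connection down to something breakable.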

One major advantage to this analysis is that fixes to these problems can also be achieved at relatively low cost. Specifically, any faults that we find will not require impacted companies to spend hundreds of thousands of dollars to deploy advanced detection systems. Rather, a combination of changes by their development team and administrators can potentially address these issues in short order. We intend to alert all of the impacted companies prior to the release of our final report.

As a final note, finding no faults in this portion of a system does not mean that everything is secure. Moreover, this kind of analysis requires significant resources and time to carry out. It is our hope that each of the companies we study sees these results as a first step, and uses them to start a broader internal discussion about what they are doing to ensure security inside their company (e.g., creating policies about software updates and patching machines, establishing responsible disclosure channels for researchers who discover vulnerabilities, deploying mechanisms for the secure storage of sensitive data, etc.).

No single analysis can measure all aspects of security. However, we look forward to sharing our results with the general public and the financial inclusion community in the near future. Together, we can ensure the transformative power of these systems while minimizing risks to users.

Image credit: Accion
