The potential of FinTech to help the world’s financially excluded populations is well-documented: it can reduce vulnerabilities, build assets, help manage cash flows and increase income. Yet FinTech is not without risks. Here are a few things to keep in mind when considering FinTech for the financially excluded:
- The loss of high-touch customer service – human financial transactions are built on trust, and this is especially true for lower-income consumers. Proponents of algorithm-based lending argue that it eliminates subjectivity in decision-making, but it also often produces excessive standardization that overlooks suitability principles such as “sell only what clients need and can use”.
- The value of transparency – because FinTech decisions are algorithmic, consumers can often be refused a loan based on “alternative data” such as geolocation, frequency of SMS use, phone-charging patterns, medical records, browsing history, social media profiles and online purchasing. But what recourse does someone have if rejected on the basis of such data?
- Ensuring ethical behavior – overly aggressive sales targets can drive dishonest and unethical behavior from sales staff trying to meet unrealistic quotas. This increases risk to both the company and its consumers. Establishing an environment grounded in ethical leadership greatly reduces this risk.
- Focusing on the end user – responsible FinTech for the financially excluded means including them in a way that also protects them. The willingness to question assumptions and to check where things might go wrong is a hallmark of responsible lenders, as is accounting for the specific vulnerabilities of lower-income customers and exercising respect and sound judgment. In the end, remember that artificial intelligence cannot replace human empathy and human judgment.