A look at a collection of examples of how leading institutions are utilizing machine learning to unlock value
Machine learning and artificial intelligence are set to become the defining technologies in banking and beyond. This has led some of the most powerful institutions to pursue partnerships, investments, and in-house development to take advantage of the application potential of machine learning and AI.
Let’s look at how leading institutions are using machine learning to unlock value from the vast data pools they command and continuously accumulate.
Aetna has launched a new security system for its consumer mobile and web apps that, in something of a twist, makes passwords optional. Instead of a password or fingerprint being the only barrier to entry, Aetna’s new behavior-based security system monitors user devices and how and where a consumer uses them. Consumers can also add the biometric protection available on their devices.
That risk engine takes in data from many attributes of the device (software configuration, operating system version, etc.), in addition to benign attributes of consumer behavior (for example, how a mobile device is held when texting and location of the device), and matches these attributes against a device signature and a model based on previous behavior.
The risk engine binds a consumer to one or more of the devices they typically use. If they use a new device, the authentication request may include a PIN or biometric to confirm the consumer wishes to bind their identity to a new device. The risk engine compares the benign behavioral attributes to the existing behavioral model and determines a risk score based on the match.
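The attribute-matching step described above can be pictured with a minimal sketch. This is not Aetna's actual engine; the attribute names (`tilt_deg`, `keys_per_min`) and the z-score deviation metric are invented for illustration of how a behavioral profile can be compared against a new session:

```python
# Hypothetical sketch of behavior-based risk scoring (not Aetna's actual engine).
# A stored profile holds the mean and standard deviation of each behavioral
# attribute observed on a bound device; new sessions are scored by how far
# they deviate from that profile.
from statistics import mean, stdev

def build_profile(sessions):
    """Aggregate past sessions into per-attribute (mean, stdev) pairs."""
    profile = {}
    for attr in sessions[0]:
        values = [s[attr] for s in sessions]
        profile[attr] = (mean(values), stdev(values))
    return profile

def risk_score(profile, session):
    """Average z-score deviation across attributes; higher = riskier."""
    deviations = []
    for attr, (mu, sigma) in profile.items():
        sigma = sigma or 1e-9                     # guard against zero variance
        deviations.append(abs(session[attr] - mu) / sigma)
    return sum(deviations) / len(deviations)

# Past sessions for one consumer: device tilt angle and typing speed (made up).
history = [
    {"tilt_deg": 30, "keys_per_min": 180},
    {"tilt_deg": 32, "keys_per_min": 175},
    {"tilt_deg": 29, "keys_per_min": 185},
]
profile = build_profile(history)
print(risk_score(profile, {"tilt_deg": 31, "keys_per_min": 179}))   # low risk
print(risk_score(profile, {"tilt_deg": 75, "keys_per_min": 40}))    # high risk
```

A real engine would learn which attributes matter via unsupervised methods rather than fixed z-scores, and would fold the result into an authentication decision threshold.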
“The risk engine is using unsupervised machine learning to match attributes to the existing model, so the more data provided into a model, the better it performs over time,” Jim Routh, Chief Security Officer at Aetna, explained. “Therefore, the more often the consumer uses the application, the more effectively the risk engine performs. Aetna provides consumers with choices on how they wish to interact and which types of biometric controls they prefer on their devices. Giving consumers choices gives them more convenience while also providing them with better security to protect their information.”
Read the full story on how Aetna replaces security passwords with machine learning tools.
“We are starting to deploy various aspects of AI into production. Machine learning, deep learning, natural language processing, and image recognition will each play an important and growing role in our business,” Simeon Preston, AIA Group COO, said.
Preston shared that AIA has deployed machine learning to optimize its actuarial modeling, embedded chatbots into its service proposition in several markets, and uses an AI engine to improve insurance claims outcomes in Australia. Read more.
Allianz Global Corporate & Specialty SE (AGCS), the corporate insurance carrier of Allianz SE, is working with Praedicat, an InsurTech analytics company based in Los Angeles, to better predict the key catastrophe liability risks of the future. By combining Praedicat’s predictive modeling approach with AGCS’ underwriting processes and extensive liability risk portfolio analysis, the companies aim to identify the next generation of catastrophe liability risks for business customers far earlier than under current methods. Praedicat’s modeling engine uses machine learning technology to scan large volumes of data from peer-reviewed science publications and profile the likelihood that products or substances will generate litigation risks over their lifecycle.
By complementing the traditional experience-based underwriting and portfolio management of liability risks with predictive analytics, AGCS and Praedicat aim to combine the best of both approaches in this new risk assessment methodology. Using forward-looking data models in addition to historic loss data analysis and risk engineering assessments, AGCS liability underwriters globally will be able to better identify and assess future liability risks for industries or single companies. Asbestos, which caused insured losses of $71 billion globally until 2011, is one high-profile example of such a man-made liability disaster.
Read more at Allianz.
AXA, one of the largest global insurance companies, used TensorFlow as a managed service on Google Cloud Machine Learning Engine to predict large-loss car accidents involving its clients with 78% accuracy in a proof of concept (POC).
Approximately 7-10% of AXA’s customers cause a car accident every year. Most of them are small accidents involving insurance payments in the hundreds or thousands of dollars, but about 1% are so-called large-loss cases that require payouts over $10,000. As you might expect, it’s important for AXA adjusters to understand which clients are at higher risk for such cases in order to optimize the pricing of its policies.
Toward that goal, AXA’s R&D team in Japan has been researching the use of machine learning to predict if a driver may cause a large-loss case during the insurance period. Initially, the team had been focusing on a traditional machine-learning technique called Random Forest. Random Forest is a popular algorithm that uses multiple Decision Trees (such as possible reasons why a driver would cause a large-loss accident) for predictive modeling. Although Random Forest can be effective for certain applications, in AXA’s case, its prediction accuracy of less than 40% was inadequate.
In contrast, after developing an experimental deep learning (neural network) model using TensorFlow via Cloud Machine Learning Engine, the team achieved 78% accuracy in its predictions. This improvement could give AXA a significant advantage for optimizing insurance cost and pricing, in addition to the possibility of creating new insurance services such as real-time pricing at the point-of-sale. AXA is still at the early stages with this approach – architecting neural nets to make them transparent and easy to debug will take further development – but it’s a great demonstration of the promise of leveraging these breakthroughs.
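For context, accuracy figures like AXA's 78% come from comparing a classifier's predictions against held-out labels. The sketch below uses entirely made-up data (it is not AXA's model or evaluation set) to show how accuracy and recall are computed for a binary large-loss task; note that on a raw population where only ~1% of drivers cause a large loss, recall matters as much as headline accuracy:

```python
# Illustrative sketch (hypothetical data): evaluating a binary large-loss
# classifier. Labels: 1 = a large-loss case (payout over $10,000), 0 = not.
def evaluate(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    recall = tp / (tp + fn) if tp + fn else 0.0   # share of large losses caught
    return accuracy, recall

y_true = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # made-up hold-out labels
y_pred = [1, 0, 0, 0, 0, 1, 0, 1, 0, 0]   # made-up model predictions
acc, rec = evaluate(y_true, y_pred)
print(f"accuracy={acc:.0%}, recall={rec:.0%}")  # accuracy=80%, recall=67%
```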
Read more about this case as explained by Kaz Sato, Staff Developer Advocate, Google Cloud, Google Inc.
Bank of America Merrill Lynch announced a new solution in August 2017 called Intelligent Receivables, which uses artificial intelligence and other software to help companies improve their straight-through reconciliation (STR) of incoming payments to help them post their receivables faster.
“Our solution brings together AI, machine learning, and optical character recognition (OCR), setting a new bar in accounts receivable reconciliation and payment matching,” added Rodney Gardner, Head of Global Receivables in Global Transaction Services. “We’re excited to be working with leading FinTech provider HighRadius to add Intelligent Receivables to our suite of solutions.”
“Bank of America Merrill Lynch’s Intelligent Receivables solution, powered by HighRadius’ cutting-edge machine-learning technology, will enable their corporate clients to accelerate the adoption of electronic payments from their end-customers. We are extremely excited to work with BofA Merrill on modernizing treasury management services and streamlining the receivables-to-cash cycle,” said Sashi Narahari, CEO & President of HighRadius Corporation.
Read more in the official press release.
Cristóbal Sepúlveda, Technical Architect at BBVA, presented an actual use case of this technology:
“At BBVA, we developed a service recommendation engine for bank users. With this proposal, what we are trying to do is offer the best commercial offer depending on the most used transactions by the user and their navigation patterns. All this information is processed in a classification algorithm which then generates a recommendation. The volume of information is incredibly vast, and the only way to offer a recommendation is using machine learning technologies,” he noted.
Read more on how BBVA embraces artificial intelligence and machine learning, in particular.
Danske Bank, the largest bank in Denmark, has created an in-house startup called Advanced Analytics, whose sole purpose is to use machine learning for predictive models to assess customer behavior and preferences on a personal level.
“By analyzing customer data, we were able to identify the customer’s preferred means of communication, such as phone, letter or email. [This sort of valuable info] has helped improve our marketing campaign hit rate by a factor of four,” says Bjørn Büchmann-Slorup, Head of Advanced Analytics at Danske Bank.
Goldman Sachs has been working on a project dubbed AppBank, an initiative that uses machine learning and is run by a new business unit of data scientists and machine learning professionals. Its goal is to increase large-scale automation; while it is particularly focused on operations technology, it will tackle applications across every business unit at the firm.
“The goal is to be able to provide more insight into the health and operations of the systems. We think of it as our ‘check engine light’ product,” said Don Duet, Head of Technology at GS.
Like a light on a car dashboard coming on to indicate a problem, the software would inform users when there was something that could prevent the bank’s technology infrastructure from running smoothly. Read more.
Being one of the most forward-thinking institutions, Goldman Sachs has strong ties (as a customer and as an investor) with AI software provider Digital Reasoning, whose solution GS uses to track traders. The same startup has also launched a program with NASDAQ to use its AI technology to track trading data, communications, emails, chats, and even voice data to ferret out misconduct across the entire electronic stock exchange. Goldman Sachs also uses the machine learning platform Kensho to mine data from the U.S. Bureau of Labor Statistics and compile all that information into regular summaries. The reports feature 13 exhibits predicting stock performances based on similar employment changes in the past, and they’re ready to print just nine minutes after the data is entered.
The CIO at HSBC, Darryl West, said, “The bank is using machine learning to run analytics over this huge data set with great compute capability to identify patterns in the data to bring out what looks like nefarious activity within our customer base. The patterns that we identify are then escalated to the agencies, and we work with them to track down the bad guys.”
The bank said that it is using Google Cloud machine learning capabilities for anti-money laundering (AML) monitoring.
At JPMorgan Chase, a learning machine is parsing financial deals that once kept legal teams busy for thousands of hours. The program, called COIN (Contract Intelligence), does the job of interpreting commercial-loan agreements that, until the project went online in June 2016, consumed 360,000 hours of work each year by lawyers and loan officers. The software reviews documents in seconds, is less error-prone, and never asks for vacation.
Made possible by investments in machine learning and a new private cloud network, COIN is just the start for JPMorgan Chase. The firm set up technology hubs for teams specializing in big data, robotics, and cloud infrastructure to find new sources of revenue while reducing expenses and risks. The system is already helping the bank automate some coding activities and making its 20,000 developers more productive, saving money. When needed, the firm can also tap into outside cloud services from Amazon, Microsoft, and IBM. Read more.
Lloyds Banking Group has partnered with AI startup Pindrop to use its machine learning technology to detect fraudulent phone calls. Pindrop can identify 147 different features of a voice from a phone call or even a Skype call, creating an audio fingerprint that can reveal information such as a caller’s location. Lloyds Banking Group will introduce the software across the Lloyds Bank, Halifax, and Bank of Scotland brands. Lloyds said the partnership with Pindrop would help it cut down call times as well as protect customers.
“The reason for us doing it is to save money from fraud,” said Martin Dodd, Group Telephone Managing Director at Lloyds Banking Group.
Read more about how Lloyds uses Google-backed AI to detect phone fraudsters.
The London Stock Exchange (LSE) has teamed up with IBM’s Watson business and cybersecurity firm SparkCognition to develop its AI-enhanced surveillance, Chris Corrado, Chief Operating Officer of LSE Group, said in an interview with Reuters.
MetLife Auto & Home is expanding its usage-based auto insurance program, My Journey, with a new smartphone app to monitor and improve its customers’ driving. Powered by technology from tech firm TrueMotion (a Boston-based tech company that combines the power of mobile technology, machine learning, and data science to impact the rising rates of automobile crashes and fatalities), the app utilizes the capabilities of an iOS or Android smartphone to provide drivers with quick feedback to both improve their driving and lower their auto insurance rates.
The My Journey program app automatically tracks key driving behaviors, including total miles driven, time of day, road type and conditions, hard braking and harsh acceleration, and phone-based distracted driving, in order to arrive at a score for each trip. The app does so by leveraging the sensors that are built into smartphones to continuously analyze data as the car is in motion. The app calculates and immediately displays an overall score for each trip from 1-100, with 100 being the safest possible trip. A cumulative safety score is built as time goes by. Read more.
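The per-trip scoring described above can be sketched as a simple penalty model. TrueMotion's actual scoring model is proprietary; every weight and event type below is invented purely to illustrate how sensor-derived events could be folded into a clamped 1-100 trip score and a running cumulative score:

```python
# Hypothetical sketch of a trip-scoring rule in the spirit of My Journey
# (all weights are made up for illustration).
def trip_score(hard_brakes, harsh_accels, distracted_minutes, night_miles, total_miles):
    """Start from a perfect 100 and subtract penalties per risky event."""
    score = 100.0
    score -= 5.0 * hard_brakes
    score -= 4.0 * harsh_accels
    score -= 2.0 * distracted_minutes                 # phone-based distraction
    if total_miles:
        score -= 10.0 * (night_miles / total_miles)   # share of night driving
    return max(1, min(100, round(score)))             # clamp to the 1-100 range

def cumulative_score(trip_scores):
    """Running safety score as the average of all trips to date."""
    return round(sum(trip_scores) / len(trip_scores))

trips = [trip_score(0, 1, 0, 2, 20), trip_score(3, 2, 5, 0, 10)]
print(trips, cumulative_score(trips))   # [95, 67] 81
```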
Shift Technology, a startup based in France, helped a European coalition of insurers analyze 13 million claims. The technology identified 3,000 new cases of potential fraud, including a large, organized crime scheme that impacted nearly all the coalition’s members. The scam had siphoned millions of Euros from the group’s insurance company members over the span of many years, according to a Shift Technology case study. Read more.
“As an early adopter, Munich Re has played a key role in helping us shape our new solutions and platform,” said Saurabh Gupta, Director of Analytics Products at SAS. “While working with them, we heard that the new release unifies their analytics infrastructure and enables users of varying skill sets to collaborate to solve the organization’s challenges faster. Customer feedback has always been instrumental in helping SAS release world-class products.”
Munich Re has access to massive amounts of data that are pulled into a centralized environment. Being able to use SAS to run sophisticated machine learning algorithms on big data within a collaborative user interface will allow the company to gain analytic insights to quickly address business challenges and serve clients. Plus, having access to embedded AI capabilities and the latest deep learning algorithms helps the company to stay at the leading edge of what is possible with analytics.
“The newest version of SAS allows all our users to quickly get started and collaborate with a unified and visual interface,” said Wolfgang Hauner, Chief Data Officer at Munich Re. “We like that it allows those who aren’t as familiar with SAS to code in Python and R and run the same actions on the same platform. It is both in tune with the end-to-end needs of an advanced data scientist and is also convenient for beginners. This ability to appeal to data scientists and non-coders will allow multiple users and teams to explore and analyze the same data, making the data discovery and model-building process more collaborative.”
Read more in SAS official press release.
The Singapore-based OCBC Bank has unveiled plans to use artificial intelligence and machine learning as part of its efforts to reduce financial crimes. The bank intends to deploy these technologies to deal with the increasing scale and complexity of AML monitoring, in addition to increasing the bank’s operational efficiency and accuracy in the detection of suspicious transactions. OCBC Bank has conducted a PoC with ThetaRay. Now, the company plans to start an extended PoC and a pre-implementation phase. The algorithm will detect anomalies in transactional behavior by evaluating broad parameters such as products, customers, and risks, instead of looking at each transaction as a standalone. In the PoC stage, the technology was deployed to analyze one year’s worth of OCBC Bank’s corporate banking transaction data. The findings demonstrated that it reduced the number of alerts that did not require further review by 35%.
In November 2017, Prudential Singapore (Prudential) announced its trial of an industry-first, machine learning-based solution that assesses claims in seconds. It sits at the core of a new customer e-claims platform which Prudential is making available to selected policyholders on a trial basis.
The first phase of the trial (November 2017) was focused on automating the processing of PRUshield pre/post-hospitalization claims from eight major hospitals. These form the bulk of the 14,000 paper bills and receipts that Prudential’s claims assessors review each month. The trial aimed to simplify the process by allowing participants to upload scans or images of bills and invoices through the PRUaccess customer portal, significantly reducing the time that claims assessors spend on handling paper-based submissions. The system’s intelligent decision-making capabilities aim to progressively shorten the claims assessment time from seven days down to mere seconds by the time the trial ends in the first half of 2018.
Once a participant uploads and submits a claim on the trial e-claims system, the inbuilt text-mining engine identifies and categorizes payable and non-payable line items. Then, the intelligent machine learning engine assesses the validity of the claim and recommends an outcome (approve, partial approve or decline) and the payment amount.
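The two-stage flow above can be pictured with a minimal sketch. Prudential's actual engine is trained on historical claims data; the keyword list, category names, and policy-limit rule here are invented solely to show the shape of a categorize-then-recommend pipeline:

```python
# Hedged sketch of the two-stage e-claims flow (keywords and thresholds are
# hypothetical; the real engine learns these from past claims).
NON_PAYABLE_KEYWORDS = {"television", "parking", "guest meal"}  # hypothetical

def categorize(line_items):
    """Stage 1: text mining splits line items into payable / non-payable."""
    payable, non_payable = [], []
    for desc, amount in line_items:
        bucket = non_payable if any(k in desc.lower() for k in NON_PAYABLE_KEYWORDS) else payable
        bucket.append((desc, amount))
    return payable, non_payable

def recommend(payable, policy_limit):
    """Stage 2: recommend approve / partial approve / decline with an amount."""
    total = sum(amount for _, amount in payable)
    if total == 0:
        return "decline", 0.0
    if total <= policy_limit:
        return "approve", total
    return "partial approve", policy_limit

items = [("Ward charges", 1200.0), ("Surgical fee", 800.0), ("Television rental", 15.0)]
payable, non_payable = categorize(items)
print(recommend(payable, policy_limit=1500.0))  # ('partial approve', 1500.0)
```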
The system has already been trained and back-tested using claims data from the last two years, reaching a good level of accuracy. In the first phase of the trial, claims assessors will review the machine’s recommendations and provide feedback to the engine for continuous learning until it reaches an optimal level of confidence.
Prudential intends to fully launch the e-claims platform with straight-through processing capability in the second half of 2018.
QBE Insurance Group (QBE) not only closed an investment into Cytora through its venture arm in 2017 but also entered an agreement to use the three-year-old London-based startup’s technology. Cytora uses artificial intelligence (AI) and open-source data to help commercial insurers lower loss ratios, grow premiums, and improve expense ratios.
In 2018, the Cytora Risk Engine will be deployed across QBE property and casualty lines. The Cytora Risk Engine, driven by machine learning algorithms, combines an insurer’s internal data on a specific cover with external information from a broad spectrum of sources. This generates a risk score, which provides enhanced insight into expected claims activity on the whole portfolio and also at an individual risk level. Read more at QBE.
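Blending internal cover data with external signals into one score can be sketched as a weighted combination. The real Cytora Risk Engine learns its weights with machine learning; the feature names and weights below are invented for illustration only:

```python
# Hypothetical sketch of blending internal and external signals into a single
# 0-100 risk score (feature names and weights are made up; the actual engine
# derives them from data).
def blend_risk_score(internal, external, weights):
    """Weighted sum of normalized signals, scaled to 0-100 (higher = riskier)."""
    signals = {**internal, **external}
    raw = sum(weights[name] * value for name, value in signals.items())
    return 100 * raw / sum(weights.values())

internal = {"past_claims_rate": 0.30, "cover_size": 0.60}           # insurer's own data
external = {"news_incident_mentions": 0.10, "sector_hazard": 0.40}  # crawled sources
weights  = {"past_claims_rate": 3.0, "cover_size": 1.0,
            "news_incident_mentions": 2.0, "sector_hazard": 2.0}
print(blend_risk_score(internal, external, weights))
```

The same function scores a whole portfolio by summing or averaging over individual risks, which mirrors the portfolio-level and individual-level insight described above.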
The SEC turned to advanced methods after the 2008 crisis. “…the use of simple word counts and something called regular expressions, which is a way to machine-identify structured phrases in text-based documents. In one of our first tests, we examined corporate issuer filings to determine whether we could have foreseen some of the risks posed by the rise and use of credit default swaps [CDS] contracts leading up to the financial crisis. We did this by using text analytic methods to machine-measure the frequency with which these contracts were mentioned in filings by corporate issuers. We then examined the trends across time and across corporate issuers to learn whether any signal of impending risk emerged that could have been used as an early warning.”
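The word-count and regular-expression method the SEC describes is straightforward to sketch. The filing snippets below are invented; only the technique (machine-counting CDS mentions per filing year and reading the trend) follows the quote above:

```python
# Minimal sketch of the SEC's text-analytic method: regular expressions count
# mentions of credit default swaps in issuer filings, and the per-year counts
# form an early-warning trend (filing texts are hypothetical).
import re

CDS_PATTERN = re.compile(r"credit[\s-]default swap[s]?|\bCDS\b", re.IGNORECASE)

def mention_count(filing_text):
    return len(CDS_PATTERN.findall(filing_text))

filings = {  # hypothetical issuer filings by year
    2005: "The company holds standard interest rate hedges.",
    2006: "We entered into credit default swaps to hedge counterparty exposure.",
    2007: "Exposure to credit default swap (CDS) contracts grew; CDS positions widened.",
}
trend = {year: mention_count(text) for year, text in filings.items()}
print(trend)  # {2005: 0, 2006: 1, 2007: 3}
```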
To this day, the SEC actively studies the potential of machine learning through continuous testing across its core activities.
FINRA monitors roughly 50 billion market events a day, including stock orders, modifications, cancellations, and trades. It looks for around 270 patterns to uncover potential rule violations. It would not say how many events are flagged or how many of those yield evidence of misbehavior. The machine learning software FINRA is developing will be able to look beyond those set patterns and understand which situations truly warrant red flags.
More on how FINRA is leveraging machine learning and artificial intelligence to catch stock market cheaters can be found here.
Transamerica, a holding company for US-focused life insurance companies and investment firms, is a subsidiary of the Dutch life insurance multinational Aegon. Transamerica provides life and supplemental health insurance, investment, and retirement services to 27 million customers. Given the array of financial services the company offers, Transamerica uses its Enterprise Marketing and Analytics Platform (EMAP) to create a comprehensive view of its clients in order to best serve their needs.
The company integrates data from across its insurance, retirement, and investment lines of business with third-party data. EMAP pulls in data from more than 40 sources, including consumer income and social media data. All that information is used to identify new patterns.
Transamerica decided to tackle its data analytics challenges with a Hadoop-based data lake. The company uses Cloudera’s distributed Enterprise Data Hub for storing structured, semi-structured, and unstructured data. Informatica’s Big Data Management (BDM) product handles vital data management functions, including data ingestion and integration, data profiling, and data quality.
Transamerica uses different processing engines, such as MapReduce, to parcel out work to various nodes and organize the results. The company also deployed Spark, a fast, in-memory data processing engine that’s particularly efficient with SQL and machine learning. Transamerica relies significantly on machine learning to draw insight from its data. Machine learning automates data analysis through algorithms that iteratively learn to uncover insights they weren’t specifically programmed to find. The company uses H2O, an open-source machine learning platform. Using H2O, Transamerica leverages in-memory distributed processing on Hadoop and lets data scientists run large numbers of machine learning models using common programming languages for big data, such as R, Python, and Scala. Read the full article at Information Management.
Wells Fargo analysts built a robot called AIERA (artificially intelligent equity research analyst), which is now tracking 13 stocks.
“AIERA’s primary purpose is to track stocks and formulate a daily, weekly and overall view on whether the stocks tracked will go up or down,” said Ken Sena, Managing Director, Global Internet Analyst, Wells Fargo Securities. “View AIERA as enhancing versus replacing.”
The months spent developing the bot helped the team of analysts deepen their understanding of the artificial intelligence and machine learning capabilities used at many of the internet companies they analyze. While AIERA is not yet picking stocks in the traditional sense, her validity tests continue to indicate above-average performance. Read more.
Similar to QBE, Cytora’s technology is used by the property and casualty insurance and reinsurance provider XL Catlin. XL Catlin said it would use Cytora’s expertise in sourcing and analyzing data from multiple sources and combining them to create new insights into risk.
Cytora’s Risk Engine captures the online footprint of the risks clients continuously face by crawling data from company websites, news articles, and government datasets, and processing it with AI algorithms to predict future claims, attractive risk profiles, and risk quality.
Zurich Insurance is deploying artificial intelligence in deciding personal injury claims after trials cut the processing time from an hour to just seconds, its chairman said.
“We recently introduced AI claims handling … and saved 40,000 work hours while speeding up the claim processing time to five seconds. We absolutely plan to expand the use of this type of AI,” Tom de Swaan told Reuters after the insurer started using machines in March 2017 to review paperwork, such as medical reports.
“Accuracy has improved. Because it’s machine learning, every new claim leads to further development and improvements,” de Swaan added. De Swaan said Zurich Insurance, Europe’s fifth-biggest insurer, would increasingly use machine learning, or AI, for handling claims. Read more.
To learn about Prove’s identity solutions and how to accelerate revenue while mitigating fraud, schedule a demo today.