Kai-fu Lee talks his book

In an essay in the WSJ adapted from a book due out today, Lee writes,

While a human mortgage officer will look at only a few relatively crude measures when deciding whether to grant you a loan (your credit score, income and age), an AI algorithm will learn from thousands of lesser variables (what web browser you use, how often you buy groceries, etc.). Taken alone, the predictive power of each of these is minuscule, but added together, they yield a far more accurate prediction than the most discerning people are capable of.

I am willing to bet against that.

1. A credit score already makes use of a lot of information that human underwriters never used to look at. Credit scoring was “big data” before that term existed.

2. To use those “lesser variables” in the United States, you have to prove that they don’t harm minorities’ access to credit.

3. The marginal value of additional information about the borrower is not very high. In a home price boom, “bad” borrowers will repay their loans; in a bust, some “good” borrowers will default.
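
A back-of-the-envelope simulation of point 3, with made-up default rates, shows the idea: within a boom or a bust, knowing the borrower better moves the estimate a little, but which regime you are in moves it a lot.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical default probabilities: borrower quality matters,
# but the housing regime (boom vs. bust) matters far more.
p_default = {
    ("good", "boom"): 0.005, ("bad", "boom"): 0.015,
    ("good", "bust"): 0.060, ("bad", "bust"): 0.120,
}

quality = rng.choice(["good", "bad"], size=n)
regime = rng.choice(["boom", "bust"], size=n)
probs = np.array([p_default[(q, r)] for q, r in zip(quality, regime)])
defaulted = rng.random(n) < probs

for r in ("boom", "bust"):
    for q in ("good", "bad"):
        mask = (quality == q) & (regime == r)
        print(f"{r} / {q} borrowers: default rate {defaulted[mask].mean():.3f}")
```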

I am starting to believe that artificial intelligence, when it consists of making predictions using “big data,” is overrated by many pundits. As with Lee’s mortgage underwriting example, it does not help if AI solves the wrong problem. To take another example, figuring out which ads to serve on content sites that shouldn’t be ad-supported in the first place is solving the wrong problem.

14 thoughts on “Kai-fu Lee talks his book”

  1. I am starting to believe that artificial intelligence, when it consists of making predictions using “big data,” is overrated by many pundits.

    I agree. I believe AI is overrated by most pundits regardless of whether or not ‘big data’ is involved. I, too, am a fan of Neal Stephenson’s Diamond Age but am skeptical that AI is going to deliver an ideal education for every child (or any child). And I’ve long been an autonomous vehicle skeptic, who agrees with the recent ‘probably not in my lifetime’ prediction of this Toyota auto exec:

    https://www.consumeraffairs.com/news/toyota-executive-says-self-driving-car-technology-is-overhyped-092118.html

  2. I agree with your take, Arnold, and would go further: the idea that “Taken alone, the predictive power of each of these is minuscule, but added together, they yield a far more accurate prediction than the most discerning people are capable of” is unlikely to amount to much. The little things in your life are going to be heavily correlated with your income. Where you shop is going to be influenced by where you live and work. Your available credit is correlated with your previous spending habits, and so on. In the end these little, subtle things are going to add up to one slightly less little thing.

    The more likely use case is fraud detection: someone claims to live and work in one area, but they are shopping for groceries 15 miles away at a store that has a location much nearer. OK, we should check that application a little more rigorously than usual.
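
    As a rough illustration on purely synthetic data (the “lesser variables” below are modeled as noisy echoes of income, and all the numbers are made up), adding dozens of such proxies barely moves the needle once income itself is in the model:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n, k = 20_000, 50

    # Income drives default risk; the 50 "lesser variables" are noisy copies of income.
    income = rng.normal(size=n)
    proxies = income[:, None] + rng.normal(scale=2.0, size=(n, k))
    default = rng.random(n) < 1.0 / (1.0 + np.exp(2.0 + 1.5 * income))

    for label, X in [("income only", income[:, None]),
                     ("income + 50 correlated proxies", np.hstack([income[:, None], proxies]))]:
        X_tr, X_te, y_tr, y_te = train_test_split(X, default, test_size=0.5, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{label}: test AUC = {auc:.3f}")
    ```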

  3. I don’t think you’re thinking on the margin. Maybe you’re right about disparate impact laws, but that’s not a knock against the technology.

    A smart application of big data and machine learning can be much smarter about price discrimination in loans.

    • Large changes in delinquencies come with recessions; non-recession delinquencies tend to be low (below 2.5%), making it tough to find large improvements. Most delinquencies are multi-causal: someone gets sick, plus there is another unexpected expense (a car accident, short-term unemployment) for a family that was already on the edge. It is unlikely that machine learning is going to be a large boost to understanding these circumstances.
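
    To put rough numbers on “tough to find large improvements” (everything below is assumed, not measured): even an optimistically better model only touches the small slice of loans that were going to go delinquent.

    ```python
    # Back-of-the-envelope, with made-up figures.
    n_loans = 100_000
    delinquency_rate = 0.025        # the non-recession base rate mentioned above
    loss_per_delinquency = 50_000   # assumed average loss per delinquent loan

    baseline_losses = n_loans * delinquency_rate * loss_per_delinquency
    # Optimistically assume a smarter model screens out 10% of would-be delinquencies
    # without turning away any good borrowers.
    improved_losses = baseline_losses * 0.9

    print(f"baseline losses:      ${baseline_losses:,.0f}")
    print(f"with a smarter model: ${improved_losses:,.0f}")
    print(f"savings per loan originated: ${(baseline_losses - improved_losses) / n_loans:,.0f}")
    ```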

  4. I agree that additional information will add limited value to the accuracy of predictions, but that doesn’t necessarily mean it will not make the acceptance process more stringent or random.

    A slightly overfit model will produce unexpected rejections or acceptances without obviously degrading overall performance.

    I think certain types of data could be very dangerous. Credit scores using social graphs mined from Facebook and the like could provide a financial incentive to control who you spend time with.
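
    One way to see how individual decisions can shift without the headline numbers moving, sketched on synthetic data: train two models that differ only in their random seed and compare who they would reject.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic "applications" with noisy labels.
    X, y = make_classification(n_samples=20_000, n_features=40, n_informative=5,
                               flip_y=0.2, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    rejections = []
    for seed in (1, 2):
        model = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X_tr, y_tr)
        risk = model.predict_proba(X_te)[:, 1]              # treat class 1 as "default"
        print(f"model {seed} test AUC: {roc_auc_score(y_te, risk):.3f}")
        rejections.append(risk >= np.quantile(risk, 0.8))   # reject the riskiest-looking 20%

    flipped = np.mean(rejections[0] != rejections[1])
    print(f"applicants whose decision flips between the two models: {flipped:.1%}")
    ```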

  5. Your point (2) could bear some elaboration. Take this process:

    1) Identify the variables that are obviously relevant to a decision, and account for them as best you can.
    2) Remove one variable (race) because including it perpetuates inequalities and is against the law. Assume for the sake of argument that the removed variable had some statistical relevance beyond its correlation with income and wealth.
    3) Use AI to look at a bunch of variables with no apparent correlation to the decision.

    To the extent step 3) improves the decision, it seems very likely that it’s because the algorithm is finding a shadow variable for race and adding it back into the equation. It’s not just a legal compliance question of *proving* that your algorithm isn’t discriminating; the problem is that you should expect a dutiful algorithm to be primarily focused on finding shadow variables to circumvent the restriction imposed in step 2). (Note that this would not be the case if race doesn’t have any predictive power and we are primarily concerned about taste-based discrimination by lenders.)
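
    A toy version of the shadow-variable point, on purely synthetic data (the variable names and effect sizes are invented): drop the protected attribute, keep a correlated “lesser variable,” and the fitted model reproduces much of the group difference anyway.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 50_000

    # Synthetic setup; every relationship here is made up for illustration.
    group = rng.integers(0, 2, size=n)                     # protected attribute, dropped in step 2)
    income = rng.normal(loc=-0.5 * group, size=n)          # income correlated with group
    neighborhood = group + rng.normal(scale=0.7, size=n)   # "lesser variable" that shadows group
    # Per the premise above, group has predictive power beyond income.
    default = rng.random(n) < 1.0 / (1.0 + np.exp(3.0 + 1.0 * income - 0.7 * group))

    # Step 3: fit on income plus the big-data variable, with group excluded.
    X = np.column_stack([income, neighborhood])
    model = LogisticRegression(max_iter=1000).fit(X, default)

    print("coefficients (income, neighborhood):", model.coef_.round(2))
    risk = model.predict_proba(X)[:, 1]
    print("mean predicted risk, group 0 vs group 1:",
          round(risk[group == 0].mean(), 3), round(risk[group == 1].mean(), 3))
    ```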

  6. Kai-Fu Lee developed the world’s first speaker-independent, continuous speech recognition system as his Ph.D. thesis at Carnegie Mellon. Wow. And he was not afraid to reject binary thinking: “Which vision to accept? I’d say neither.” You don’t see that much anymore.

    I am ashamed to admit I didn’t know who he was before looking him up while reading this article.

    It is good to see the WSJ offering Chinese intellectuals a platform. I don’t know whether it is just my own ignorance or whether, in general, Chinese intellectuals have less prominent public profiles in the US than their accomplishments would merit.

    For example, I noticed a couple days ago in the South China Morning Post that Shanghai-born Charles Kao, the Nobel Prize-winning scientist who pioneered fiber optics, died in Hong Kong. Seems like he should have had greater stature in the US.

    And it’s hard not to notice that China is leading the world these days in practical technological accomplishments such as promoting the flourishing of e-commerce. Alipay has outgrown PayPal, and China leads in advancing dockless bicycles and high-speed rail.

    As US experts and scientists increasingly devote themselves exclusively to punditry and prophecies, it appears that Americans just won’t do the hard jobs of producing socially useful research and applied technology. The US really needs to stop funding the impractical university programs devoted to largely philosophical outcomes and attract more Chinese immigrants to take over university positions in order to grow the US economy.

  7. “Taken alone, the predictive power of each of these is minuscule, but added together, they yield a far more accurate prediction than the most discerning people are capable of.”

    This is the part I don’t believe. My prior is that this information is noise, and adding it together will not meaningfully advance the accuracy of the prediction. Indeed, it seems to implicitly rest on some pretty strong behavioural assumptions, of the kind that underlie most of the omnicausal social science puffery.

  8. I am starting to believe that artificial intelligence, when it consists of making predictions using “big data,” is overrated by many pundits.

    Isn’t the main point about pundits and investors that they are usually right, but it takes a lot longer to come to fruition than expected, which is where Big Data will have its impact? In 1999, it was predicted that dotcom and tech would dominate the economy, but that takes decades to take hold. We tend to under-estimate how long it took past inventions to take hold. The car was sort of running by 1895, and it took until 1912 before there were 1M cars/trucks in the US, let alone before the average family owned a car (sometime in the 1920s).

    Probably the thing about Big Data is:
    1) If well organized, it can improve decision making, although it will take a long time to do so without human intervention. (Working with some big data, I have seen improvements in decision making, but it is not as if past decisions were all dumb stuff.)
    2) Probably the most obvious example of Big Data is baseball and stats. The use of detailed baseball stats has been around since the early 1980s and yet didn’t hit big until Moneyball. However you review the history, you do see some usage of Moneyball stats in the managing of Earl Weaver in the late 1970s or Davey Johnson in the 1980s. (Any analysis of the 1990s shows executives and managers moving in this direction. Oddly enough, there has since been a decrease in platoon Moneyball.)

    • The biggest issue with Big Data is that there is so much data; the challenge is using it to move the share of right decisions from, say, 95% to 97%.

  9. I’m going to predict that “predicting” is found to be highly problematic in many contexts, and a mortgage application is a perfect example. A rating of past performance is the defensible calculation.

    I hope some common sense will prevail. My worry is that AI is so technically opaque outside of the development team. This often causes managers to fail to provide the necessary levels of review and oversight. There is great potential for harm here.

  10. If we had our intelligent cards, the prospective home buyer could just sign up for the homeowner’s budget. The new smart cards would enforce your budget, warning you when you stray. After one or two years you show your card to the banker and the card will tell the truth; it contains your credit score, accumulated at run time as you spend.

    Thus the person doing the activity is self-evaluated by contracted budgeting. I can do that today on a smartphone.
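
    A toy sketch of what such a card might do (the categories, limits, and scoring rule are all invented):

    ```python
    class BudgetCard:
        """Tracks spending against a contracted budget and accumulates a simple
        adherence score as you spend."""

        def __init__(self, monthly_budget):
            self.monthly_budget = monthly_budget          # e.g. {"groceries": 600, "rent": 1500}
            self.spent = {c: 0.0 for c in monthly_budget}
            self.months_on_budget = 0
            self.months_total = 0

        def spend(self, category, amount):
            self.spent[category] += amount
            if self.spent[category] > self.monthly_budget[category]:
                print(f"warning: over budget in {category}")   # warn when you stray

        def close_month(self):
            self.months_total += 1
            if all(self.spent[c] <= self.monthly_budget[c] for c in self.monthly_budget):
                self.months_on_budget += 1
            self.spent = {c: 0.0 for c in self.monthly_budget}

        def score(self):
            # What the banker sees after a year or two: the share of months on budget.
            return self.months_on_budget / max(self.months_total, 1)

    card = BudgetCard({"groceries": 600, "dining": 200})
    card.spend("dining", 250)      # prints a warning
    card.close_month()
    print(card.score())            # 0.0 after one over-budget month
    ```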

  11. The point about big data is spot on. A corollary is that most of the privacy stuff around big data isn’t that big a deal. They really don’t know more than advertisers knew previously. Targeting used to consist of using the form of advertising that best reached your audience: it might be a local talk radio show about gardening, or a magazine focused on it; now we can micro-target people who garden on the big ad networks. But the result is the same, and now you have to compete against other ads for the groups those people are also a part of. Additionally, we’ve killed all of the specialized media, because the specialized media used to get the extra value from targeted advertising; now the middlemen get it, and the ads are everywhere those people look.

  12. I also agree that big data and AI are overrated, for slightly different reasons, reasons complementary to your own:

    1: AI has to have a value function to optimize on. In chess that is easy: every option can be evaluated on how likely it is to result in a win/draw/loss over time. In lending, you can look at estimated loss and gain, say the range of probabilities over “repaid in full” to “defaulted but we could sell the house and make it up” to “all our money just vaporized along with the house.”

    2: AI needs a lot of feedback to train itself on how to analyze the data; it needs to know what results came from certain starting positions. If we are training it to play chess, we can have the computer play against itself, thousands of games at the same time, each examining every possible outcome of a particular move to see if it works**. A few hours and a trillion games later, the computer has a good sense of what matters.

    In lending, that gets tricky. If we start from new loans to train the computer, we have no real test until lots of defaults happen. If, as mentioned above, only 2.5% of mortgages default in normal situations, it will take a long time to accumulate more observations than there are variables to look at. The machine can’t test itself until there are hundreds or thousands of defaults to compare, even assuming there was not a special case like a financial crisis that skews the numbers. Our only real hope is that the defaults that do happen occur very quickly in the life of the mortgage, first 3-5 years or so, in which case in a decade we will probably have good amounts of data. I don’t know how long it takes the average mortgage default to happen, so it might work, or it might not.

    “But we can look at historical data!” one cries out. Well… maybe. How many years of big data do we have on people, really? During those years, how many bought a house and defaulted on it? If mortgage defaults take on average 10 years out of a 30 year mortgage, we might need a decade of historical big data, with all the relevant observations and variables recorded, to have a decent number of historical points to start with. Maybe more, considering that the 2007-2009 years might not be a great set to include, due to being really abnormal years. Then again, who defines what a normal year looks like?

    Anyway, long story short, if AIs need to test their predictions to get better at them, need lots of observations of the event of interest to do so, and the event is pretty uncommon except when tons happen all at once for other reasons, then it is going to take a really long time to gather enough data to make better decisions.
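
    Some rough arithmetic on that timeline, using assumed loan volumes rather than real figures:

    ```python
    # How long does it take to pile up default observations?
    normal_default_rate = 0.025      # the ~2.5% non-crisis rate mentioned above
    loans_per_year = 10_000          # a hypothetical lender's annual originations
    defaults_wanted = 5_000          # rough count you'd want with thousands of variables

    defaults_per_year = normal_default_rate * loans_per_year
    years_needed = defaults_wanted / defaults_per_year
    print(f"{defaults_per_year:.0f} defaults per year -> about {years_needed:.0f} years "
          f"to observe {defaults_wanted} defaults")
    ```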

    People neither buy houses nor default on mortgages often, so looking at targeted ads (where the event of clicking on the ad to go to the site is rare, but millions are potentially doing so every day) as a model is going to be really misleading. When Amazon starts sending me orders for stuff I want to pay for without me having to go on their site or actually even order it, then I will be more optimistic about AI predicting housing things. When the best they can do is present me a list of things I might like and things that other people who bought this item bought, and I don’t immediately buy, I am pretty confident that AI isn’t going to be taking over the world any time soon.

    ** I am not saying that is necessarily how it works now, but that is one very common way to look at it.
