Some of the dangers of relying on increasingly complex algorithms are beginning to be recognized. IT leaders need to be able, and willing, to communicate those dangers before they lead to disaster.
Companies have grown increasingly trustful of algorithms. Many now exist and profit solely on the strength of proprietary algorithms: investment firms use their own algorithms to trade stocks, government agencies use algorithms for everything from housing to criminal sentencing, and numerous companies rely on predictive algorithms to forecast product sales and identify potential hackers.
SEE: Artificial Intelligence Ethics Policy (TechRepublic Premium).
Real-estate company Zillow offers a recent example of an algorithm gone wrong, through its Zillow Offers business. Consumers are most familiar with Zillow's "Zestimate," an algorithm-driven valuation of a home's worth. Zillow Offers was an algorithmic spin on the traditional idea of flipping houses: buying homes at a low price and reselling them at a profit.
The idea was simple and elegant. Using Zillow's vast database of real-estate data, an algorithm would find homes to buy, searching for houses that offered a predictable, low-risk return. Zillow's technology would automate many of the steps involved in completing the transaction, and the company would earn a small profit on each flip along with predictable transactional fees.
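To make the idea concrete, here is a minimal sketch of how such a candidate filter might work. This is a hypothetical illustration, not Zillow's actual model; the `Home` fields, thresholds, and scoring logic are all assumptions for the example.

```python
# Hypothetical sketch (NOT Zillow's actual algorithm): filter flip
# candidates by expected return and by price-prediction uncertainty.
from dataclasses import dataclass

@dataclass
class Home:
    address: str
    purchase_price: float     # what the algorithm would offer
    predicted_resale: float   # model's estimated resale value
    price_std_dev: float      # uncertainty of the resale estimate

def flip_candidates(homes, min_return=0.05, max_risk=0.10):
    """Keep homes offering a predictable, low-risk return.

    min_return: required margin over purchase price (e.g. 5%)
    max_risk:   maximum acceptable uncertainty relative to the estimate
    """
    picks = []
    for h in homes:
        expected_return = (h.predicted_resale - h.purchase_price) / h.purchase_price
        relative_risk = h.price_std_dev / h.predicted_resale
        if expected_return >= min_return and relative_risk <= max_risk:
            picks.append(h)
    return picks

homes = [
    Home("12 Elm St", 300_000, 330_000, 15_000),  # 10% return, low risk: keep
    Home("9 Oak Ave", 400_000, 408_000, 10_000),  # only 2% return: reject
    Home("5 Pine Rd", 250_000, 280_000, 60_000),  # too uncertain: reject
]
print([h.address for h in flip_candidates(homes)])  # ['12 Elm St']
```

The key point is that such a filter is only as good as `predicted_resale` and `price_std_dev`; if the model misjudges either, the "low-risk" portfolio isn't.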
The idea was so appealing that Zillow CEO Rich Barton suggested Zillow Offers could generate $20 billion in revenue over the next three to five years.
When algorithms go sour
If you've been following the business media, chances are you've heard how this played out. Zillow has shut down Zillow Offers, is winding down its remaining portfolio of homes, and is exiting the business. Many factors led to the closure, including unexpected difficulty finding materials and contractors to repair houses before they could be resold, as well as the algorithm's inability to accurately predict house prices.
Human vagaries also contributed to Zillow Offers' demise. An algorithm can't easily predict that buyers might prefer an open-plan kitchen over an enclosed one in otherwise identical homes. Zillow leaders tried to correct for algorithmic mistakes by applying the digital equivalent of a "finger on the scale," adding or subtracting percentages from the algorithm's estimates in the hope of correcting its missteps.
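A "finger on the scale" of this kind can be sketched in a few lines. This is a hypothetical illustration of the general technique described above, not Zillow's actual code; the function name and percentages are invented for the example.

```python
# Hypothetical "finger on the scale": nudging a model's estimate
# up or down by a fixed percentage chosen by a human.
def adjusted_estimate(model_estimate, manual_adjustment_pct):
    """Apply a manual override, e.g. +7% when the model seems to lowball."""
    return model_estimate * (1 + manual_adjustment_pct / 100)

print(adjusted_estimate(350_000, 7))   # roughly 374500.0
print(adjusted_estimate(350_000, -3))  # roughly 339500.0
```

The danger is that such overrides compound the original uncertainty: a blanket percentage applied across a market corrects nothing about *why* the model is wrong, and can push every estimate further off in markets where the model was already accurate.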
SEE: Metaverse cheatsheet (free PDF) (TechRepublic).
Competition can also create conflict. According to a recent WSJ article, staff who claimed the algorithm was underestimating home values were not taken seriously. An algorithm that worked in one market was assumed to be a good fit for others and was quickly applied to additional markets. All of this coincided with some of the most bizarre real estate, supply chain, and employment markets in nearly a century, leaving Zillow holding a money-losing portfolio of homes.
Adding a little sanity to algorithms
The wonders of algorithms, machine learning, and artificial intelligence are covered extensively. These tools can identify disease, optimize complex systems, and even outperform humans at complex games. But they are not perfect, and they often struggle with tasks and inferences that humans make so naturally they seem trivial.
You wouldn't trust a single employee to handle multi-million-dollar transactions without regular checks and balances, monitoring, evaluations, and controls. The fact that a machine is performing those transactions doesn't change that: similar oversight, controls, and periodic reviews should be in place.
SEE: Stop ghosting job and client candidates: It can hurt your business in the long run (TechRepublic)
Unlike humans, algorithms won't have a bad day or try to steal from you, but they will still act on imperfect information and face other shortcomings. Add an algorithm to uncertain economic and sociological conditions, and the monitoring requirements become even more acute.
Be the person in your organization who educates peers about the capabilities and limitations of algorithms. Machines can do seemingly amazing things, such as spotting cancerous tumors in an MRI scan or identifying objects in a photograph, but a machine can identify tumors in images only because it has been trained on enough of them. When applied to dynamic markets, algorithms face many of the same problems humans do, best captured by the warning in every investment prospectus: "past performance doesn't necessarily indicate future results." Recognize their potential, but communicate their limitations.