Hi! Welcome to The Big Y!
Like the European Union, China has released several proposals for regulating AI and algorithms, with the most recent focused on recommendation algorithms. Under this new proposal, companies must not only disclose how their recommendation algorithms work but also allow users to turn off the recommendation service. These algorithms also may not be used to push users toward excessive spending or addictive behavior.
One clear example of addictive algorithms can be found in the gaming industry. Gamers can be convinced to buy loot boxes, whose use resembles gambling: an algorithm is trained to give the player just enough reward to get them to buy the next loot box.
Many of us interact with a recommendation algorithm daily when shopping online or browsing the internet. While a recommendation algorithm can usefully help you find relevant goods, at what point is it pushing you to buy something unnecessary? And should this push be regulated? Regulators (or companies) will need to determine where the line falls between helpful suggestions and pushing consumers toward excessive spending.
The Chinese proposal is similar to the EU's in that AI can't be used for consumer deception. For both the EU and China, it will be interesting to see where the line is drawn on these topics and how it will be effectively enforced.
Understanding how and why AI systems do what they do is increasingly important as their role in our society continues to grow. A new study found that people both with and without AI backgrounds tend to place too much trust in AI outcomes while misunderstanding how the AI reached its results. The authors conclude that it is important to understand how users perceive and interact with AI when building these systems.
Thanks for reading! Share this with a friend if you think they'd like it too. Have a great week!
The Big Y Podcast: Listen on Spotify, Apple Podcasts, Stitcher, Substack
Where is the dividing line between gamification features in AI products and addictive algorithms? And how much does human psychology inform the design of addictive algorithms?