Hi! Welcome to The Big Y!
This is your yearly reminder not to anthropomorphize AI.
Recently, an article was published about a Google engineer who, through his interactions with the chatbot, came to believe that Google's chatbot system, LaMDA, is sentient. LaMDA (Language Model for Dialogue Applications) is based on an LLM (large language model), which is built on data gathered from all corners of the internet.
Because LLMs are trained on human-generated data, like our writing, our discussions, our art, and more, it becomes an easy trap to humanize an AI that generates human-looking output. We need to remember that language-based AIs are mechanical, turning inputs into outputs that depend entirely on previously created human content.
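To make "mechanical" concrete, here is a minimal sketch in Python of the idea at its crudest: a model that simply counts which words followed which in human-written text, then samples from those counts. The tiny corpus below is a toy stand-in of my own invention, and real LLMs use vastly larger neural networks, but the principle is the same: the output is a statistical recombination of the human writing that went in.

```python
import random
from collections import defaultdict

# A toy stand-in for the human-written text a language model is trained on.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": record which word followed which in the human-generated data.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start, length=8):
    """Mechanically extend a prompt by sampling words that humans
    actually wrote after the current word in the training data."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # no human text ever continued from this word
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug" -- fluent, not sentient
```

The output can look like natural language, but nothing here understands anything; it is arithmetic over other people's words.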
Last week I touched on potential issues regarding IP, ownership, and the rights of individuals whose work might be used by an AI without authorization. We can't argue that AI outputs raise authorization issues because they reuse human work and, at the same time, believe that the AI is a sentient creator. We need to remember that the outputs of AI aren't unique creations, just an amalgamation of human-generated content.
At the end of the day, AIs are algorithmic derivatives of human creativity.
Last week, the majority of Axon's AI Ethics Board resigned over the company's announcement that it would develop Taser-equipped drones to stop school shootings. Ethics boards are often made up of people with strong pedigrees who lend their names and reputations to the companies they advise. Parading strong names on a board can give a company legitimacy in the business space, but do those boards have any power to control outcomes?
Thanks for reading! Share this with a friend if you think they'd like it too. Have a great week!
The Big Y Podcast: Listen on Spotify, Apple Podcasts, Stitcher, Substack