Hi! 👋 Welcome to The Big Y!
Two Google DeepMind researchers released a preprint on AI agents, drawn from a forthcoming book they are writing about AI. The paper's main thesis is that the current paradigm, built on human-generated data, is reaching its limits: the AI agents being built today cannot think beyond the existing human knowledge they are trained on. I've mentioned the limits of available human data before, and the current workaround is for companies to lean heavily on synthetic data.
Just as we've seen smaller and smaller performance gains with each new LLM launch, the idea is that AI capabilities will plateau if they depend solely on pre-existing human data.
The solution? The researchers propose moving to an “era of experience,” where agents learn from their own interactions with the physical world rather than relying solely on existing human data. By generating and learning from hands-on experience, these agents should, in theory, be able to grow their capabilities beyond what static data allows.
Learning through experience should enable agents to adapt, optimize, and innovate, mirroring more closely how humans learn and evolve in our own environments. AI agents are the new, hot obsession within the AI ecosystem, so it will be interesting to see if they can live up to the potential this paper envisions.
The Tidbit: Apparently there is a new cybersecurity threat posed by GenAI… “slopsquatting”. Slopsquatting occurs when a generative AI coding model hallucinates a non-existent software package and recommends it in generated code. Adversarial parties can then exploit this by publishing a malicious package under that same or a similar name, so that developers who trust the AI's suggestion install the attacker's code.
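As a rough illustration of one possible defense (the package names and allowlist below are entirely hypothetical, not from the article): before installing an AI-suggested package, you could flag names that nearly match, but don't exactly equal, dependencies you already trust — a classic squatting red flag.

```python
import difflib

# Hypothetical allowlist: packages your project actually depends on.
KNOWN_PACKAGES = {"requests", "numpy", "pandas", "flask"}

def check_suggestion(name: str) -> str:
    """Classify an AI-suggested package name before installing it."""
    if name in KNOWN_PACKAGES:
        return "known"
    # A near-miss of a trusted name is suspicious: it may be a
    # hallucinated (and possibly attacker-registered) variant.
    close = difflib.get_close_matches(name, KNOWN_PACKAGES, n=1, cutoff=0.8)
    if close:
        return f"suspicious (did you mean '{close[0]}'?)"
    # Unknown names aren't necessarily malicious, but deserve a
    # manual check against the package index before installing.
    return "unknown"

print(check_suggestion("requests"))     # trusted dependency
print(check_suggestion("requets"))      # near-miss of "requests"
print(check_suggestion("flask-authx"))  # not close to anything trusted
```

This is only a sketch — in practice you'd also want to verify that a name actually exists on the package index and check its publish history, but even a simple near-match check catches the most obvious squats.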
Enjoyed this? Forward to a friend who’d like it too!
Thanks for reading, have a great week!