Hi! 👋 Welcome to The Big Y!
There are so many predictions for 2023 floating around right now that I thought I would go in a slightly different direction. Looking forward, below are five key themes I anticipate will characterize 2023. (Sorry, this one is going a bit past my 10-sentence limit 😬)
The Year of LLM Applications
In 2023 tools based on generative LLMs will break into the everyday life of “normal people” (those living outside the tech bubble). I think we’ll see very practical applications like improvements in help manuals or big improvements in video game dialogues with NPCs. The tools that grow this year will be focused on making humans better at doing their job.
Running Out of Data
Humans produce a lot of data every day, and AI systems are trained on it. But as the models we’re building grow in size, we may be running out of real-world data to train on. Combine this with the data feedback loop we’re entering: when we scrape the internet now, AI-generated data is all over the place. This challenge will only get harder to solve, since we can’t always distinguish generated data from “real” data.
AI Steals and Delivers
I think we’ll see a proliferation of lawsuits centered around the ownership and outputs of generative models. GitHub is already being sued over the ownership of the training data used to build Copilot. A precedent needs to be set on training data, and I think it will be difficult to find a one-size-fits-all solution.
Increase in misinformation
Going forward we’ll be seeing two types of misinformation: first, the traditional “fake news” (deepfakes, disinformation), and second, bad output from generative models. For the first, a lot of new tools are getting really good at producing convincing fakes, while far less effort is going into tools that detect them. For the second, as tools like ChatGPT get better, I would ask: how do you know you can trust the output? If it looks right to you but you don’t know better, it’s hard to tell when GPT is giving you the wrong answer. Trustworthy AI will be the new thing.
Big companies own AI market share
Money continues to be tight in the tech economy, and cash isn’t flowing as freely as it was until recently. I think we’ll continue to see big breakthroughs coming from the big companies. Big Tech will be publishing a flurry of papers and releasing products & services so they don’t get left in the dust. But if an AI application startup can raise money, there’s a blue ocean waiting for them: new tools to improve human creativity and productivity. May a thousand AI flowers bloom.
And just to add a couple of controversial themes, here are two areas where I think we’ll continue to see slow progress and limited applicability.
Driverless cars are out: The marketing for self-driving cars has been great for the past couple of years, but progress is slow. The real world is complicated, and I don’t see driverless cars being allowed much outside their very curated test areas until much more progress is made.
Robots are underdelivering: Nobody needs a humanoid. In reality, we’re seeing that the real targets of automation are information workers, where human analysis and decision making are being replaced by AI. Robots have a huge barrier in their physical limitations. They’re expensive, tangible goods, whereas tools like ChatGPT are just an interface and some typing: much more ethereal in nature and easier to deploy and scale.
What do you think? Let me know if you disagree.
The hype continues with OpenAI. Rumors are they are selling existing shares at a price that values the company at $29B. At the same time, there’s also talk of incorporating ChatGPT into various Microsoft tools, including Bing and their Office suite.
Thanks for reading! Share this with a friend if you think they'd like it too. Have a great week! 😁
🎙 The Big Y Podcast: Listen on Spotify, Apple Podcasts, Stitcher, Substack