Hi! 👋 Welcome to The Big Y!
There are currently no standards for trustworthy AI, meaning there are no formal methods to test and validate AI and ML outcomes. Trustworthy computing is an established research area that has set high benchmark standards within the software engineering community, but those established methods cannot be directly applied to AI. This paper dives into the challenges and potential solutions for trustworthy AI.
AI and ML methods represent a shift from deterministic code toward probabilistic behavior, which means a new system of verification is needed. We are moving from Boolean testing objectives (true/false) to statistical methods, because ML models do not always have a simple true/false answer.
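To make that shift concrete, here is a minimal sketch (my own illustration, not from the paper): a deterministic unit test asserts one exact answer, while a statistical test for an ML model asserts that measured accuracy clears a bar with a confidence bound. The function names, the toy model, and the 0.90 threshold are all made up for the example.

```python
from statistics import NormalDist

# Deterministic code: a test asserts one exact answer.
def test_adder():
    assert 2 + 2 == 4  # Boolean: passes or fails outright

# ML model: the test becomes statistical. We ask whether measured
# accuracy on a sample clears a threshold with statistical confidence.
def accuracy_lower_bound(correct, total, confidence=0.95):
    """One-sided lower confidence bound on accuracy (normal approximation)."""
    p_hat = correct / total
    z = NormalDist().inv_cdf(confidence)
    return p_hat - z * (p_hat * (1 - p_hat) / total) ** 0.5

def test_model_accuracy(model, test_set, threshold=0.90):
    # 'model' and 'test_set' are placeholders for whatever you deploy.
    correct = sum(model(x) == y for x, y in test_set)
    assert accuracy_lower_bound(correct, len(test_set)) >= threshold

# Toy demo: a "model" that is right 95% of the time on 1,000 examples.
toy_test_set = [(i, i % 2) for i in range(1000)]
toy_model = lambda x: x % 2 if x % 20 else 1 - x % 2  # wrong on 5% of inputs
test_model_accuracy(toy_model, toy_test_set)  # passes: bound ~0.939 >= 0.90
```

Notice the trade-off: the threshold, sample size, and confidence level are all choices someone has to standardize, which is exactly the gap the paper is pointing at.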
Data also plays a significant role in every phase of model development and operation. The fundamentals behind testing and verification have changed, and we need to address a range of trustworthy AI standards, from fairness to robustness.
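Since "fairness" is one of those standards, here is one hedged example of what a fairness check can look like in code: demographic parity, one common metric among many. The names and the toy data are illustrative, not taken from any standard.

```python
def demographic_parity_gap(predictions, groups):
    """Gap in positive-prediction rates between two groups (illustrative).

    predictions: 0/1 model outputs; groups: parallel group labels.
    A trustworthy-AI standard would fix the metric and an acceptable gap.
    """
    def rate(g):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(members) / len(members)
    return abs(rate("A") - rate("B"))

# Toy data: group A gets positive predictions 3x as often as group B.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5 -> a large gap
```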
Right now, companies and organizations have started building their own standards, but the AI community needs cohesive standards that AI-based systems can be held to. Software developers are not taught trustworthy AI standards in school, so research and teaching in this area are extremely important as AI becomes more embedded in our everyday lives (do I have to mention Tesla Autopilot?).
Killer robots are already in action. An automated system shot a human target while being operated remotely from thousands of miles away. This is likely just the beginning of AI-driven weapons and the killings they enable.
Thanks for reading! Share this with a friend if you think they'd like it too. Have a great week! 😁
🎙 The Big Y Podcast: Listen on Spotify, Apple Podcasts, Stitcher, Substack
Trust in autonomous vehicles has to be earned... and verified. The make-or-break question is: would you trust a Tesla with your children riding in it?