Black Hat USA 2023: Five Lessons in Artificial Intelligence
“We’ve put a lot of trust in artificial intelligence, fast — but it’s time we start to exercise some skepticism,” said Ram Shankar Siva Kumar, a data analyst working on machine learning and security at Microsoft, at a Black Hat USA 2023 session on cybersecurity and artificial intelligence. Kumar’s address outlined the dangers of placing premature trust in automated capabilities. He divides his time between Microsoft and the Berkman Klein Center for Internet and Society at Harvard University.
Kumar warned that people should be wary of trusting AI systems, since doing so can lead them to believe the system will always act in their best interests. Rather than treating AI as a substitute for reliable data, he suggested using it as one tool alongside others. We need to “interrogate the validity of AI’s answers and cross-check” them, said Kumar.
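What might that cross-checking look like in practice? The sketch below is a minimal, hypothetical Python illustration, not something from Kumar’s talk: it accepts an answer only when a majority of independent sources agree, and escalates to a human reviewer otherwise. The function and source names are illustrative assumptions.

```python
from collections import Counter
from typing import Dict, Optional

def cross_check(answers: Dict[str, str]) -> Optional[str]:
    """Accept an answer only when independent sources agree.

    `answers` maps a source name (an AI model, a reference database,
    a human expert) to the answer that source gave.
    """
    counts = Counter(answers.values())
    answer, votes = counts.most_common(1)[0]
    # Require a strict majority of sources before trusting the answer.
    if votes > len(answers) / 2:
        return answer
    return None  # No consensus: escalate to a human reviewer.

# Hypothetical usage: one source disagrees with the other two.
sources = {"model_a": "42", "model_b": "42", "reference_db": "41"}
print(cross_check(sources))  # "42" -- but the dissent may still warrant a look
```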
1. AI systems are susceptible to manipulated data.
Cyberattackers can use AI to recreate and alter the data points that models rely on, exploiting users’ trust. If IT leaders decide to use AI-generated data, a hacker may already have planted false information in it, counting on users to act on whatever the system returns.
A recent article in The Economist referred to this strategy as “data poisoning,” citing Alina Oprea, an associate professor of computer science at Northeastern University. AI systems that rely on data found on the open web leave themselves open to exactly this kind of attack.
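To make the idea concrete, here is a minimal, hypothetical sketch of one simple form of data poisoning, label flipping, using scikit-learn on synthetic data. It is an illustration of the concept rather than any specific attack described by Oprea or The Economist: corrupting even a modest fraction of training labels measurably degrades a classifier.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for data scraped from the open web.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, fraction, rng):
    """Flip the labels of a random subset -- a simple poisoning attack."""
    poisoned = labels.copy()
    n_flip = int(fraction * len(poisoned))
    idx = rng.choice(len(poisoned), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

for fraction in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, poison_labels(y_train, fraction, rng))
    print(f"{fraction:.0%} poisoned -> test accuracy "
          f"{model.score(X_test, y_test):.3f}")
```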
2. AI standards are written vaguely.
According to Kumar’s research, software engineers find AI standards unclear and imprecise in their terminology, especially when it comes to discussing ethical implications. When technical standards are “so vague,” he cautioned, more damaging “suitcase words,” terms that pack many meanings into one, get used to describe these systems.
While many firms have implemented rules prohibiting AI models from responding to unsafe or unethical requests, Kumar pointed out that additional training is still necessary. Privacy and sourcing guidelines should likewise be taken into account. For any dataset, it is worth asking: Where does this data originate? Who can legally make use of it? Has anyone checked it yet?
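Those three questions can be turned into a concrete checklist. The sketch below is a hypothetical Python provenance record, with illustrative class and field names rather than anything from a real governance framework, that refuses to clear a dataset until its origin, license, and review status are all accounted for.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetProvenance:
    """Provenance record covering the three sourcing questions."""
    name: str
    origin: str                  # Where does this data originate?
    license_holders: List[str]   # Who can legally make use of it?
    vetted_by: List[str] = field(default_factory=list)  # Has anyone checked it?

    def is_cleared_for(self, team: str) -> bool:
        """Require a known origin, a license covering the team, and a review."""
        return bool(self.origin) and team in self.license_holders and bool(self.vetted_by)

# Hypothetical usage: a scraped dataset that has not yet been reviewed.
scraped = DatasetProvenance(
    name="web-crawl-2023",
    origin="open web crawl",
    license_holders=["research"],
)
print(scraped.is_cleared_for("research"))  # False -- nobody has vetted it yet
scraped.vetted_by.append("data-governance team")
print(scraped.is_cleared_for("research"))  # True
```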
3. Fast data collection has its tradeoffs.
While AI makes it simple to learn the fundamentals of any given subject or dataset, the rapid data collection and classification it relies on come with downsides. With efficiency comes the risk of losing subtlety and in-depth understanding. As Kumar put it, “AI has a lot of brainpower, but it still has a ways to go before it can do comprehensive research or manage different points of view.”
4. Competing interests in AI make objective results hard to come by.
Technology may be neutral, but the powers that wield it are not. “This tool will be handled by private companies and used for their interests,” Kumar noted, and no one is solely responsible for managing the validity of these AI answers. For now, the IT corporations are the ones making their voices heard; the crucial next stage is weighing the opinions of academics, data scientists, and other experts.
5. An increase in AI awareness must come from IT leaders.
IT leaders must raise awareness throughout the company if artificial intelligence is to become a reliable and well-rounded tool. To address the risks and increase staff knowledge, Kumar stressed the need for a mental shift at the very top of leadership.
AI Is Great, But It Comes With Inherent Issues of Trust
If AI systems are to be seen as a means of speeding up the future, Kumar said, “we need to double-click on that word, trust.” An underlying theme runs through these five lessons: trust, and perhaps our tendency to trust too readily. “Generative AI answers can be gamed,” Kumar cautioned, even though the system is presumed to be reasonable, truthful, and morally sound.