AI Can Become Better Than Humans: Edward Snowden
Former NSA contractor Edward Snowden recently spoke at Consensus 2023 about the latest developments in artificial intelligence (AI) and its potential both for human flourishing and for greater surveillance. While acknowledging the risks of the technology, Snowden believes AI has the potential to become better than humans. This article covers his views on AI's potential, along with the risks and challenges surrounding its development.
According to Snowden, the primary mistake AI developers are making is trying to teach their machines to learn like humans. He believes AI should be "better than us" and that we should teach it accordingly. Human learning, he argues, is constrained in ways that AI need not be, and AI has the potential to surpass those constraints.
Crippling Training Data
Snowden also criticized companies like OpenAI and Stability AI, the maker of Stable Diffusion, for a lack of transparency around their training data. He called OpenAI's choice of name a "cruel joke," given that the company does not provide open access to its training data, and he accused Stability AI of deliberately crippling Stable Diffusion's training set between versions 1.5 and 2.0 in response to moral panics and various accusations.
Snowden argued that liability for AI should be divided between the people who create a model and the people who use it. He compared this to the gun industry in the US, where manufacturers are treated separately from those who use their products. Snowden believes legislation is necessary to ensure that the creators of AI models are held responsible for their actions.
Snowden also expressed concern about the increasing surveillance and privacy violations perpetrated by corporations and governments through AI. He argued that these entities hold a distinct advantage in the AI field thanks to the vast stores of data about citizen behavior sitting in their data centers. As a result, Snowden believes people are becoming more "legible" and "malleable," which poses a significant threat to privacy and individual rights.
Finally, Snowden argued that AI models must be open to prevent a kind of “software communism.” He believes that corporations and governments must be held accountable for their use of AI, and that the best way to do this is to ensure that the models they use are open and transparent.