Policy With Pablo: Artificial Intelligence Accountability Is Finally Actualized!

Any student who has ever plugged an essay prompt into ChatGPT and received a dissertation on the level of Shakespeare is undoubtedly familiar with the immense power of artificial intelligence. Between the new essay-checking guidelines and phone policies at CHS, it cannot be denied that having a tool like that at students’ fingertips harbors great potential. But that potential can cross, and has crossed, the line from harmless creative writing into malice many times, with large defense companies coordinating cyberattacks and refining weaponry through algorithms developed in part with artificial intelligence tools. Between art and the art of war, California legislators realized that a line had to be drawn, which is why the state legislature passed Senate Bill 1047 (SB 1047 for short) at the end of August this year. Dubbed the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” the bill would make California the first state to place restrictions on the development of powerful AI models, a first step toward moderating the potency of this tool across the entire country.
But if the act is so monumental, what does it even do? Simply put, it targets companies whose AI models cost more than $100 million to train, forcing them to create and follow a testing procedure for each model, and for any model derived from it, that assesses its capacity to cause “critical harms”: cyberattacks on critical infrastructure, the development of chemical, biological, radiological, or nuclear weapons, public damage worth at least $500 million, and comparable harms carried out with limited human oversight. The act also creates protections for whistleblowers who expose dangerous company details to the government; specifically, company administrators and AI model developers cannot stop any employee from disclosing information about the harms of the AI model and cannot retaliate against whistleblowers by any means, including firing them or docking their pay. If any of these rules are broken, the state Attorney General is allowed to sue the offending companies.
The measures covered by this bill are admittedly extreme, but State Senator Scott Wiener, who wrote and sponsored it, says he intended to prepare for dire circumstances before they ever arrive.
“We have a history with technology of waiting for harm to happen, and then wringing our hands,” Wiener said. “Let’s not wait for something bad to happen. Let’s just get out ahead of it.”
Speculative as the bill may be, it is already sparking controversy among tech employees and companies. Wiener garnered 113 signatures from leading tech company employees on a petition supporting SB 1047, while still more support has come from past AI company whistleblowers, the notable AI researchers Geoffrey Hinton and Yoshua Bengio, and even tech CEO Elon Musk. At the same time, most large Silicon Valley tech companies, including OpenAI, Google DeepMind, and Meta, have publicly opposed the bill, along with small tech startups, venture capitalists, San Francisco Mayor London Breed, San Francisco Congressperson Nancy Pelosi, and even Ro Khanna, Anna Eshoo, and Zoe Lofgren, the Congresspeople who represent Silicon Valley itself. Ultimately, it falls to Governor Gavin Newsom to decide whether to sign this piece of legislation and hand a victory to the proponents and their belief that AI is bound to become malicious if left unmonitored, but his signature would not make the concerns about stifling technological innovation any less important.
“This bill should not lead to a reduction in the use of AI by companies,” said CHS junior Aabhisaar Shrivastav. “This bill is only regulating the developers of large frontier AI models, and won’t affect companies which only plan to use AI as a tool. However, this bill could potentially lead to a decrease in AI development, which in turn could slow down the use of AI by companies.”
SB 1047 has undeniably sent a message that has shaken Silicon Valley, the tech capital of the world, and left an impression on the United States at large. Whether that impression is a step in the right direction remains to be seen, but we can at least hope that the worst thing a CHS student ever does with ChatGPT is cheat on an essay, not develop the weapons of mass destruction we might have seen from tech conglomerates in a future without SB 1047.
About the Contributor
Pablo Guevara, Assistant Opinions Editor
Pablo Guevara is a junior at CHS and Assistant Opinions Editor for the Wolfpacket. He cares strongly about personal advocacy and civic competency in his everyday life, which is exactly why he’s drawn to sharing even his most controversial ideas in the Wolfpacket. Outside of the newspaper, he continues this interest through Politilingo, a politically informative Instagram page that he runs, as well as through his positions on the Claremont City Teen Committee and the TurnUp Activism team, his connections with iCivics and the Congressional Hispanic Caucus Institute, and his captaincy of the school’s Speech and Debate team. He’s a sucker for old rock music and will visit Six Flags at the first chance he gets, but for now he’s content to help the Wolfpacket be the best student-run group on campus.