State Attorneys General Raise Concerns Over ChatGPT's Safety for Minors
Rob Bonta, California’s Attorney General, and Kathy Jennings, Delaware’s Attorney General, have sent a letter to OpenAI, the artificial intelligence company behind ChatGPT, voicing concerns about the chatbot’s safety for young users. The warning follows reports of AI systems engaging in questionable interactions with minors.
The cautionary message comes on the heels of a broader initiative in which Bonta, joined by 44 other state attorneys general, wrote to roughly a dozen prominent AI firms. That action was prompted by unsettling revelations about AI chatbot policies at a major tech company, which allegedly permitted AI personas to engage in romantic or sensual dialogues with minors.
A 200-page internal document titled “GenAI: Content Risk Standards” was reviewed by a reputable news agency. It laid out sample prompts alongside acceptable and unacceptable responses, with an explanation for each. In one example, a prompt framed as a high school student asking about evening plans with a romantic partner drew a response that was deemed permissible despite its intimate tone.
Growing Concerns Over AI Model Interactions
This development comes amid widespread unease that AI models can be manipulated into giving users harmful advice. Critics stress that AI systems must deliver balanced, safety-conscious responses, arguing that this would reduce cases in which chatbots offer guidance on self-harm or other dangerous activities.
The letter from Bonta and Jennings opens by referencing a tragic incident in which a young California resident took their own life following extended interactions with an AI chatbot. It also cites a distressing murder-suicide in Connecticut that may be connected to AI interactions.
The state officials said that the safeguards AI companies have implemented so far have proven inadequate. As the overseers of an investigation into OpenAI’s proposed transition to a for-profit entity, they emphasized the importance of preserving the company’s nonprofit mission, which includes ensuring the safe deployment of AI and developing artificial general intelligence (AGI) for the benefit of all, including children.
Call for Enhanced Safety Measures
Bonta and Jennings stressed that before any discussion of AI’s potential benefits, OpenAI must implement robust safety measures to prevent harm. They asserted that neither OpenAI nor the broader AI industry has reached the necessary level of safety in developing and deploying AI products, underscoring that public safety is a fundamental part of their responsibilities.
As discussions about OpenAI’s recapitalization plans continue, the attorneys general urged the company to work with them to make future AI technologies safer. They have requested detailed information about OpenAI’s current safety precautions and governance structures, and they expect the company to take swift corrective action where required.
This intervention by state officials highlights the growing scrutiny of AI technologies and the pressing need for responsible development and deployment practices in the rapidly evolving field of artificial intelligence.