CALL TO ACTION

A National Campaign

Artificial Intelligence (AI) is no longer the future—it’s the present, rapidly transforming every aspect of our world. In 2024 alone, nearly 700 AI-related bills were introduced across 45 states, signaling a national recognition of AI’s profound impact. Already, 31 states, along with Puerto Rico and the Virgin Islands, have enacted AI-related laws or resolutions, reinforcing the urgency for clear regulations and responsible AI integration. Yet, despite AI’s growing influence in education, many schools still lack clear policies and AI literacy programs, leaving students vulnerable to both risks and missed opportunities. While some states have recently released AI guidance, nearly half have yet to act. That’s why Francesca and LearnSphere AI have launched a National Campaign & Call to Action—pushing for updated school policies that educate students on protecting their image, safeguarding against legal risks, and ensuring AI literacy is a priority in every classroom. Our first campaign targets school districts across all 50 states.

Open Letter to Schools Nationwide

A Call to Action for Middle and High Schools: Update AI Codes of Conduct and HIB Policies.

From: Francesca Mani and LearnSphere AI
Subject: Updating Schools’ Codes of Conduct & HIB to Include AI-Related Misuse

Dear School Superintendent,

As schools across the United States shape the future of education, it is imperative to address the evolving impact of Artificial Intelligence (AI) on students.
AI is no longer a distant concept; it is part of their everyday lives, presenting both incredible opportunities and serious risks.

Why This Matters

It is critical to act proactively because students may not understand the legal and ethical implications of AI misuse. Last year, the FBI released a statement explaining that the creation and distribution of nonconsensual explicit images of children, including AI-generated content, is illegal. With the Take It Down legislation expected to pass in 2025, this will also become a federal crime.

This legislation underscores the importance of educating our students not only on how to protect their images but also on how to avoid unknowingly committing a crime. Alarmingly, the majority of students and adults are unaware of the legal ramifications of AI misuse.

Moreover, scientific research supports concerns about the impact of early exposure to pornography on children. Studies, such as those published in the Journal of Adolescent Health and Pediatrics, have found that early and frequent exposure to explicit content can distort perceptions of healthy relationships, desensitize individuals to harmful behaviors, and increase the likelihood of engaging in risky or exploitative digital actions. With pornography being both easily accessible and legal, children and teenagers may not fully comprehend the severity of creating or distributing nonconsensual digital forgeries.

This lack of understanding opens the door for critical conversations about how such images, even when AI-generated, can compromise the future of any victim professionally, academically, socially, and emotionally. These images often appear real, and many people will not take the time to verify their authenticity but will instead assume they are genuine, perpetuating harm.

It is also essential to educate students that if they find themselves victims of such misuse, they can contact the National Center for Missing & Exploited Children (NCMEC). NCMEC provides a tool called “Take It Down” on their website, which can help victims navigate this process and take appropriate steps to address the situation.

Our responsibility is to focus on prevention through education. By updating school policies and proactively explaining these issues to students, we can empower them to make informed decisions and protect their futures in an increasingly digital world.

There is an urgent need for schools to act:

1. Update AI Policies in Codes of Conduct
Ensure your school’s Code of Conduct and HIB (Harassment, Intimidation, and Bullying) policies explicitly include AI-related misuse, such as:

• AI-generated cyberbullying (deepfakes, harassment, or misinformation).
• Unauthorized use of AI tools to imitate or harm others.

2. Implement AI Education for Students
Prepare students to navigate the world of AI responsibly by:

• Teaching ethical AI use and its societal impacts.
• Highlighting how misuse can lead to legal, academic, and personal consequences.
• Demonstrating how AI can be harnessed for good—fostering creativity, solving problems, and driving innovation.

Next Steps for Schools

Policy Review: Audit your school’s current Code of Conduct and HIB policies to ensure AI misuse is addressed.
Educational Programs: Introduce workshops, guest speakers, or curriculum updates that focus on AI literacy and ethics.
Community Engagement: Partner with organizations specializing in AI to equip educators, parents, and students with resources to understand and navigate the AI landscape.

Example Text for HIB and Code of Conduct Policies

Harassment, Intimidation, and Bullying (HIB) Policy Update: “The use of Artificial Intelligence (AI) to create, distribute, or promote harmful content, such as deepfakes, altered images, or false information intended to harass, intimidate, or bully another individual, is strictly prohibited. This includes, but is not limited to, using AI tools to impersonate others, generate offensive or inappropriate content, or amplify cyberbullying through automated means. Violations will result in disciplinary action consistent with the school’s Code of Conduct.”

This language is offered as guidance, not legal advice, and may require revision. Please consult your district’s administrative and legal counsel before adopting it, and with any questions regarding implementation or enforcement.

Code of Conduct Update: “Misuse of Artificial Intelligence (AI) technology, including the creation of deceptive or harmful content (e.g., fake profiles, manipulated media, or AI-generated harassment), is considered a violation of school policy. Students are expected to use AI responsibly and ethically, aligning with the principles of academic integrity, respect for others, and community safety. Consequences for AI misuse will be enforced and may include suspension, mandatory education on AI ethics, or referral to legal authorities in severe cases.”

The same guidance applies here: consult your district’s administrative and legal counsel before adopting this language.

About Francesca Mani

Francesca Mani, recognized at 14 as the youngest honoree on the TIME100 Most Influential People in AI list, is a renowned advocate for safer AI integration in education. Since 2023, when she became a victim of AI-generated image abuse at her high school, her work has shaped policies and inspired initiatives to empower students and safeguard their futures in the digital age.

About LearnSphere AI

LearnSphere AI is a leading organization dedicated to promoting AI literacy.

Sincerely,
Francesca Mani, Dorota Mani, and Nicolas Gertler
LearnSphere AI

Step-by-Step Guide to Using NCMEC’s “Take It Down” Tool

The “Take It Down” tool by the National Center for Missing & Exploited Children (NCMEC) helps individuals remove or prevent the online sharing of nonconsensual or exploitative images, including AI-generated content. Here’s how to use it effectively:

Step 1: Access the Tool

Visit the official Take It Down website: https://takeitdown.ncmec.org
Read the introductory information to determine if this tool is the right resource for your situation.

Step 2: Understand How It Works

• Anonymous Process: The tool does not require you to upload the image. Instead, it generates a unique “hash” (a digital fingerprint) to identify and block copies online.
• Supported Platforms: Review the website for a list of participating platforms that work with NCMEC, including social media networks and websites.

Step 3: Generate a Digital Hash

Follow the instructions on the Take It Down website to create a hash of the image you want to flag. This irreversible process converts the image into a unique string of characters on your own device, without exposing the actual content.
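Conceptually, the hashing step works like the minimal sketch below. This illustration uses a standard cryptographic hash (SHA-256) purely to show the idea of an irreversible fingerprint; the actual Take It Down service may use a different, image-specific hashing method.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Return an irreversible 'digital fingerprint' of an image.

    Only this hash string is ever shared -- the image itself never
    leaves your device, and the original cannot be reconstructed
    from the hash.
    """
    return hashlib.sha256(image_bytes).hexdigest()

# Two identical copies of an image produce the same fingerprint,
# which is how a platform can recognize a flagged image without
# ever seeing its content.
copy_one = fingerprint(b"example image data")
copy_two = fingerprint(b"example image data")
assert copy_one == copy_two
```

Because the same input always yields the same hash, platforms can compare fingerprints of uploaded files against the submitted hash list and block matches without ever handling the original image.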

Step 4: Submit the Hash

On the Take It Down website, submit the generated hash.
Participating platforms can then use the hash to detect matching images and flag them for removal.

Step 5: Monitor and Follow Up

• NCMEC will notify participating platforms, prompting them to review and remove the flagged content.
• Important: The process is anonymous, so NCMEC does not provide personal updates on removals. You may need to monitor the platforms yourself.

Step 6: Seek Additional Support if Needed

If the image has been widely shared or appears on non-participating platforms:

Contact NCMEC: Reach out via their hotline or email for further assistance.
File a Report: Use the reporting system of the offending platform to request content removal.
Legal Support: If needed, consider consulting a lawyer or reporting to law enforcement.

Step 7: Educate Yourself on Preventative Measures

• Be cautious about sharing sensitive images online.
• Regularly review and update privacy settings on social media and digital platforms.
• Report suspicious activity promptly to platforms and NCMEC.

The Take It Down tool is an invaluable resource for combating online exploitation and protecting personal safety. By following these steps, individuals can take proactive measures to safeguard their images and reputations.