AI Law & Safety Guide: Regulations, Copyright, and Protecting Your Privacy
If you've been using AI tools, chances are you've wondered at some point:
“Could my posts or artwork be used to train AI without my knowledge?”
“Can I claim copyright over something I created with AI?”
“Is my personal data being exposed?”
AI is advancing faster than the laws designed to govern it — but as of 2026, AI-specific legislation is finally taking effect around the world.
Here's a practical breakdown of how to use AI responsibly, without running into legal trouble.
Contents
- How Are Countries Regulating AI?
- Is AI Using My Content and Personal Data Without My Permission?
- How to Protect Your Content
- Can I Own the Copyright to Something I Made with AI?
- Case 1: No copyright for purely AI-generated work
- Case 2: Hundreds of prompts still aren't enough
- Case 3: Human-written text + curatorial choices can qualify
- Case 4: Extensive inpainting can tip the scales
- So, What Should Creators Do?
- 1. Document your creative process
- 2. Think in terms of compilations
- 3. Be transparent about AI use
- AI Safety Checklist
- Sources & Further Reading
- AI Regulations by Country
- Copyright Cases Involving AI
How Are Countries Regulating AI?
AI regulation varies significantly by region. South Korea and the EU lean toward tighter controls, Japan takes a more permissive approach, and the U.S. remains a patchwork of state-level laws.
| Country | Key Legislation | What It Means |
|---|---|---|
| South Korea | AI Basic Act | Aims to balance innovation with safety. High-risk AI systems must have risk management plans and explainability requirements. Violations can result in fines of up to ₩30 million. |
| EU | EU AI Act | The world's most comprehensive AI regulation. Uses a tiered risk-based framework. Violations can carry fines of up to 7% of global annual revenue. Requires AI providers to honor copyright holders' opt-outs (rights reservations) for training data. |
| United States | Executive Orders + State Laws | No unified federal AI law yet, but states like California and Colorado have passed laws guaranteeing consumer opt-out rights for AI-driven automated decision-making. |
| Japan | AI Promotion Act (effective Sep 2025) + Copyright Act Article 30-4 | Japan's first AI-specific law. Takes an “innovation-first” approach rather than strict EU-style regulation — establishing an AI Strategy Headquarters and granting authority to publish the names of companies that violate human rights. Data mining for AI training remains broadly permitted, except where it unjustly harms copyright holders' interests. |
| United Kingdom | TDM Exception (non-commercial only, for now) | Currently allows AI training data use only for non-commercial research. A model permitting training unless copyright holders opt out is under consideration. |
Is AI Using My Content and Personal Data Without My Permission?
Content you post online — text, photos, artwork — can potentially be used to train AI models without your knowledge.
This is especially important when signing up for AI services: always check your privacy and data-sharing settings, since many platforms enable data sharing by default.
What Is an AI Crawler?
AI companies use automated programs called crawlers to collect content from the web.
Tools like OpenAI's GPTBot and Google's Google-Extended generally respect site-level instructions (robots.txt), but some crawlers ignore these settings and scrape content without permission.
How to Protect Your Content
- **Personal websites and blogs**: Add directives to your robots.txt file to block AI crawlers. Keep in mind this may also affect your visibility in regular search engine results.
- **Use platform-level AI blocking settings**: Some platforms offer options to restrict AI data collection, but policies vary widely and are subject to change.
- **Check opt-out settings for AI services**: Most platforms bury this option deep in settings menus. After signing up for any AI service, take a moment to review your data preferences.
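As an illustration, a robots.txt that blocks several widely used AI training crawlers could look like the sketch below. Crawler names change over time, so treat this list as a starting point and check each vendor's documentation for its current user-agent string:

```txt
# Block common AI training crawlers (illustrative, not exhaustive)
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# All other crawlers (including regular search bots) remain unaffected
User-agent: *
Disallow:
```

Note that blocking Google-Extended does not affect Googlebot, so normal search indexing continues.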
Can I Own the Copyright to Something I Made with AI?
This is the question most creators care about. The short answer: AI-only outputs don't qualify, but meaningful human involvement can make a difference.
Case 1: No copyright for purely AI-generated work
In Thaler v. Perlmutter (affirmed on appeal, March 2025), an attempt to register a work created autonomously by an AI called “Creativity Machine” was rejected. The court reaffirmed that copyright requires human authorship.

Case 2: Hundreds of prompts still aren't enough
In Allen v. Perlmutter (currently in litigation), Jason Allen sought to register “Théâtre D'opéra Spatial,” created using over 600 Midjourney prompts. The U.S. Copyright Office required him to disclaim the AI-generated portions. Allen refused, and registration was denied. The case is pending in federal court in Colorado.

Case 3: Human-written text + curatorial choices can qualify
In Zarya of the Dawn (2023), AI-generated images in a graphic novel were denied individual copyright protection — but the author's original text and the creative “selection and arrangement” of images were recognized as a protectable compilation.

Case 4: Extensive inpainting can tip the scales
In A Single Piece of American Cheese (2025), Kent Keirsey, founder of Invoke AI, manually inpainted over 35 distinct regions of an AI-generated image — a technique where specific areas are selected and reworked individually. The Copyright Office determined that this level of human creative selection and arrangement was sufficient to warrant registration.

So, What Should Creators Do?
1. Document your creative process
Don't just save your prompts. Keep a record of every manual intervention — composition adjustments, retouching, combining elements — to demonstrate meaningful human authorship.
2. Think in terms of compilations
Even if individual AI-generated elements don't qualify for protection on their own, combining them into a new, coherent work through creative curation may still be protectable.
3. Be transparent about AI use
When registering a copyright, clearly distinguish AI-generated content from your own contributions — and emphasize the creative choices you made.
AI Safety Checklist
Once data has been fed into an AI system, it's virtually impossible to “untrain” it. Think before you share. And regardless of copyright or quality concerns, always review and revise AI outputs before putting them to use.
- **Audit your cloud sharing settings**: Google Drive: set sharing to "Restricted" under Share > General access. Slack: disable external sharing under Settings > Channel Management > Default Channels. Notion: review "Publish to web" and "Allow guests" under Settings > Security.
- **Review browser extension permissions**: Open chrome://extensions (Chrome) or edge://extensions (Edge). Remove any extensions with "Read and change all your data on websites" permissions, any that have been removed from the app store, or any that haven't been updated in over six months.
- **Mask sensitive information before inputting it**: Replace client names with "Company A/B," phone numbers with "XXX-XXX-XXXX," and API keys, tokens, or DB credentials with `<REDACTED>`. For bulk processing, consider open-source anonymization tools like Microsoft Presidio.
- **Disable model training in AI service settings**: ChatGPT: Settings > Data Controls > "Improve the model for everyone" → OFF. Gemini: My Activity > Gemini App Activity → pause saving. Claude: Pro plans don't use your data for training, but free plans may — check the Privacy Policy.
- **Default to temporary/incognito mode for sensitive work**: Use ChatGPT's Temporary Chat, Gemini's incognito mode, or Claude's auto-delete settings to avoid your inputs being stored or used for training.
- **Don't blindly copy-paste prompts from the internet**: Community and social media prompts may contain hidden prompt injection attacks — instructions designed to override system behavior, disguised using invisible Unicode characters or white text. Always read the full content before using any third-party prompt.
- **Never paste AI output directly into your final work**: AI systems can confidently produce inaccurate information (hallucinations). Always verify legal citations, statistics, quotes, and URLs against primary sources. A real-world cautionary tale: lawyers who cited fake AI-generated cases in court were sanctioned (Mata v. Avianca, 2023).
- **Don't ship AI-generated code without a security review**: AI code can contain hardcoded secrets, SQL injection vulnerabilities, or outdated library versions. Always run a code review or SAST (static analysis) scan before deploying to production.
- **Strip metadata from files before uploading**: PDFs, images, and Word documents may contain author names, GPS coordinates, and revision history. Remove this data via File Properties > Details > Remove Properties and Personal Information, or use a tool like ExifTool.
- **Verify the source and date of AI responses**: AI models have a training data cutoff and may not reflect current information. For anything involving law, taxes, healthcare, or security, always cross-check with official, up-to-date sources.
- **Never upload confidential documents to external AI services**: NDAs, draft financial statements, and unreleased product specs carry real leak risk the moment you enter them. For sensitive work, use enterprise AI deployments (e.g., Azure OpenAI, AWS Bedrock) or enterprise plans with data training opt-outs.
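The masking step in the checklist above can be sketched with a few regex substitutions. This is a minimal illustration only — the patterns below (phone numbers, API-key-style tokens) are assumptions about what your data looks like, and real PII detection should use a dedicated tool such as Microsoft Presidio:

```python
import re

def mask_sensitive(text: str) -> str:
    """Mask common sensitive patterns before pasting text into an AI service.

    A minimal regex sketch: catches US-style phone numbers and tokens that
    look like API keys (e.g. "sk-..."). Not a substitute for real PII tooling.
    """
    # Phone numbers in the form 555-123-4567 -> XXX-XXX-XXXX
    text = re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "XXX-XXX-XXXX", text)
    # API-key-like tokens (prefix + long alphanumeric tail) -> <REDACTED>
    text = re.sub(r"\b(?:sk|pk|key|token)[-_][A-Za-z0-9_-]{16,}\b",
                  "<REDACTED>", text)
    return text

print(mask_sensitive("Call 555-123-4567, key sk-abcdefghij0123456789"))
# → Call XXX-XXX-XXXX, key <REDACTED>
```

The point of masking before input (rather than relying on a service's privacy settings) is that once text enters a model's training pipeline it cannot realistically be removed, as the checklist notes.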
Sources & Further Reading
AI Regulations by Country
- South Korea AI Basic Act — 인공지능 발전과 신뢰 기반 조성 등에 관한 법률 (Act on the Development of Artificial Intelligence and the Establishment of a Foundation of Trust; effective Jan 2026)
- EU AI Act — Regulation (EU) 2024/1689
- Japan AI Promotion Act — AI の活用等の推進に関する法律 (Act on the Promotion of AI Utilization; effective Sep 2025) · FPF Overview · Japanese Government
- Japan Copyright Act Article 30-4 — 情報解析を目的とした著作物の利用 (use of copyrighted works for information analysis)
Copyright Cases Involving AI
- Thaler v. Perlmutter, No. 22-cv-01564 (D.D.C. 2023; appeal affirmed Mar 2025)
- Allen v. Perlmutter (D. Colo., pending)
- Zarya of the Dawn — Registration # VAu001480196 (USCO 2023)
- A Single Piece of American Cheese — Registration # VA0002427087 (USCO 2025)
- Mata v. Avianca, No. 22-cv-01461 (S.D.N.Y. 2023)