MADQs

  • At MAD, we believe that innovation and responsibility go hand in hand. As we explore the exciting possibilities of AI-powered content creation, we are guided by one simple principle:

    Challenge the status quo, but stay true to your values.

    AI enables us to imagine more boldly, experiment more freely and breathe new life into stories. However, as Uncle Ben says in Spider-Man, with great power comes great responsibility, and we must use technology thoughtfully. We are committed to upholding the highest ethical standards, protecting artistic integrity, confronting bias and ensuring creativity remains inclusive and human at its core.

    We view our work as an ongoing journey of constant learning, adaptation, and growth. By sharing knowledge with our partners and fostering an open culture, we ensure that every innovation is grounded in care, transparency and respect.

    For us, storytelling is not just brand communication; it's an invitation to create meaningful experiences. Using AI as a catalyst and ethics as our compass, we create narratives that inspire and connect while staying true to the diversity of the world around us.

  • As a creative production house, we are aware that our work has an impact on the environment. We know that we are part of the problem – and that’s exactly why we want to be part of the solution.

    We aim to make our daily operations as environmentally friendly as possible, choosing bicycles and public transportation over cars, and preferring train travel over more polluting alternatives whenever possible.

    Alongside this, we rely on renewable energy, lean and resource-efficient production processes, and our membership in 1% for the Planet to continually reduce our ecological footprint.

    Equally important to us is fostering a respectful and open working atmosphere. We are committed to equality, tolerance, and appreciation, creating an environment where everyone feels safe, seen, and heard.

    Diversity is our strength, and we aim to reflect this mindset in all our projects. Together, we take responsibility: for the environment, for our team, and for a sustainable future in our creative processes.

    On 2 August 2026, the transparency obligations of the EU's AI Act take effect, introducing a simple but powerful principle: if AI creates or manipulates content that could be mistaken for something genuinely human, it must be labeled as such. This aims to bring more transparency to the digital world and to make it harder for deepfakes and misleading AI-generated material to slip under the radar.

    What exactly needs to be labeled?
    In short: anything produced or altered by AI that appears convincingly real. This includes:

    • texts that read like polished human writing,

    • images depicting realistic people or scenes,

    • videos that seem authentic but are artificially constructed,

    • synthetic voices mimicking real individuals.

    The core objective is simple:
    To prevent people from being unintentionally misled by AI-generated content.

    Good news: Not everything requires a label. The regulation is strict, but it also acknowledges how humans and AI commonly work together.

    1. Human review removes the labeling requirement: If a human reviews, edits, approves, and takes responsibility for the final content, no AI label is required. In other words: As long as a real person is supervising the output – and it isn’t being auto-published without oversight – you are compliant.

    2. AI "assistance" is fine without labeling: If AI only helps with wording, spell-checking, translation, or structural suggestions, and the actual content originates from a human, no labeling is needed.

    3. Content used privately or internally: Material that never gets published publicly does not require any disclosure.

    4. Art, satire, and parody: These forms are still subject to transparency, but the labeling can be subtle – just enough for viewers to understand that artificial elements are involved, without compromising the artistic intent.

    Where do these rules apply?
    Essentially everywhere content is made publicly accessible:

    • websites and blogs,

    • newsletters,

    • social media platforms,

    • public presentations or documents,

    • advertisements,

    • voicebots and chatbots, which must explicitly disclose that users are interacting with an AI.

    Even websites outside the EU must comply if their content is targeting or used by people within the EU, for example, if the site is in an EU language or shows prices in euros.

    Other legal risks to keep in mind:
    Beyond labeling obligations, AI-generated content raises additional legal concerns:

    1. Copyright issues: AI output generally has no legal author. This means: AI-generated content is not automatically protected. Others may reuse similar AI-created material without restriction. Unintentional plagiarism may occur if training data was too similar to existing works.

    2. Personality rights and likeness protection: AI-generated images that resemble real individuals can infringe personality rights, even if the resemblance was accidental and the person is not explicitly named.

    3. Liability for incorrect AI-generated information: If an AI chatbot or system gives users wrong or misleading information, the operator, not the AI, is legally responsible. This has already been confirmed in international legal cases and is expected to apply similarly within the EU.

    In summary:
    Starting in August 2026, transparency becomes mandatory. If AI created it and it looks real, label it. But as long as humans remain in control, review the content, or only use AI for assistance, no disclosure is required.