Qamaniq Records AI Policy
Qamaniq Records is a small, Inuit-led independent label. AI can help small operations like ours stay viable, or it can be used to displace the artists we exist to support. This page lays out where we draw the line.
What AI Will Never Touch
These are absolute, unconditional prohibitions. We cannot waive them.
Our artists' voices and likenesses. We will never use AI to generate, clone, simulate, or imitate the voice, image, likeness, or identity of any artist on the label — for any reason, including promotion.
Our artists' work as AI training data. Recordings, compositions, lyrics, metadata, and personal information from our artists will never be used to train, improve, or evaluate any AI system.
Traditional and cultural material. Katajjaq, qilaujjaq, pisiit, Inuktitut-language content, traditional compositions, and any material rooted in Inuit cultural traditions will never be submitted to or processed by any AI system. This is non-negotiable.
Visual art and design. Going forward, we will not use AI to generate album art, promotional graphics, video, or design work. We prioritize hiring Inuit and Indigenous artists for this work and are committed to keeping it that way.
Work by artists we hire. We will never train any AI model on the work of any artist, designer, or creator we commission.
Credit and royalties. AI will never receive authorship credit or any share of royalties on the work we release.
What We Use AI For
We use AI as an administrative tool — not a creative one.
Day-to-day operations. We run local AI models on our own hardware for most routine work. Data stays on our machines and is never sent to a third party. Local models handle grant writing, financial reporting, accounting, royalty calculations, contract and license preparation, legal document review, and catalogue management.
Bigger tasks. For work that needs more powerful models — web development, financial modelling, complex research, marketing strategy — we use Anthropic's Claude with maximum privacy settings. Anthropic's terms prohibit using input data for model training.
Creative-adjacent work. If we want to use AI for anything that touches an artist's creative output or public presence — drafting social media posts, processing audio, translating works, or producing tour materials — we ask for the artist's written approval first. They can say no, and they can revoke approval at any time.
Why We Care About Which AI We Use
Environment. Large AI models consume significant energy. Our local-first approach reduces our reliance on energy-intensive cloud services. Our long-term goal is to move from "local-first" to "local-only" as smaller models become more capable.
Ethics. We evaluate AI providers on their corporate conduct, not just their technology. We will stop using any provider whose partnerships or practices conflict with our values — for example, companies involved in autonomous weapons development or surveillance technology.
Accountability. We document our AI provider choices and the reasons behind any switch. This documentation is available on request.
Keeping This Current
AI changes fast. We review this policy whenever something material changes — new tools, new providers, new laws, or a provider doing something we disagree with.
For artists signed to the label, this policy is also incorporated into your contract as Schedule B. Either party can request a review at any time, and changes require both parties' written agreement.
Last updated: May 2026