What industry pros really want from AI: Takeaways from Confluence 2024

In October 2024, Water & Music hosted an intimate, 20-person workshop at Confluence in Charlotte, North Carolina, to explore how AI is being used in music creation and industry operations.

Ahead of the event, we ran an input survey to gauge attendee interest around different AI use cases. Contrary to many broader industry narratives, the No. 1 area participants wanted to learn about was music marketing, not purely creative music tasks like text-to-audio composition or stem separation. This suggests that for many music professionals, the most valuable AI tools might revolve around growing and sustaining an audience.

During the workshop, we showcased a variety of AI tools — from text-to-audio generators and stem-separation software to large-language models for data analysis and legal assistance — all through the lens of how they can best solve real pain points in the modern music business. The conversation also touched on critical topics like watermarking, data governance, voice cloning, and open-source AI tools.

Below is a recap of the conversation, highlighting the session’s major themes and takeaways.


Why marketing matters most

Our attendee survey revealed that many music professionals see AI as a pathway to addressing immediate behind-the-scenes business challenges: Audience growth, fan engagement, and ticket sales.

With budgets tightening and social media algorithms in flux, marketing can be an exhausting process, especially for independent artists or smaller teams without dedicated staff. Hence, contrary to broader industry narratives that focus heavily on creative tools like text-to-audio generators, our participants expressed even greater urgency around AI that drives tangible marketing ROI — for example, segmenting prospective fans or rapidly generating new social copy.

While AI’s novelty and impact in music production and songwriting are real, the most pressing pain point for many in the room was sustaining an audience in an increasingly volatile music landscape.


Generative audio and voice synthesis

Despite the stronger emphasis on marketing, we kicked off the workshop by exploring how text-to-audio platforms — like Suno and Udio — function behind the scenes. Participants were fascinated by the speed and quality of AI music generation across genres and styles. For many of them, these tools could theoretically augment their creative process, from background tracks for social-video content to rough musical “sketches” for further refinement.
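Suno and Udio don’t publish their internal pipelines, but open-source models give a feel for the basic text-to-audio workflow. Below is a minimal sketch using Meta’s MusicGen via the audiocraft library: an open-source analogue for illustration, not a representation of how either commercial platform works, and the checkpoint name and parameters should be checked against the library’s current documentation.

```python
# Minimal sketch: open-source text-to-audio with Meta's MusicGen (audiocraft).
# Illustrative analogue only; this is not how Suno or Udio work internally.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a small pretrained checkpoint (larger checkpoints trade speed for quality).
model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)  # seconds of audio per prompt

# Text prompts describing the desired music, e.g. rough "sketches" for later refinement.
prompts = ["warm lo-fi hip hop beat with soft piano, 80 bpm"]
wav = model.generate(prompts)  # returns a batch of waveforms as tensors

# Write each result to disk as a loudness-normalized audio file.
for i, one_wav in enumerate(wav):
    audio_write(f"sketch_{i}", one_wav.cpu(), model.sample_rate, strategy="loudness")
```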

However, several attendees raised concerns about the legal and ethical side of audio generation, especially with lawsuits pending around data sourcing and fair use. Does the training process infringe on copyrighted works? Will future court rulings restrict how these models can be deployed?

Voice cloning in particular stirred debate. If an artist’s voice is part of a public data set, is it automatically fair game for AI to replicate? Attendees pointed out that legal precedent for vocal biometrics remains murky, and that while such cloning can be a “creative add-on,” it raises serious questions about identity rights and potential misuse.


Stem separation and remixing

We next surveyed AI-driven stem-separation software such as AudioShake, along with several open-source alternatives accessible via platforms like Replicate.

By allowing users to isolate vocals, drums, or instrumentals in a finished track, these tools significantly reduce time spent on remixing and sampling.
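As a rough illustration of the open-source route, the sketch below calls a stem-separation model through Replicate’s Python client. The model reference and input fields are placeholders, not real identifiers; AudioShake and other hosted products have their own interfaces, and any model reference should be confirmed against Replicate’s catalog before use.

```python
# Rough sketch: running an open-source stem-separation model via Replicate.
# The model reference below is a placeholder; check Replicate's catalog for an
# actual separation model (e.g. a hosted Demucs variant) and its input schema.
import replicate

output = replicate.run(
    "some-user/some-demucs-model:version-hash",  # placeholder model reference
    input={"audio": open("finished_track.mp3", "rb")},
)

# Output format varies by model; many return links to the separated stems
# (vocals, drums, bass, other) that can be downloaded for remixing or sync pitches.
print(output)
```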

The conversation here moved beyond pure novelty toward real-world commercial utility, such as isolating specific vocal lines for a remix or removing instrumentation for a sync licensing pitch. Attendees with production or DJ backgrounds were particularly interested in how AI could help them to repurpose older songs or expand their creative options in live sets.

While these tools might not singlehandedly solve a marketing challenge, they feed into faster content creation — ultimately enhancing fan engagement by offering more versions and creative assets for audiences to discover.


Large-language models (LLMs) for data analysis and contracts

Turning to language-based AI, we demonstrated how models like ChatGPT, Claude, and Google Gemini can parse large volumes of text and data.

For instance, an independent artist could upload past tour data and ticket-sales spreadsheets, then prompt the model for audience insights — spotlighting local demographics, fan hotspots, or historical patterns in attendance. Attendees also saw how these models can quickly summarize and highlight key clauses in lengthy recording or licensing contracts.
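In practice, the spreadsheet workflow can be as simple as pasting tabular data into a prompt. The sketch below uses OpenAI’s Python SDK as one possible backend; the model name, file name, and prompt wording are assumptions, the same pattern works with Claude or Gemini, and the output still needs the human fact-checking discussed below.

```python
# Illustrative sketch: asking an LLM for audience insights from ticket-sales data.
# Model name and prompt are assumptions; swap in whichever provider/model you use.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Load past tour data as plain text (small files can be inlined directly in the prompt).
with open("ticket_sales.csv", "r", encoding="utf-8") as f:
    csv_text = f.read()

prompt = (
    "Here is a CSV of past tour dates and ticket sales:\n\n"
    f"{csv_text}\n\n"
    "Summarize audience insights: which cities over- or under-performed, "
    "any seasonal patterns in attendance, and where demand looks strongest."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use any current chat model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```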

Still, the consensus was that LLMs are not a replacement for legal counsel or a proper data science team, and that user oversight and a final review by a professional remain crucial. AI “hallucinations” or outdated training data mean the user must carefully fact-check any output.

Participants also worried about the privacy risks of uploading sensitive business documents to external servers, especially when open-source models are accessed through third-party hosting platforms rather than run locally. Enterprise or self-hosted solutions may mitigate some of these concerns, but they can be more complicated to set up than consumer-facing platforms.
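For teams with the technical capacity, one common self-hosted pattern is running an open-weight model locally so sensitive contracts or sales data never leave your own machine. The sketch below assumes a local Ollama installation with an open model already pulled; it is one option among many rather than a recommendation, and the model name is an assumption.

```python
# Sketch of a self-hosted approach: querying a locally running open model via Ollama,
# so sensitive documents stay on your own machine. Assumes the `ollama` package is
# installed and a model (here "llama3.1", an assumed name) has been pulled locally.
import ollama

with open("recording_contract.txt", "r", encoding="utf-8") as f:
    contract_text = f.read()

response = ollama.chat(
    model="llama3.1",
    messages=[{
        "role": "user",
        "content": "Summarize the key clauses in this contract, flagging anything "
                   "unusual around royalties, term length, or rights reversion:\n\n"
                   + contract_text,
    }],
)
print(response["message"]["content"])
```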

Ultimately, no single solution fits all — it depends on your budget, risk tolerance, technical capacity, and what you need most from AI (e.g. speed, fidelity, customizability, or data privacy).


Marketing use cases in action

Because the surveyed group was especially interested in marketing, the workshop featured multiple examples of AI-driven campaign strategies, such as segmenting prospective fan audiences and rapidly drafting social copy (sketched below).
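As one hedged illustration of the copy-drafting case, the snippet below asks a chat model for several tone variants of a single announcement, intended as raw drafts for a human editor rather than finished posts. The provider, model name, and prompt wording are all assumptions.

```python
# Illustrative sketch: drafting tone variants of a show announcement for human editing.
# Provider and model name are assumptions; the point is quick drafts, not final copy.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

announcement = "New single 'Night Drive' out Friday; hometown release show on Nov 22."
tones = ["playful", "understated", "urgent"]

for tone in tones:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{
            "role": "user",
            "content": f"Write a short {tone} Instagram caption announcing: {announcement}",
        }],
    )
    # Drafts only: a human should rewrite these so they sound like the artist, not a template.
    print(f"--- {tone} ---\n{response.choices[0].message.content}\n")
```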

Despite these potential gains, participants also noted the importance of authenticity. AI can produce large volumes of marketing copy, but over-automation runs the risk of sounding generic. A copy-and-paste approach risks alienating fans, who often respond best to unique, personal engagement.

As with generative audio, the technology’s real value lies in lowering the barrier to creative exploration — freeing up time and resources for more human touches.


Liability and rights

With AI’s ability to remix, reproduce, or even conjure entirely new vocal performances, many participants wondered about future takedown notices and lawsuits around content incorporating AI. Tools for watermarking or attributing AI-generated content remain in a nascent stage, and many felt unsure how future legislative changes or court rulings would shape the landscape.

Ultimately, the group favored a cautious approach: deploy AI-assisted content carefully, with an eye toward these legal gray areas, until clearer precedents emerge.


Looking ahead

By the end of the session, it was clear that while AI can benefit almost every facet of music creation and commerce, practical marketing solutions remain the top priority for immediate ROI.

This does not diminish the creative promise of generative audio, stem-separation, and voice-synthesis tools — rather, it underscores the broader challenge: Bridging art and commerce sustainably.

To achieve that balance, many participants emphasized:

  1. Deploying AI carefully and with an eye toward potential legal gray areas.
  2. Investing in prompt-engineering skills, especially for marketing campaigns.
  3. Staying informed on rapidly changing open-source and enterprise AI offerings.

As AI’s role in the music industry continues to evolve, Water & Music will keep tracking and testing new tools, pilot programs, and business models. If you want deeper technical dives or real-world case studies, consider joining our membership for exclusive research and webinar sessions.

Thank you to everyone who helped make this workshop possible. We hope it served as a glimpse into how AI can address the everyday hurdles of the music business, while still sparking new avenues of creative experimentation.


Revisit Water & Music’s previous research on music AI: