
Spotify’s New Rules: AI Music is Allowed but Spam and Deepfakes Will Be Cracked Down On

On AI News Today: Spotify's new rules target unauthorised AI voice replicas and deepfake content published without artists' consent.

Spotify says the platform is open to "authentic and responsibly created" AI music and doesn't want to punish musicians who use creative AI tools.

The aim is to stop "mass uploads, duplicates, SEO manipulations, artificially short tracks, and low-quality spam" that harm artists.

Artists are encouraged to disclose whether AI was used in their songs, though disclosure is not mandatory. They can do so using an industry metadata standard developed with DDEX, which covers credits for songwriting, mixing, vocals, and so on.

How Will Spotify’s New Rules Work?

Over the past year, Spotify removed more than 75 million spam recordings, many of which were generated by AI tools or were part of mass-upload schemes.

The company is deploying improved AI-powered spam filters and detection algorithms that can identify bogus artists and profile mismatches, and block spammers from collecting fraudulent royalties.

Spotify's algorithms won't recommend tracks flagged as spam or low quality, although those tracks will remain available to stream.

The Downside of Spotify’s New Rules 

Spotify's new rules crack down on "unauthorised AI voice replicas" and deepfake impersonations.

Spotify says artists must give permission before their voices can be used in AI projects. The company will remove or penalise content that doesn't meet these requirements.

The goal of Spotify's new rules is to strike a balance between artist originality, platform safety, and transparency.

The company wants to promote responsible use of AI while keeping harmful or deceptive content out of the music ecosystem.

Also read: AI Makes History: UK Creator Lands Record Deal With Fully AI-Generated Music 
